Project Report

Computer Vision-Based Monitoring and Data Integration in a Multi-Trophic Controlled-Environment Agriculture Demonstrator

1 Acheron GmbH, Auf der Muggenburg 30, D-28217 Bremen, Germany
2 Institute for Automation and Applied Informatics (IAI), Karlsruhe Institute of Technology (KIT), Hermann-von-Helmholtz-Platz 1, D-76344 Eggenstein-Leopoldshafen, Germany
3 Institute for Biological Interfaces 1 (IBG-1), Karlsruhe Institute of Technology (KIT), Hermann-von-Helmholtz-Platz 1, D-76344 Eggenstein-Leopoldshafen, Germany
* Author to whom correspondence should be addressed.
Sustainability 2026, 18(6), 2700; https://doi.org/10.3390/su18062700
Submission received: 15 January 2026 / Revised: 1 March 2026 / Accepted: 5 March 2026 / Published: 10 March 2026
(This article belongs to the Special Issue Food Science and Engineering for Sustainability—2nd Edition)

Abstract

Controlled-environment agriculture (CEA) and circular production systems require coordinated monitoring of biological and physicochemical processes across trophic levels. This project report presents the implementation of a multi-trophic controlled-environment agriculture demonstrator that integrates computer-vision-based monitoring with established sensor infrastructure for aquaculture, poultry, plants, microalgae, duckweed, and insect modules. Stereo imaging and RGB-D systems are deployed for non-invasive quantification of fish biomass and plant growth, while continuous water-quality and environmental measurements (e.g., pH, dissolved oxygen, nitrate, ammonium, temperature, CO2) provide complementary process data. These data streams are synchronized within a shared database architecture to enable cross-module evaluation of nutrient dynamics, growth progression, and operational stability under real facility conditions. The implemented framework demonstrates how computer vision can extend conventional sensor-based monitoring by directly capturing biological performance indicators across aquatic, terrestrial, and microbial domains. While advanced predictive modeling and full digital twin simulation remain future development steps, the realized data-integration architecture establishes a structural foundation for the systematic evaluation of circular indoor food-production systems. The demonstrator illustrates how multimodal monitoring can support nutrient recirculation, transparency of biological variability, and data-driven assessment within controlled multi-trophic environments.

1. Introduction

In the face of climate change, water scarcity, and rising global food demand, agricultural systems must shift toward resource-efficient closed-loop processes [1]. Controlled-environment agriculture (CEA) in urban settings offers a particularly promising pathway, enabling minimal resource losses and stable yields under carefully regulated conditions [2,3,4]. At the same time, CEA remains limited by high energy use and costs, requiring transdisciplinary innovation that integrates life-cycle analysis, digitalization, flexible energy management, and engineered biological systems to enhance sustainability [5,6,7]. Key to sustainable systems are the recycling of nutrients, the recovery of water, and the stabilization of microbial processes that drive decomposition, nutrient mobilization, and resilience [8].
The idea of coupling animal and plant production has a long tradition, ranging from manure fertilization to integrated crop–livestock systems. Today, novel approaches such as aquaponics combine aquaculture and hydroponics to recycle fish excrement into nutrients for plants, while plants purify the water for fish [9,10]. Beyond plant–fish coupling, additional trophic interfaces enable the recovery of gaseous and dissolved emissions. Microalgae and other photosynthetic organisms can capture CO2 and NH3 from animal exhaust and convert them into valuable biomass [8]. In previous work, a photobioreactor was physically connected to a poultry housing unit to cultivate Arthrospira platensis (Spirulina) using exhaust air as a substrate, effectively purifying the air while producing high-yield algal biomass [11]. Similarly, duckweeds (family Lemnaceae) represent highly efficient nutrient sinks in aquatic environments. Among the fastest-growing angiosperms, they absorb nitrogen, phosphorus, and trace elements from residual process water, thereby improving water quality while generating protein-rich biomass suitable for feed or as a substrate for insect cultivation [12]. Insects such as black soldier fly larvae (BSFL) add a further trophic level by efficiently upcycling organic residues into protein-rich feed ingredients [13]. Together, these interconnected biological modules form the basis for multi-trophic circular production systems in which solid, liquid, and gaseous emissions are systematically valorized across trophic levels.
These elements converge in the vision of a highly integrated closed-loop CEA, schematically illustrated in Figure 1. Solid, liquid, and gaseous emissions from livestock are upgraded by microalgae, duckweed, and insects into high-value biomass that can be used as feed for aquaculture and terrestrial livestock. Aquaculture produces both food and residual products, which serve as fertilizer for vegetable farming under aquaponic principles. In turn, vegetables provide food for humans and residuals for animal feed. Powered by renewable energy, the recirculation system enables optimal reuse of nutrients and, above all, water, while minimizing environmentally harmful emissions.
Realizing such a complex system requires more than biological integration: it depends on advanced process control and comprehensive analytical technologies. Here, computer vision emerges as a key enabler. By providing non-invasive, real-time monitoring of plant and animal growth and health, computer vision supports machine-assisted control of closed-loop agriculture. Integrated with automation and digital manufacturing, these methods can stabilize community dynamics, anticipate perturbations, and increase the resilience of circular food production.
In this project report, we document the design, implementation, and integration of computer-vision-based monitoring systems within a multi-trophic circular indoor agriculture demonstrator. Building on the concept illustrated in Figure 1, we demonstrate the technical feasibility of coupling multi-trophic biomass production with advanced monitoring technologies under operational conditions. The presented system architecture illustrates how digital monitoring can be integrated into controlled-environment agriculture to support resource-efficient and resilient production processes.
The scope of this contribution is explicitly limited to controlled-environment agricultural systems, in which biological processes are operated within technologically regulated infrastructures. The framework presented here is not intended to replicate open-field agriculture but to enhance resilience, circularity, and resource efficiency in indoor or protected cultivation systems. Such controlled environments enable systematic integration of sensing, automation, and digital twin methodologies while decoupling production from external climatic variability.
The work presented in this manuscript is based on a demonstrator-scale circular indoor agriculture facility established to explore integrated closed-loop production concepts under operational conditions. Specific research components, particularly the integration of multiple trophic levels and the development of data-driven monitoring and fusion methodologies, were investigated within publicly funded research projects, such as UrbanAqua and Amigem [14,15]. These projects focus on implementing multi-trophic biomass production, combined with advanced digital monitoring and data integration strategies, in a demonstrator-scale facility.
The demonstrator facility is being developed as an iterative and staged innovation infrastructure, in which biological modules and digital monitoring components are progressively integrated, evaluated, and refined under operational conditions. Accordingly, this project report documents both the implemented system components realized to date and the architectural framework guiding their integration and further expansion.
Despite rapid advances in individual monitoring technologies, integrated frameworks that systematically couple computer vision, multimodal sensing, and digital twin methodologies across multiple biological modules within a unified circular production system remain underexplored. The central objective of the present project is, therefore, to implement and demonstrate such an integrated digital architecture, addressing the following goals:
(i) deployment of multimodal monitoring across aquaculture, poultry, plant production, algae, and insect modules;
(ii) establishment of a unified data infrastructure enabling hierarchical data fusion and digital twin integration;
(iii) exploration of system-level interoperability, scalability pathways, and modular retrofit strategies within controlled-environment settings.

2. General Aspects of Computer Vision

Human vision is an adaptive, context-driven system that has evolved to interpret complex and dynamic environments. It continuously combines sensory input from the retina with higher-level processes such as memory, expectation, and context [16,17]. This interaction between bottom-up perception and top-down cognition enables humans to recognize objects even under challenging conditions, such as poor illumination, occlusion, or distortion. Human vision integrates multiple cues—color, motion, depth, and texture—to extract meaning from scenes and infer relationships between objects. It is inherently robust, flexible, and guided by attention and feedback loops that link perception to action [18].
Computer vision (CV), in contrast, is a data-driven technological counterpart designed to replicate selected aspects of human perception. CV systems acquire optical information using digital sensors and process these data using mathematical models and machine-learning algorithms, particularly convolutional neural networks and transformer architectures [19]. While such models can perform extremely well in tasks like object detection, classification, or segmentation, they still lack the general understanding and interpretive flexibility characteristic of biological vision [20]. Figure 2 illustrates this contrast: while human perception relies on contextual reasoning and prior experience, computer vision systems convert raw sensor data into statistical predictions, typically without awareness of broader environmental meaning.
Human vision is also highly context-sensitive. Scene interpretation influences object recognition, and vice versa [21]. Depth perception in humans results from integrating binocular disparity with monocular cues such as motion parallax, occlusion, and texture gradients [18]. Most CV systems, however, process each image independently, without linking it to semantic context or prior knowledge. This limitation can reduce generalization and reliability when environmental conditions differ from those seen during training [20]. Figure 3 summarizes common strategies for visual interpretation in computer vision. (a) Area-based methods, such as semantic segmentation, classify every pixel in an image to create spatial maps of relevant regions—for instance, distinguishing foliage from background. (b) Object-based methods detect individual items using bounding boxes or keypoints, allowing both object identification and structural localization. These complementary techniques underpin most agricultural applications of computer vision, from growth monitoring to disease detection in CEA settings.
Another central challenge lies in depth estimation and three-dimensional reconstruction. Humans perceive depth by integrating geometric and contextual cues, whereas CV systems typically rely on geometric triangulation. Figure 4 shows the principle of stereo vision: two cameras capture an object from slightly different viewpoints, and the disparity between corresponding pixels allows the estimation of its three-dimensional position. While this approach yields accurate geometric reconstructions, it lacks the contextual understanding that humans apply to infer depth under uncertain or incomplete visual conditions [17]. Recent deep-learning models, such as RAFT-Stereo and monocular depth estimation networks, have advanced geometric reasoning, yet complex or dynamic scenes remain difficult to interpret reliably.
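The geometric core of stereo triangulation shown in Figure 4 can be stated compactly: for a rectified stereo pair, depth follows from the focal length, the camera baseline, and the pixel disparity. The following minimal sketch illustrates the relationship; the numerical values are illustrative, not calibrated parameters of any system described here.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair.

    focal_px:     focal length in pixels
    baseline_m:   distance between the two camera centers in metres
    disparity_px: horizontal pixel offset of the same point in both images
    """
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive for a visible point.")
    return focal_px * baseline_m / disparity_px

# Example: 800 px focal length, 6 cm baseline, 40 px disparity -> 1.2 m depth
z = depth_from_disparity(800.0, 0.06, 40.0)
```

The inverse relation between disparity and depth also explains why distant objects (small disparities) are reconstructed less accurately than nearby ones.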
To narrow the gap between human and artificial perception, current research increasingly integrates mechanisms inspired by cognitive processes. Recurrent and attention-based architectures emulate selective focus and iterative refinement during visual processing [22]. Multi-task learning allows simultaneous object detection, segmentation, and classification, resembling the integrative character of human perception [19]. Reinforcement learning introduces elements of active exploration, similar to human eye movement and attentional strategies [23]. Moreover, self-supervised and few-shot learning approaches help reduce dependence on large annotated datasets, improving adaptability to novel environments.
These developments move computer vision toward greater robustness and contextual awareness. Within sustainable food production systems, such capabilities enable non-invasive, real-time monitoring of biological processes and support adaptive control strategies in controlled agricultural environments. By enhancing perception, decision-making, and automation, computer vision forms a cornerstone of intelligent, resource-efficient circular agriculture.
Building on the closed-loop framework introduced in Figure 1, the following sections examine the implementation of computer vision across the individual trophic modules of the system. The focus lies on application-specific monitoring and control strategies within aquaculture, livestock production, plant cultivation, and biological upconversion.
We therefore present representative computer vision approaches for each major trophic level, beginning with aquaculture, followed by poultry production and vertical plant farming, and concluding with microalgae, duckweed, and insect bioconversion as key nutrient-recovery modules.

3. Computer Vision for Indoor Aquaculture

Aquaculture plays a central role in the closed-loop system shown in Figure 1, linking nutrient recovery, feed production, and protein generation. Monitoring fish health, growth, and behavior is essential for production efficiency, animal welfare, and environmental stability. Conventional monitoring relies on manual sampling, which may induce stress and limit temporal resolution. Recent advances in computer vision (CV) provide non-invasive alternatives for real-time monitoring in aquatic systems [24].
Deep-learning-based CV methods enable automated detection, tracking, and phenotypic analysis of fish under challenging conditions such as high stocking densities and dynamic lighting [25]. Core functional domains include detection, counting, behavior analysis, and health assessment [26]. Vision-based feeding analysis supports quantitative evaluation of feeding responses and feed management [27]. Three-dimensional (3D) stereo vision is particularly relevant for biomass estimation. By combining geometric reconstruction with learning-based segmentation, surface reconstruction and weight estimation can be achieved [28,29]. These methodological advances inform the implemented monitoring pipeline in the demonstrator.
Within our closed-loop demonstrator facility, CV is implemented in an integrated recirculating aquaculture module. The system combines stereo imaging, depth sensing, and water-quality analytics to support continuous biomass estimation and growth assessment under operational conditions. Commercial aquaculture environments remain challenging due to turbidity, occlusion, and illumination variability [24,25], and standardization across systems is limited [26]. In the present facility, CV-derived biomass metrics are synchronized with water-quality parameters within a shared data infrastructure to support operational monitoring.
Water-quality monitoring within the recirculating aquaculture system includes continuous measurement of pH, dissolved oxygen, and electrical conductivity (EC) at 1-min intervals using integrated in-line sensors. Additional chemical parameters, including nitrate (NO3), ammonium (NH4+), chlorine, and alkalinity, are assessed using colorimetric and photometric test systems (e.g., Hanna Instruments) at a sampling frequency of once per day. This interval reflects the comparatively slow response dynamics of the RAS configuration. Water exchange rates and feeding rates are documented and evaluated in relation to image-based growth curves derived from stereo vision analysis. Nutrient concentrations are recorded in parallel with stereo-derived biomass metrics to document growth dynamics under operational conditions. Automation of sampling procedures and the integration of ion-selective electrodes for extended temporal resolution are currently under development.
Within this setup, fish populations are continuously observed to estimate biomass and track growth under operational conditions. Figure 5 illustrates the implemented workflow for live fish weight estimation, from underwater image acquisition to length and weight computation.
Figure 5. System pipeline for live fish weight estimation in aquaculture. RGB images and depth maps are captured using a stereo camera. Synchronized streams are transmitted to an edge device for preprocessing and an inference server for main processing. A custom keypoint detection model identifies anatomical landmarks, which are combined with depth data for 3D length and weight estimation. Detailed inference and aggregation stages are illustrated in Figure 6 and Figure 7.
Images are acquired using a consumer-grade stereo camera in a waterproof housing, with intrinsic camera parameters calibrated in submerged conditions using a chessboard target to correct for underwater refraction effects. Synchronized RGB and depth streams are transmitted via USB to a local edge device for preprocessing and subsequently via Ethernet to an inference server, where lengths and weights are estimated in real time, aggregated, and transferred to a cloud database.
Figure 6 provides a detailed view of the inference sequence used to generate validated length and weight estimates for every detection. In a two-stage inference pipeline, an object detection model first locates all visible fish using bounding boxes; a keypoint detection model, trained on a dataset spanning multiple growth stages, then identifies anatomical landmarks such as the nose, tail base, tail tip, and eye of each individual. Combining two-dimensional keypoints with corresponding depth data and camera parameters enables three-dimensional reconstruction and length estimation, and results are validated for detection confidence and geometric plausibility.
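The 2D-to-3D step described above reduces to a pinhole back-projection of each keypoint with its metric depth, followed by a Euclidean distance between landmarks. A minimal sketch follows; the intrinsic parameters and keypoint coordinates are hypothetical, not the calibrated values of the demonstrator camera.

```python
import math

def backproject(u: float, v: float, depth: float,
                fx: float, fy: float, cx: float, cy: float) -> tuple:
    """Pinhole back-projection of pixel (u, v) with metric depth to a 3D point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Hypothetical intrinsics (pixels) and two landmarks on the same fish
fx = fy = 700.0
cx, cy = 640.0, 360.0
nose = backproject(500.0, 360.0, 0.8, fx, fy, cx, cy)  # 0.8 m from camera
tail = backproject(780.0, 360.0, 0.8, fx, fy, cx, cy)

body_len = math.dist(nose, tail)  # nose-to-tail length in metres
```

In practice the depth values at the two keypoints differ, which is precisely why back-projecting each landmark individually is preferable to scaling a 2D pixel distance.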
Detected individuals are assigned unique identifiers and tracked across frames, allowing the system to reconstruct multiple views of the same fish over time. Length estimates are averaged across these multiple views to mitigate minor keypoint localization errors, depth noise, and body curvature effects during swimming. Statistical outliers are automatically detected and removed before averaging, further improving robustness. Individual weight is then derived from length using an allometric growth formula. The validated measurements are subsequently aggregated to produce weight-distribution histograms that form the basis of the growth curves presented in Figure 8.
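The aggregation step, z-score outlier rejection over repeated views of one tracked individual followed by an allometric length-to-weight conversion, can be sketched as below. The threshold and the coefficients a and b are illustrative placeholders, not the species-specific values used in the facility.

```python
import statistics

def robust_mean_length(lengths: list, z_thresh: float = 1.5) -> float:
    """Average repeated per-view length estimates after z-score outlier removal."""
    mu = statistics.mean(lengths)
    sd = statistics.pstdev(lengths)
    if sd == 0:
        return mu
    kept = [x for x in lengths if abs(x - mu) / sd <= z_thresh]
    return statistics.mean(kept)

def weight_from_length(length_cm: float, a: float = 0.01, b: float = 3.0) -> float:
    """Allometric conversion W = a * L**b (a, b are illustrative placeholders)."""
    return a * length_cm ** b

# Repeated views of one tracked fish; the last estimate is a spurious detection
views_cm = [24.1, 23.8, 24.3, 24.0, 31.5]
mean_len = robust_mean_length(views_cm)   # outlier removed before averaging
weight_g = weight_from_length(mean_len)
```

Averaging after outlier rejection suppresses keypoint noise and body-curvature effects without letting a single bad view skew the individual's estimate.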
Figure 6. Inference pipeline for estimating fish weight distribution. Synchronized RGB and depth streams are processed through an object detection model, producing bounding boxes, followed by a keypoint detection model identifying anatomical landmarks for 3D reconstruction. Fish are tracked across frames, and validated length and weight estimates are aggregated into weight distributions.
A representative example for validating the length-estimation pipeline on a single isolated fish is shown in Figure 7, demonstrating correct 3D reconstruction under idealized visibility conditions before applying the method at the population scale. The figure highlights the sequential reconstruction steps from RGB and depth acquisition to three-dimensional geometry extraction. The application of this approach to entire fish populations is illustrated in Figure 8, showing growth curves for three cultivation systems over a 20-week monitoring period.
Figure 7. Verification of length estimation for a single, isolated fish. The figure visualizes the processing pipeline used for length estimation. (a) Original RGB image with annotated key points and bounding box. (b) Corresponding depth map. (c) Reconstructed 3D coordinates of the fish. (d) Denoised depth values along the fish body in both 3D and 2D space.
Across all populations, standard deviations and residual errors increased with fish size and system density. This trend reflects the cumulative effects of higher feeding rates, reduced filtration efficiency, and overlapping fish, which together degrade image quality and depth accuracy. A mild systematic negative bias in length estimation is present throughout all cultivation stages. This bias originates partly from residual camera-calibration inaccuracies and partly from the geometric properties of keypoint-based length extraction: fish may appear curved during swimming, yielding shorter reconstructed lengths, whereas overestimation is geometrically impossible. Furthermore, keypoints are filtered based on depth consistency, preventing detections outside the body contour and thereby reinforcing this negative bias. Environmental influences such as biofilm accumulation on the camera lens and variable illumination conditions further affect length estimation accuracy. In later cultivation stages, these effects are compounded by increased turbidity and frequent occlusions, which introduce an additional sampling bias toward smaller and more visible individuals. As illustrated in Figure 8, from week 16 onward, not enough valid automatic measurements could be obtained for the RASF1 population because of excessive turbidity and occlusions. Because weight is derived via a non-linear allometric growth equation, small deviations in length propagate disproportionately into the final weight estimates, contributing to the divergence between automatic and manual measurements visible in the later cultivation stages of Figure 8. Despite these limitations, the combined use of visual and reference measurements provided a robust basis for modeling population-level growth dynamics and validating the performance of the inference system under real farming conditions.
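The disproportionate propagation of length errors into weight estimates follows directly from the allometric form W = aL^b: to first order, the relative weight error is b times the relative length error. A minimal illustration (the exponent b = 3.0 is a generic placeholder, not the fitted species value):

```python
def relative_weight_error(rel_length_error: float, b: float = 3.0) -> float:
    """First-order error propagation for W = a * L**b: dW/W ≈ b * dL/L."""
    return b * rel_length_error

# A 3% underestimate in length becomes roughly a 9% underestimate in weight
err = relative_weight_error(0.03)
```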
Figure 8. Growth curves of three fish populations cultivated in individual recirculating aquaculture systems (RAS1–3) over a 20-week monitoring period. Automated computer-vision-based measurements (solid lines) are compared with periodic manual reference measurements (symbols).
The integration of computer vision into the aquaculture module demonstrates the feasibility of real-time monitoring within the circular demonstrator system. In the implemented setup, visual biomass metrics are synchronized with water-chemistry measurements and other analytical parameters to assess growth dynamics and system stability under operational conditions. Combined image-based and sensor-derived data are analyzed to identify deviations in feeding patterns, growth progression, and water-quality indicators, supporting data-informed management decisions within the facility.
At the same time, the findings highlight current limitations of optical monitoring. Increasing biomass density and turbidity, as observed in the RASF1 population, can compromise visibility and model accuracy. These challenges emphasize the importance of integrating complementary data sources such as dissolved oxygen, pH, redox potential, and nutrient concentrations. Together with computer vision, these measurements provide the basis for developing digital twins that continuously simulate and optimize system performance.
The experience gained from the aquaculture module illustrates how multimodal sensing, computer vision, and machine learning can jointly improve circularity and resilience in food production. Although implemented in a recirculating aquaculture research environment, the described monitoring pipeline can be adapted to commercial RAS facilities by adjusting camera positioning and recalibrating species-specific growth models. Building on this foundation, the following section focuses on terrestrial livestock, using poultry as an example to demonstrate how vision-based monitoring supports welfare management, emission reduction, and nutrient recycling in a controlled environment.

4. Computer Vision for Livestock Production (Poultry)

Computer vision (CV) has become an established component of precision livestock farming, enabling non-invasive monitoring in controlled production environments. In cattle, swine, and sheep systems, CV is widely applied for behavior recognition, lameness detection, and weight estimation under comparatively stable conditions [30,31]. Poultry farming, however, presents substantially greater technical challenges. Chickens are smaller, move more erratically, and are housed in large groups with high stocking densities and frequent occlusions, complicating reliable detection and tracking [32]. Variable lighting, airborne dust, and reflective surfaces further reduce image quality under commercial conditions.
Within our circular indoor agriculture demonstrator, we implemented a computer-vision-based poultry monitoring system for continuous behavioral observation under controlled housing conditions. The system integrates RGB image acquisition, object detection, multi-frame tracking, and behavioral feature extraction, as summarized in Figure 9.
Deep-learning-based detection models, including YOLO architectures, are commonly used for individual bird identification and activity analysis in dense flocks [33,34,35]. In the implemented system, a YOLO-based model detects individual birds in consecutive frames, enabling trajectory reconstruction. From these trajectories, intra-object metrics such as movement speed and total distance traveled are computed, as well as inter-object metrics including clustering behavior and nearest-neighbor distances.
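The intra- and inter-object metrics named above reduce to simple geometry over per-frame centroids. A minimal sketch follows, with hypothetical pixel coordinates and frame rate; in the real pipeline these would come from the YOLO detections and the camera's calibrated ground-plane scale.

```python
import math

def track_metrics(track: list, fps: float = 5.0) -> tuple:
    """Total distance travelled and mean speed from a per-frame centroid track."""
    dist = sum(math.dist(a, b) for a, b in zip(track, track[1:]))
    duration = (len(track) - 1) / fps
    return dist, (dist / duration if duration > 0 else 0.0)

def nearest_neighbour_distances(centroids: list) -> list:
    """For each bird, distance to its closest flock mate within one frame."""
    return [min(math.dist(c, o) for j, o in enumerate(centroids) if j != i)
            for i, c in enumerate(centroids)]

# Three consecutive frames of one bird (pixel coordinates)
track = [(0, 0), (3, 4), (6, 8)]
total, speed = track_metrics(track, fps=5.0)  # 10 px over 0.4 s
```

Small nearest-neighbour distances across many birds indicate clustering, which is one of the inter-object indicators evaluated against the environmental records.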
A dedicated feeding-detection module identifies feeding events based on spatial proximity to defined feeding zones. Environmental parameters within the housing unit, including temperature, CO2, and ammonia concentrations [11], are recorded and evaluated alongside behavioral metrics. Figure 9 illustrates the implemented processing pipeline and representative RGB images recorded under operational farming conditions.
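Feeding detection by spatial proximity can be sketched as a zone test plus a minimum-dwell rule, so that a bird merely passing the feeder does not register as a feeding event. The zone geometry, dwell threshold, and coordinates below are hypothetical.

```python
def in_feeding_zone(centroid: tuple, zone: tuple, margin: float = 0.0) -> bool:
    """Axis-aligned zone test; zone = (x_min, y_min, x_max, y_max) in pixels."""
    x, y = centroid
    x0, y0, x1, y1 = zone
    return (x0 - margin) <= x <= (x1 + margin) and (y0 - margin) <= y <= (y1 + margin)

def feeding_events(track: list, zone: tuple, min_frames: int = 3) -> int:
    """Count visits where a bird stays inside the zone for >= min_frames frames."""
    events, run = 0, 0
    for c in track:
        if in_feeding_zone(c, zone):
            run += 1
        else:
            if run >= min_frames:
                events += 1
            run = 0
    if run >= min_frames:  # visit still ongoing at end of observation window
        events += 1
    return events

zone = (100, 100, 200, 150)  # hypothetical feeder bounding box
track = [(90, 120), (120, 120), (130, 125), (140, 130), (210, 120)]
n = feeding_events(track, zone)  # one sustained three-frame visit
```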
The YOLO-based detection model was trained and evaluated on manually annotated image datasets collected within the demonstrator facility. Individual birds were labeled using bounding boxes under varying lighting conditions and stocking densities to ensure robustness across operational scenarios. Data augmentation techniques, including horizontal flipping, brightness variation, and scaling, were applied to improve generalization performance. Object detection combined with multi-frame tracking enables reconstruction of movement trajectories over defined observation intervals. From these trajectories, baseline activity distributions are established under stable housing conditions. Deviations from these baseline patterns, such as sustained increases or decreases in locomotor activity or altered feeding frequency, are flagged for further inspection. In practice, identified deviations are evaluated in relation to recorded environmental parameters (temperature, CO2, ammonia) and management variables to support operational welfare assessment within the facility.
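Flagging deviations from baseline activity distributions can be sketched as a simple z-score test against the activity values recorded under stable housing conditions; the baseline numbers and threshold below are illustrative, not facility data.

```python
import statistics

def flag_activity_deviation(baseline: list, current: float,
                            z_thresh: float = 3.0) -> tuple:
    """Flag a new activity value that departs from the baseline distribution.

    Returns (flagged, z) where z is the standardized deviation of `current`.
    """
    mu = statistics.mean(baseline)
    sd = statistics.stdev(baseline)  # sample standard deviation
    z = (current - mu) / sd
    return abs(z) > z_thresh, z

# Illustrative daily mean distances travelled per bird (arbitrary units)
baseline = [120, 118, 125, 122, 119, 121]
flagged, z = flag_activity_deviation(baseline, 95.0)  # marked drop in activity
```

Flagged values are not interpreted in isolation; as described above, they are cross-checked against temperature, CO2, ammonia, and management records.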
To interpret behavioral metrics within a welfare-oriented perspective, established assessment frameworks provide important reference points. The Animal Welfare Indicators (AWIN) framework is internationally recognized for structured evaluation of behavioral and environmental welfare criteria. AWIN assessments are typically performed manually through visual inspection. Several behavioral parameters extracted in the present monitoring system (e.g., activity levels, clustering behavior, feeding frequency) correspond to observable indicators defined within such welfare frameworks. Physiological indicators, including hormone-based stress markers, are rarely measured in commercial practice due to their invasive nature. Emerging research has investigated epigenetic biomarkers, such as DNA methylation signatures, as potential complementary indicators of long-term stress responses [36]. Although molecular indicators are not part of the current demonstrator implementation, they illustrate future pathways for linking automated behavioral monitoring with broader welfare analytics.
Within the framework of circular agriculture, poultry monitoring is also relevant for evaluating feed strategies and nutrient-cycle integration. Automated observation of feeding behavior supports assessment of alternative protein sources, such as insect- or algae-based feeds, by quantifying acceptance rates and behavioral responses. The combined analysis of behavioral and environmental parameters contributes to operational welfare monitoring within the integrated multi-trophic system.

5. Computer Vision for Vertical Plant Farming

CV is widely applied in vertical and greenhouse farming for non-invasive growth monitoring under controlled environmental conditions [37,38]. Image-based phenotyping supports quantification of plant morphology and development, enabling data-driven cultivation strategies. Recent advances include RGB-D-based biomass estimation and machine-learning-assisted segmentation approaches for growth analysis [39,40,41].
In our demonstrator system, CV is integrated into aquaponic plant production units as part of the multi-trophic circular architecture. Growth metrics derived from image data are evaluated alongside environmental and nutrient measurements to document cultivation performance under operational conditions. Advanced imaging modalities such as hyperspectral systems can provide detailed physiological information in specialized research environments [42]. However, the present demonstrator focuses on RGB-D-based morphological monitoring as a cost-efficient and operationally robust solution.
Figure 10 illustrates the implemented monitoring setup for leafy-green production across three aquaponic units. A mobile RGB-D camera mounted on a belt-driven linear rail traverses multiple cultivation beds and captures synchronized color and depth maps of the plant canopy. Machine-learning-based segmentation distinguishes plant regions from background and extracts quantitative parameters such as visible plant area and canopy height profiles.
Segmentation is performed using a lightweight statistical model based on a support vector machine (SVM). The model is trained using manually selected representative pixels from plant and background regions and can be adapted within minutes to new crop types or lighting conditions. Unlike deep-learning approaches, this method requires only limited training data and is suited to research environments with frequently changing cultivation parameters.
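A minimal sketch of this pixel-level SVM approach is given below. The training pixels, RGB values, and class layout are invented for illustration; the deployed model additionally uses texture context and is trained on operator-selected regions rather than hard-coded samples:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical hand-picked training pixels (RGB, 0-255): a few plant
# (green-dominant) and background (grey substrate) samples.
plant_px = [[40, 120, 35], [55, 140, 50], [30, 110, 40], [60, 150, 55]]
bg_px = [[120, 115, 110], [90, 88, 85], [150, 148, 145], [70, 70, 72]]

X = np.array(plant_px + bg_px, dtype=float) / 255.0
y = np.array([1] * 4 + [0] * 4)
clf = SVC(kernel="linear").fit(X, y)

def segment(image):
    """Label every pixel of an HxWx3 RGB image as plant (1) or background (0)."""
    h, w, _ = image.shape
    return clf.predict(image.reshape(-1, 3) / 255.0).reshape(h, w)

img = np.array([[[45, 130, 40], [100, 98, 95]],
                [[130, 128, 125], [50, 135, 45]]], dtype=np.uint8)
mask = segment(img)
plant_area_fraction = float(mask.mean())  # visible plant area per image
```

Because only a handful of labeled pixels are needed, retraining for a new crop or lighting setup takes minutes, which is the central advantage over deep-learning segmentation noted above.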
The derived growth metrics are integrated into the facility’s data infrastructure and evaluated in relation to light intensity, temperature, CO2 concentration, and nutrient composition. Concepts of data integration and digital-twin-based system modeling described in the recent literature [43,44] inform the architectural design of the monitoring framework, although full predictive simulation is not the focus of the current implementation. Figure 10c exemplifies the temporal development of visible leaf area and average canopy height over a 16-day monitoring period. These measurements support comparative assessment of cultivation units and documentation of nutrient-management strategies within the aquaponic system.
The lightweight statistical segmentation approach demonstrates that effective growth monitoring can be achieved without computationally intensive deep-learning architectures or large annotated datasets. Fast-trainable models based on a limited number of manually selected examples provide a practical solution for research environments with frequently changing crop varieties and cultivation parameters. The modular imaging setup allows adjustment of camera positioning and rapid retraining of the segmentation model when cultivation conditions are modified within the facility.

6. Integration of Data Streams and System Architecture in the Aquaponic Modules

In aquaponic systems, plant growth, fish performance, and water chemistry are intrinsically coupled through shared nutrient and energy flows. Continuous monitoring of water-quality parameters such as pH, dissolved oxygen, ammonia, nitrate, temperature, and turbidity is an established practice in recirculating aquaculture systems and represents the current industry standard for operational control. Similarly, environmental parameters in plant production modules, including light intensity, CO2 concentration, humidity, and temperature, are routinely recorded in controlled-environment agriculture.
Within the demonstrator facility, these established sensor streams are integrated with image-derived metrics from computer-vision-based fish and plant monitoring. Stereo-derived fish biomass estimates and RGB-D-based plant growth metrics are time-synchronized with water-quality, nutrient, feeding, and environmental measurements in a shared database infrastructure. This unified data architecture enables cross-module evaluation of nutrient dynamics, growth progression, and system stability under operational conditions.
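The time synchronization described above can be sketched with a standard as-of join. The table layout, column names, and the 30-minute tolerance are illustrative assumptions, not the demonstrator's actual schema:

```python
import pandas as pd

# Hypothetical streams: image-derived biomass estimates (sparse, irregular)
# and water-quality readings (dense, regular). Column names are illustrative.
biomass = pd.DataFrame({
    "ts": pd.to_datetime(["2025-06-01 08:03", "2025-06-01 12:17"]),
    "fish_biomass_kg": [41.2, 41.9],
})
water = pd.DataFrame({
    "ts": pd.to_datetime(["2025-06-01 08:00", "2025-06-01 10:00", "2025-06-01 12:00"]),
    "ph": [7.1, 7.0, 6.9],
    "nitrate_mg_l": [38.0, 39.5, 41.0],
})

# Attach the most recent water-quality record (within 30 min) to each
# image-derived biomass estimate.
merged = pd.merge_asof(
    biomass.sort_values("ts"), water.sort_values("ts"),
    on="ts", direction="backward", tolerance=pd.Timedelta("30min"),
)
```

The same pattern generalizes to any pair of asynchronous module streams once all records share a unified timestamp convention.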
The structural design of this data framework is informed by digital-twin concepts described in the recent literature [45,46,47,48,49,50,51], in which physical production systems are mirrored by synchronized digital representations. In the present demonstrator, however, the implementation focuses on systematic data acquisition, harmonization, and comparative analysis rather than predictive simulation or automated control.
By incorporating computer vision into an already sensor-rich aquaponic infrastructure, the system extends conventional parameter monitoring to include direct biological performance indicators. This integration allows documented linkage between nutrient availability, environmental conditions, fish growth, and plant development across trophic levels. While full digital-twin simulation and model-based optimization remain future development steps, the realized data architecture establishes the structural foundation for such approaches.

7. Upconversion Modules for Nutrient and Biomass Recovery

Closed-loop agriculture (Figure 1) relies on biological upconversion processes that transform solid, liquid, and gaseous by-products into new sources of biomass and nutrients. These processes tighten nitrogen and phosphorus cycles, enhance carbon fixation, and minimize waste within the overall food-production system. In our framework, three complementary modules fulfill this role: microalgae that assimilate inorganic nutrients and capture carbon dioxide, duckweed that rapidly converts dissolved nitrogen into protein-rich biomass, and insects that upcycle organic residues into valuable feed components. Together, these organisms act as biological recyclers, converting residual material into new chemical energy stored in organic matter. This energy is retained in the form of proteins, lipids, and carbohydrates that re-enter the production cycle, rather than being released as electricity or heat.
In this context, the term energy recovery refers not to thermochemical energy generation, such as biogas or combustion, but to the retention of chemical energy bound in biomass for reuse within the trophic network. The focus is therefore on nutrient and biomass recovery: converting waste streams into biologically useful and energetically valuable material. To achieve this efficiently, each subsystem must be tightly monitored and controlled, linking visual, chemical, and physical data within a shared analytical and control framework.
As outlined in Section 6, recent work on data-driven aquaponics and controlled-environment systems has demonstrated that time-series modeling and IoT-based sensing can predict key water-quality variables, including dissolved oxygen, pH, ammonia, nitrate, temperature, and turbidity. These predictive tools enable proactive management actions such as adjusting aeration, recirculation, or nutrient dosing before imbalances propagate through the coupled fish–plant–microbe ecosystem [41,49,50]. Nutrient management models that integrate feature selection and machine learning, including XGBoost and random forest, identify the most influential predictors and automate supplementation, while plant spectral analytics directly quantify nitrogen, phosphorus, and potassium uptake [45,46]. Climate-adaptive simulations further improve stability and energy efficiency under variable environmental conditions [51].
Building on these foundations, the upconversion modules in our circular research facility combine computer vision with analytical sensing to manage biomass production in real time. Vision systems provide non-invasive assessments of growth, morphology, and system health, while digital-twin models fuse these observations with environmental and chemical data to regulate conversion rates, residence times, and harvest schedules. Following the established distinction between monitoring twins, which synchronize digital and physical systems for situational awareness, and predictive twins, which simulate future states and guide proactive decisions [47,48], each module contributes to a harmonized feedback architecture for sustainable, data-driven nutrient and biomass recovery.
The following sections present representative implementations for microalgae, duckweed, and insect bioconversion, highlighting their roles as interconnected trophic levels within the larger closed-loop food system.

7.1. Microalgae Cultivation

Microalgae serve as efficient converters of gaseous and dissolved emissions into high-value biomass. In controlled circular agriculture, they assimilate carbon dioxide and ammonia, thereby linking emission reduction with resource recovery. Their biomass retains chemical energy in the form of proteins, lipids, and carbohydrates that can be reintroduced into the production cycle as feed or fertilizer. Recent work highlights that integrating microalgae into recirculating aquaculture systems can further enhance nutrient recovery, oxygenation, and effluent valorization, providing environmentally and economically viable pathways for upgrading all major waste streams [52]. In our research facility, a cone-shaped helical photobioreactor was developed for the cultivation of Arthrospira sp. (Spirulina) directly coupled to the exhaust air of a poultry house [11]. The reactor design allows continuous gas exchange between the livestock unit and the algal culture, converting CO2 and NH3 from the exhaust into biomass while purifying the air.
Computer vision supports the process by providing non-invasive, real-time information on culture morphology and growth dynamics (Figure 11). Microscopic imaging is used to distinguish helical and straight Arthrospira filaments and to identify co-occurring microorganisms such as ciliates or diatoms. Beyond their implications for harvesting, the relative abundance of the two Arthrospira phenotypes can serve as an indicator of physiological status, as a predominance of straight filaments has been associated with environmental stress and altered metabolic activity [53].
To illustrate this relationship, Figure 11 presents an example workflow in which microscopic images are segmented into “spiral” and “straight” Arthrospira classes. The relative abundances of these morphological types are tracked over time to identify shifts in filament structure that may indicate physiological stress or altered metabolic activity. Automated classification thereby supports continuous process monitoring within the integrated production system.
Automated image analysis quantifies filament morphology, aggregation patterns, and the presence of co-occurring microorganisms, complementing routine measurements such as optical density and pH. Variations in the relative proportions of straight and helical filaments are documented over time and evaluated in relation to operational parameters of the photobioreactor, including gas supply, illumination, and mixing conditions. These measurements are stored within the facility’s data infrastructure, where visual and chemical data streams are synchronized for comparative analysis.
Within this integrated monitoring framework, the microalgae module functions as a carbon sink and nutrient recycler within the circular system. The produced Arthrospira biomass captures chemical energy from exhaust gases and provides a protein-rich feed resource, contributing to nutrient recirculation within the demonstrator facility.

7.2. Duckweed Systems

Duckweeds (family Lemnaceae) are among the fastest-growing angiosperms and play an important role in nutrient recovery from water streams [12]. Due to their rapid clonal propagation and high protein content, they represent an efficient alternative biomass source for feed and fertilizer applications. When integrated into circular agricultural systems, duckweeds absorb nitrogen, phosphorus, and trace elements from residual process water, thereby improving water quality while generating protein-rich biomass suitable for feed or as substrate for insect cultivation [12].
Automated monitoring of duckweed growth is essential for stable operation and continuous process control. Manual sampling is labor-intensive and unsuitable for high-frequency management. Recent studies have demonstrated the feasibility of image-based growth quantification and automated phenotyping using microscopy-based imaging systems, deep-learning segmentation (e.g., StarDist), and modular laboratory automation platforms [54,55,56]. Modular vertical farming concepts with automated nutrient dosing and recirculation further highlight the scalability of duckweed biomass production [12].
Building on these developments, our facility integrates duckweed cultivation into the closed-loop production system as a nutrient sink downstream of aquaculture and livestock modules. An automated monitoring setup combines computer vision with controlled environmental parameters to stabilize biomass output and support continuous water-quality management.
The monitoring workflow is shown in Figure 12, which illustrates a lightweight CV pipeline designed for robust daily operation. The system uses an RGB camera mounted on a vertical, spindle-driven axis to capture angled images of the cultivation trays, with the capture angle chosen to maximize the visible water surface while minimizing the vertical spacing between shelves. Captured images are orthorectified to ensure consistent spatial scaling before further analysis. Instead of pixel-accurate deep-learning segmentation, the system employs a lightweight statistical regression model trained on a limited set of biomass-derived surface-coverage reference values. Because the model operates on global color and texture descriptors rather than pixel-level annotations, it requires minimal manual labeling and can be retrained rapidly to accommodate different Lemna species or variable illumination conditions.
Surface-coverage data are continuously analyzed, triggering automated overflow harvesting once a defined threshold is exceeded. By linking visual monitoring, water analytics, and automated control, the duckweed module functions as an active interface between aquatic and terrestrial production lines. It converts dissolved nutrients into plant biomass, stabilizes water quality for recirculation, and provides a renewable protein resource within the integrated circular system.
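The coverage-regression and harvest-trigger logic can be sketched as follows. The global color descriptor, the reference coverage values, and the 80% threshold are illustrative assumptions; the deployed model uses richer color and texture descriptors calibrated against biomass measurements:

```python
import numpy as np

def green_fraction(image):
    """Global descriptor: fraction of pixels in which green dominates red and blue."""
    img = image.astype(float)
    g_dom = (img[..., 1] > img[..., 0]) & (img[..., 1] > img[..., 2])
    return float(g_dom.mean())

# Least-squares line from descriptor to reference surface coverage
# (reference values are invented here, e.g. from weighed manual harvests).
desc_ref = np.array([0.10, 0.35, 0.60, 0.85])
cover_ref = np.array([0.12, 0.38, 0.61, 0.88])
slope, intercept = np.polyfit(desc_ref, cover_ref, 1)

def estimate_coverage(image):
    return float(np.clip(slope * green_fraction(image) + intercept, 0.0, 1.0))

def should_harvest(image, threshold=0.80):
    """Trigger overflow harvesting once estimated surface coverage exceeds threshold."""
    return estimate_coverage(image) > threshold
```

Because the model maps a handful of global descriptors to coverage, retraining for a different Lemna species or lighting condition only requires a few new reference points rather than pixel-level annotation.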

7.3. Insect Bioconversion

Insect bioconversion represents a crucial trophic component within circular agricultural systems, transforming organic side streams into protein-rich biomass suitable for animal feed or aquaculture [57]. Among the species used for this purpose, the black soldier fly (Hermetia illucens, BSF) has gained particular attention for its efficiency in converting diverse waste substrates into valuable nutrients and for its ability to close nutrient loops between livestock, aquaculture, and plant production.
Despite this potential, large-scale insect farming still faces challenges in process control, labor efficiency, and product standardization. Manual monitoring of larval density, developmental stages, or substrate degradation is time-consuming and error-prone. Computer vision and deep learning offer non-invasive tools for automation and continuous monitoring, enabling the precise control of key production parameters [57].
Recent advances demonstrate the applicability of deep learning to insect production monitoring. Convolutional neural networks combined with optical flow analysis enable reliable classification of BSF developmental stages under practical conditions [58]. Detection and regression networks such as YOLOv8 and ResNet have shown strong agreement between estimated and measured larval weights, supporting automated trait-based management [59]. Experiments with macronutrient-balanced artificial substrates revealed the importance of hydration and protein content for efficient bioconversion [60]. Machine-learning models such as XGBoost have further improved weight-gain predictions and enabled the optimization of mixed organic waste diets, significantly enhancing feed conversion efficiency and biomass yield [61].
In our integrated research and development facility, computer vision forms the basis of a non-invasive monitoring strategy for BSF bioconversion. RGB image sequences are analyzed to extract larval density, size distribution, color variation, and motion activity as proxies for developmental stage and substrate degradation. These visual features are fused with environmental measurements, such as temperature, humidity, and CO2 concentration, to generate a multimodal data stream for predictive modeling and process control. Figure 13 illustrates this workflow, showing how BSF larvae (BSFL) are visually monitored in the context of their use as a feed supplement within the circular production system. The derived population indicators are integrated with environmental sensor data to enable adaptive decisions regarding substrate dosing, moisture management, and harvest timing within the insect bioconversion module.
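As a minimal sketch of the density and size-distribution step, connected-component analysis on a binary larva mask yields per-larva areas. The mask, pixel scale, and function name are illustrative assumptions; touching larvae merge into one component, so a deployed system needs additional separation logic:

```python
import numpy as np
from scipy import ndimage

def larval_stats(mask, px_per_mm=4.0):
    """Count larvae and estimate their projected areas (mm^2) from a
    binary segmentation mask (True = larva pixel)."""
    labels, n = ndimage.label(mask)
    areas_px = ndimage.sum(mask, labels, index=range(1, n + 1))
    return n, np.asarray(areas_px) / px_per_mm**2

mask = np.zeros((10, 10), dtype=bool)
mask[1:3, 1:3] = True    # small larva: 4 px
mask[6:9, 6:9] = True    # larger larva: 9 px
count, areas_mm2 = larval_stats(mask)
```

Tracked over time, shifts in the resulting size distribution serve as the developmental-stage proxy described above.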
By combining vision-based phenotyping with environmental sensing and machine learning, the insect module functions not only as a nutrient recycler but also as a data-rich regulatory component within the overall circular system. Larval growth patterns and activity profiles inform dynamic substrate allocation, moisture control strategies, and optimized harvest scheduling, while simultaneously contributing to the balancing of nutrient flows between fish, poultry, and plant modules. While the implementation is still evolving, these methods highlight how insect bioconversion can become an analytically transparent and optimizable element of future closed-loop food production systems [57,60].

8. System Integration and Data Fusion

The realization of a closed-loop agricultural system requires not only the coupling of biological modules but also the seamless integration of data flows across all components. Each subsystem (fish, poultry, plants, algae, and insects) produces heterogeneous data, including RGB images and depth maps, spectral signatures, water chemistry, temperature, humidity, and gas concentrations. To transform these multimodal data streams into actionable information, they must be captured, synchronized, and analyzed within a unified digital infrastructure [47,48].
In our facility, sensor and imaging data from all modules are collected via local edge devices and transmitted to a central inference server for preprocessing, fusion, and analysis. The architecture follows a modular design: each biological module contributes structured and unstructured data to a common database, enabling joint interpretation of biological, chemical, and physical variables. This integration forms the backbone for a digital representation of the entire system, or digital twin, which connects real-time measurements with simulation and prediction models [44].
Machine-learning methods play a central role in this architecture. Time-series models such as recurrent neural networks (RNNs), long short-term memory (LSTM), and gated recurrent unit (GRU) networks are particularly well suited for modeling complex temporal dependencies, for example, in water-quality dynamics or nutrient cycling [49,50]. By combining vision-based observations with sensor-based measurements, predictive models can identify anomalies, anticipate shifts in system equilibrium, and support adaptive control decisions. Similar data-driven optimization approaches have already demonstrated strong potential in aquaponics, where nutrient supply and water quality are jointly optimized through continuous learning loops [45].
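As a purely illustrative sketch of how such a recurrent unit processes a water-quality sequence, the following implements a single GRU cell forward pass with untrained random weights. All dimensions, input values, and the seed are hypothetical; a real deployment would train the weights and attach a readout layer:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU forward pass (untrained, random weights) for illustration."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # Stacked weights for update (z), reset (r), and candidate (h~) gates.
        self.W = rng.normal(0, 0.1, (3, n_hidden, n_in))
        self.U = rng.normal(0, 0.1, (3, n_hidden, n_hidden))
        self.b = np.zeros((3, n_hidden))

    def step(self, x, h):
        z = sigmoid(self.W[0] @ x + self.U[0] @ h + self.b[0])
        r = sigmoid(self.W[1] @ x + self.U[1] @ h + self.b[1])
        h_tilde = np.tanh(self.W[2] @ x + self.U[2] @ (r * h) + self.b[2])
        return (1.0 - z) * h + z * h_tilde

# Roll the cell over a toy sequence of (pH, dissolved O2, nitrate) readings.
cell = GRUCell(n_in=3, n_hidden=8)
h = np.zeros(8)
for x in np.array([[7.1, 8.2, 38.0], [7.0, 8.0, 39.5], [6.9, 7.8, 41.0]]):
    h = cell.step(x, h)
# h now summarizes the sequence; a trained readout would map it to a
# predicted next measurement or an anomaly score.
```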
A key objective of the ongoing implementation is to enable data fusion across multiple layers of information. High-level features extracted from computer vision models, such as biomass growth rates or color indices, are merged with low-level environmental variables, such as pH, ammonia, and dissolved oxygen. This combined dataset allows the inference engine to estimate latent variables that are otherwise inaccessible to direct measurement, such as metabolic activity or system-level nutrient balance. The integration also supports higher-level decision-making processes, including the coordination of feeding, lighting, and harvesting cycles across modules [43].
In the present architecture, multimodal data fusion follows a hierarchical and robustness-oriented design. Heterogeneous datasets, such as image-derived features, spectral information, and physicochemical sensor measurements, are first processed using modality-specific preprocessing pipelines and then transformed into structured feature representations. These features are temporally synchronized via a unified timestamp framework and normalized to comparable scales.
Fusion is performed through feature-level and representation-level integration within the central inference server. In neural multimodal architectures, modality-specific inputs are transformed into shared latent representations through specialized subnetworks; these representations can then be combined at the feature, representation, or decision level, depending on the desired balance between robustness and interpretability. This multimodal learning strategy enables flexible late-stage fusion while preserving the interpretability of individual modalities.
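The normalization-then-concatenation pattern for feature-level fusion can be sketched as follows. The modality contents, sample values, and function names are illustrative assumptions:

```python
import numpy as np

def zscore(X, eps=1e-8):
    """Column-wise standardization so modalities with different units
    contribute on comparable scales."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / (X.std(axis=0) + eps)

def fuse(*modalities):
    """Feature-level fusion: normalize each modality, then concatenate
    the feature vectors of temporally synchronized samples."""
    return np.hstack([zscore(m) for m in modalities])

# Hypothetical synchronized samples (rows = time steps):
vision = [[0.42, 12.1], [0.44, 12.8], [0.47, 13.5]]  # e.g. plant area, canopy height
chem = [[7.1, 38.0], [7.0, 39.5], [6.9, 41.0]]       # e.g. pH, nitrate

fused = fuse(vision, chem)  # one joint feature vector per time step
```

Representation-level fusion would replace the raw columns with subnetwork embeddings, but the synchronization and normalization steps remain the same.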
To enhance robustness against sensor disturbances and incomplete data, the framework incorporates principles of disturbance-aware training and multisensor redundancy. Building on prior work in robust multisensor system design [62] and adaptive training strategies for disturbance-resilient object detection [63], the approach integrates both functional sensor data and simulated perturbations during model development. Such disturbance-aware training improves resilience against noise, occlusion, turbidity, illumination changes, and partial sensor degradation. These robustness-oriented fusion strategies are essential for reliable operation in biologically dynamic agricultural environments.
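A simple form of such simulated perturbation can be sketched as an augmentation step applied during training. The disturbance types (illumination change, Gaussian noise, rectangular occlusion) follow the text above, while all parameter values and names are illustrative assumptions:

```python
import numpy as np

def perturb(image, rng, occlusion_frac=0.2, noise_std=8.0, gain_range=(0.7, 1.3)):
    """Simulate typical sensor disturbances on an HxWx3 uint8 image:
    global illumination change, Gaussian noise, and a rectangular occlusion."""
    img = image.astype(float)
    img *= rng.uniform(*gain_range)                   # illumination change
    img += rng.normal(0.0, noise_std, img.shape)      # sensor noise
    h, w = img.shape[:2]
    oh = max(1, int(h * occlusion_frac))
    ow = max(1, int(w * occlusion_frac))
    top = rng.integers(0, h - oh + 1)
    left = rng.integers(0, w - ow + 1)
    img[top:top + oh, left:left + ow] = 0.0           # occlusion patch
    return np.clip(img, 0, 255).astype(np.uint8)

rng = np.random.default_rng(42)
clean = np.full((64, 64, 3), 128, dtype=np.uint8)
disturbed = perturb(clean, rng)  # fed alongside clean samples during training
```

Training on mixtures of clean and perturbed samples is one concrete realization of the disturbance-aware strategy referenced above [63].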
Although the complete integration of all modules is still under development, the conceptual framework and technical implementation are largely established. The current focus is on ensuring interoperability between data sources, standardizing interfaces, and refining automated feedback loops for process control. Once fully operational, this infrastructure is intended to serve as a scalable reference model for digitalized circular agriculture, demonstrating how multimodal data analytics and digital twins can translate biological complexity into measurable, controllable processes.
While certain hardware configurations described here (e.g., specific camera placements or tank geometries) reflect the infrastructure of our research facility, the underlying data architecture, inference pipelines, and digital twin framework are modular by design. The separation of edge acquisition, centralized inference, and database integration allows adaptation to different farm sizes, climatic conditions, and production focuses. Core components such as multimodal data fusion, probabilistic modeling, and cross-module feedback control can be implemented independently of the specific biological modules described here.

9. Discussion and Outlook

The transition toward fully integrated, circular agricultural systems relies not only on technological innovation but also on our ability to manage the biological and operational uncertainties that arise from complex, interlinked trophic networks. While data-driven control and digital twins provide unprecedented insight and automation potential, they must be reconciled with the intrinsic variability of living systems and the need for adaptive configuration across production modules.

9.1. Biological Variability and Uncertainty Management

Growth dynamics in biological systems exhibit substantial variability, both within and across trophic levels. Factors such as microbial composition, genetic diversity, temperature fluctuations, and stochastic feeding behavior can result in large deviations from predicted growth trajectories. Classical experimental strategies based on replication and controlled variance reduction are difficult to apply in closed-loop systems, where long generation times and interdependencies between modules limit the feasibility of replicated trials.
To address these challenges, data science must explicitly model uncertainty rather than treat it as experimental noise. Probabilistic and Bayesian approaches can estimate confidence intervals for key parameters such as growth rates, biomass conversion efficiency, or nutrient uptake. Ensemble methods and Gaussian process regression can propagate these uncertainties through coupled models, allowing the system to operate with quantified rather than assumed stability. In practice, such methods enable robust control strategies that tolerate deviations while maintaining the balance between energy input, nutrient recycling, and biomass output. Over time, incremental learning and continuous data acquisition are expected to narrow uncertainty margins, improving predictive accuracy even in the absence of extensive replication.
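One lightweight way to attach confidence intervals to a growth-rate estimate is the percentile bootstrap, sketched below. The sample values and function name are illustrative assumptions; Bayesian posteriors or Gaussian process regression would serve the same purpose within the framework described above:

```python
import numpy as np

def bootstrap_ci(samples, stat=np.mean, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a growth-rate statistic."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples, dtype=float)
    boots = [stat(rng.choice(samples, size=len(samples), replace=True))
             for _ in range(n_boot)]
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return float(lo), float(hi)

# Hypothetical daily specific growth rates (%/day) from repeated image-based estimates
rates = [3.1, 2.8, 3.4, 3.0, 2.6, 3.3, 2.9, 3.2]
lo, hi = bootstrap_ci(rates)
# Control logic can then act on the interval, not only the point estimate.
```

Unlike replicated trials, this approach quantifies uncertainty from the data already collected, which suits the long generation times and module interdependencies noted above.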

9.2. Adaptive Configuration of Circular Modules

The modular structure of the circular system allows flexible coupling between trophic levels, for example, by operating microalgae and duckweed as a hybrid aquaponic system or by directing insect biomass as feed for fish or poultry. Each configuration alters nutrient flow, microbial composition, and overall system efficiency. Identifying the optimal setup under changing environmental and production conditions, therefore, represents a multidimensional optimization problem.
Data-driven optimization frameworks can help explore this design space. Reinforcement learning, multi-objective optimization, and digital twin simulations can test hypothetical configurations virtually before implementation. By simulating feedback loops between modules, such as nutrient recycling rates, water quality, or energy demand, these systems can identify operating points that maximize overall resource efficiency while maintaining ecological balance. Coupled with real-time data fusion from all modules, adaptive configuration becomes a continuous process rather than a fixed design choice, allowing the system to evolve dynamically in response to external perturbations or production goals.

9.3. Future Perspectives

The integration of computer vision, sensor networks, and machine learning across multiple trophic levels marks a paradigm shift toward autonomous and self-regulating agricultural systems. As these technologies mature, their success will increasingly depend on interoperability standards, transparent data governance, and accessible system architectures that allow scaling from laboratory setups to industrial applications.
Beyond technical feasibility, broader questions emerge concerning sustainability assessment, ethical considerations in automation, and the socio-economic integration of such systems into local food networks. Quantifying the true ecological benefits of circular agriculture will require harmonized life-cycle assessments that integrate environmental, economic, and social indicators. In parallel, participatory design approaches can ensure that automation serves both environmental and human well-being, aligning technological progress with public trust and policy frameworks.
From a system-design and commercial perspective, the transferability of the presented framework depends on modular implementation rather than full-scale replication of the entire facility. The pilot installation described here serves as a demonstrator for the gradual retrofitting of existing agricultural operations. Individual modules, such as computer-vision-supported aquaculture monitoring, poultry behavior analytics, or plant-growth quantification, can be integrated into existing infrastructure without requiring a complete system replacement. This modular retrofit approach lowers entry barriers for small- and medium-scale farms and enables incremental digitalization aligned with available resources and investment capacities.
While the fully integrated demonstrator represents an advanced configuration, its architecture is designed to allow staged deployment. Farms may adopt selected monitoring and data-fusion components first and progressively expand toward more comprehensive circular integration as economic viability and operational expertise increase. Such scalable implementation pathways support practical commercialization and reduce financial risk, thereby strengthening the real-world applicability of digitally assisted circular agriculture.
From an environmental and systemic sustainability perspective, the framework enables resource-efficient food production through integrated nutrient recycling, water reuse, and data-driven process optimization. Its modular structure facilitates implementation in urban agriculture initiatives, research infrastructures, and regions facing climatic constraints where spatial efficiency and emission reduction are critical. Controlled-environment approaches are inherently designed to decouple agricultural production from external climatic variability, making them particularly suitable for regions exposed to heat stress, water scarcity, or unstable seasonal patterns.
At the same time, broader deployment requires careful consideration of practical constraints, including initial hardware investment costs, the need for technical expertise in data integration, and the challenge of harmonizing heterogeneous data streams across modules. Addressing these factors will be essential for translating experimental demonstrators into widely adopted circular production systems.
From an economic and societal perspective, the presented framework provides a pathway for the structural transformation of existing agricultural infrastructure toward digitally supported circular production. By enabling the retrofitting of conventional farms rather than requiring complete system replacement, the approach supports long-term modernization while preserving existing assets. Process transparency, automated monitoring, and predictive analytics contribute to yield stabilization, reduce feed and nutrient losses, and lower operational risks associated with biological variability. In aquaculture modules, recirculating systems significantly reduce water consumption compared to flow-through systems, while controlled indoor environments limit evaporation losses and improve overall water-use efficiency.
The increasing availability of renewable energy sources further strengthens the economic viability of controlled-environment systems, particularly in regions with high solar irradiation, where photovoltaic integration can offset operational energy demand. Beyond direct economic effects, the framework contributes to societal resilience by supporting regional food production, reducing dependency on long supply chains, and enabling climate-adaptive agriculture in water-scarce or temperature-sensitive regions. In this way, digitalized circular agriculture can combine environmental responsibility with economic competitiveness and social stability.
In conclusion, this project demonstrates the implementation of an integrated multi-trophic circular indoor agriculture facility in which computer-vision-based monitoring and multimodal sensing are systematically deployed across aquaculture, poultry, plant, and microalgae modules. The realized data infrastructure enables synchronized acquisition of biological, chemical, and environmental parameters under operational conditions, providing a documented basis for analyzing nutrient dynamics, growth performance, and system stability.
The demonstrator shows how established water and environmental sensor systems can be complemented by computer vision to directly quantify biological performance across trophic levels. While full predictive simulation and automated optimization remain future development steps, the implemented monitoring architecture establishes a scalable framework for data-driven evaluation of circular production systems under controlled-environment conditions.
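The synchronized acquisition described above requires aligning asynchronously sampled data streams (e.g., pH, nitrate, image-derived metrics) on a common time axis before cross-module evaluation. As a minimal illustration of this step — assuming each stream is simply a sorted list of (timestamp, value) pairs, which is a simplification of the actual facility database, and with all names illustrative — a nearest-timestamp join within a tolerance window can be sketched as follows:

```python
from bisect import bisect_left

def align_nearest(reference, stream, tolerance):
    """For each reference sample, attach the stream sample whose
    timestamp is closest, or None if none lies within `tolerance`.
    Both inputs are lists of (timestamp, value) tuples; `stream`
    must be sorted by timestamp."""
    times = [t for t, _ in stream]
    aligned = []
    for t_ref, v_ref in reference:
        i = bisect_left(times, t_ref)
        best = None
        # only the two neighbors straddling t_ref can be nearest
        for j in (i - 1, i):
            if 0 <= j < len(times) and abs(times[j] - t_ref) <= tolerance:
                if best is None or abs(times[j] - t_ref) < abs(times[best] - t_ref):
                    best = j
        aligned.append((t_ref, v_ref, stream[best][1] if best is not None else None))
    return aligned

# Illustrative example: align sparse nitrate samples to a pH stream
ph = [(0, 7.1), (60, 7.2), (120, 7.15)]        # seconds, pH units
nitrate = [(10, 42.0), (95, 43.5), (200, 44.0)]  # seconds, mg/L
print(align_nearest(ph, nitrate, tolerance=30))
```

Gaps where no sample falls within the tolerance remain explicit (`None`) rather than being interpolated, which preserves the transparency of biological variability emphasized above.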

Author Contributions

Conceptualization, F.W., T.G., M.R., and C.M.N.; Data curation, F.W., M.R., and M.K.; Formal analysis, F.W., M.K., and M.R.; Investigation, F.W. and T.G.; Methodology, F.W., M.R., and M.K.; Project administration, K.M. and C.M.N.; Supervision, K.M. and C.M.N.; Visualization, T.G. and F.W.; Writing—original draft, F.W., T.G., M.R., and C.M.N.; Writing—review and editing, F.W., T.G., and C.M.N. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported through the Helmholtz program “Materials Systems Engineering” under the topic “Adaptive and Bioinstructive Materials Systems”, and BMBF projects 031B0915U1 UrbanAqua and 031B0915S Amigem under the umbrella “Innovationsraum Bioökonomie auf Marinen Standorten”. Open Access funding enabled and organized by Projekt DEAL.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data available upon reasonable request from the corresponding author.

Acknowledgments

The authors gratefully acknowledge Lennard Buchholz and Marco Diederichs (Acheron GmbH) for their valuable contributions to the development of the microalgae and poultry modules at the Acheron facility. We thank Anja Noke and Luca Gerdes (Hochschule Bremen) for valuable discussions and insights on duckweed cultivation.

Conflicts of Interest

F.W., T.G., K.M., and C.M.N. are stakeholders of Acheron GmbH and declare competing interests. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figure 1. Schematic representation of the closed-loop circular indoor agriculture system. Solid, liquid, and gaseous emissions from livestock are converted by insects, microalgae, and duckweed into biomass used in aquaculture. The aquaculture module supplies nutrients for integrated vegetable production following aquaponic principles, forming a recirculating network powered by renewable energy. Nitrogen, phosphorus, and micronutrient flows are indicated in turquoise, while water flows are shown in blue. Efficient operation across trophic levels is supported by machine-assisted process control and computer vision-based monitoring.
Figure 2. Comparison of human vision and computer vision. Human perception integrates context, memory, and experience to interpret visual input, whereas computer vision systems rely on digital sensors and algorithmic models to produce statistical predictions.
Figure 3. Area-based and object-based visual interpretation methods in computer vision. (a) Area-based approaches, such as semantic segmentation, classify each pixel to generate spatial maps. (b) Object-based approaches detect and localize individual items using bounding boxes or keypoints.
Figure 4. Stereo vision for 3D reconstruction. Two cameras capture the same object from slightly different perspectives, resulting in two offset pictures of the same object (1, 2). By computing the disparity between corresponding image points, the object’s 3D position can be estimated through triangulation.
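For rectified stereo cameras, the triangulation sketched in Figure 4 reduces to the standard relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the camera centers, and d the disparity between corresponding image points. A minimal worked example with purely illustrative values (not the demonstrator's actual camera parameters):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth (m) of a scene point seen by two rectified cameras:
    f_px         -- focal length in pixels
    baseline_m   -- distance between the camera centers (m)
    disparity_px -- horizontal offset of the point between images (px)"""
    if disparity_px <= 0:
        raise ValueError("point must have positive disparity")
    return f_px * baseline_m / disparity_px

# Illustrative: 800 px focal length, 6 cm baseline, 40 px disparity
print(depth_from_disparity(800, 0.06, 40))  # ≈ 1.2 m
```

The inverse dependence on disparity means depth resolution degrades with distance, one reason stereo rigs for fish-biomass estimation are mounted close to the observed volume.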
Figure 9. Computer-vision system for poultry monitoring within the circular indoor agriculture demonstrator. (a) Schematic overview of the image-acquisition and processing pipeline, including object detection, multi-frame tracking, and behavioral feature extraction. (b) Representative RGB images recorded under real farming conditions, illustrating detection and tracking of individual birds, as well as feeding detection.
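Multi-frame tracking of the kind outlined in Figure 9a is commonly built on associating detections between consecutive frames by bounding-box overlap. The following greedy intersection-over-union (IoU) matcher is a generic sketch of this association step, not the system's actual tracker:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def match_detections(tracks, detections, min_iou=0.3):
    """Greedily pair previous-frame track boxes with current detections,
    highest-overlap pairs first; returns (track_index, detection_index)."""
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(detections)), reverse=True)
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score >= min_iou and ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti); used_d.add(di)
    return matches

prev = [(0, 0, 10, 10), (50, 50, 60, 60)]   # boxes from frame t-1
curr = [(52, 51, 62, 61), (1, 0, 11, 10)]   # detections in frame t
print(match_detections(prev, curr))
```

Unmatched detections would start new tracks and unmatched tracks would be aged out, which is how individual birds keep stable identities across frames.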
Figure 10. Automated system for leafy-green growth quantification in aquaponics. (a) Schematic of the plant-growth monitoring setup, showing a stereo RGB-D camera mounted on a linear rail traversing three aquaponic cultivation units. (b) Representative outputs include an RGB image, a derived canopy-height profile, and a segmentation mask. (c) Exemplary temporal development of growth metrics, including visible leaf area and average canopy height over time.
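The growth metrics in Figure 10c can in principle be derived from a segmentation mask and a depth map alone. The sketch below assumes a top-down camera at known height over the growing bed; the geometry and parameter names are illustrative, not the demonstrator's actual calibration:

```python
def canopy_metrics(mask, depth_map, camera_height_m, m2_per_px):
    """Visible leaf area and mean canopy height from a top-down view.
    mask            -- nested lists, True where a pixel shows plant tissue
    depth_map       -- per-pixel camera-to-surface distance in meters
    camera_height_m -- camera height above the growing bed in meters
    m2_per_px       -- ground area covered by one pixel in square meters"""
    plant_px = [(r, c) for r, row in enumerate(mask)
                for c, is_plant in enumerate(row) if is_plant]
    leaf_area_m2 = len(plant_px) * m2_per_px
    heights = [camera_height_m - depth_map[r][c] for r, c in plant_px]
    mean_height_m = sum(heights) / len(heights) if heights else 0.0
    return leaf_area_m2, mean_height_m

# Toy 2x2 view: three plant pixels at depths 0.90, 0.85, 0.95 m
mask = [[True, True], [False, True]]
depth = [[0.90, 0.85], [1.00, 0.95]]
print(canopy_metrics(mask, depth, camera_height_m=1.0, m2_per_px=1e-4))
```

Tracking both quantities over time yields growth curves of the kind shown in the figure, with leaf area responding earlier in development and canopy height later.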
Figure 11. Computer-vision-based monitoring of Arthrospira morphology. Microscopic images are acquired (left), individual filaments are detected and outlined (center), and classified into helical (red) and straight (blue) morphology classes. The relative abundance of both classes can be tracked over time (right).
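One simple shape feature that separates helical from straight filaments is tortuosity: the traced centerline length divided by the straight-line distance between the filament's endpoints, with values near 1 indicating a straight filament. The sketch below illustrates this feature only; the classifier used in the demonstrator may rely on different or additional descriptors:

```python
import math

def tortuosity(points):
    """Centerline (arc) length of a polyline divided by its end-to-end
    distance. `points` is an ordered list of (x, y) coordinates."""
    arc = sum(math.dist(p, q) for p, q in zip(points, points[1:]))
    chord = math.dist(points[0], points[-1])
    return arc / chord if chord else float("inf")

straight = [(0, 0), (1, 0), (2, 0), (3, 0)]
helical = [(0, 0), (1, 1), (2, -1), (3, 1), (4, 0)]
print(tortuosity(straight))  # close to 1 -> "straight" class
print(tortuosity(helical))   # clearly above 1 -> "helical" class
```

A fixed threshold on this ratio then assigns each detected filament to one of the two morphology classes whose relative abundance is plotted over time.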
Figure 12. Automated monitoring system for Lemna (duckweed) cultivation. (a) Schematic of the cultivation setup with an RGB camera mounted on a vertical axis. (b) Orthorectified top-down view of the water surface. (c) Surface-coverage estimation using a lightweight statistical regression model based on color and texture features, with five samples for each chosen coverage.
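As a deliberately simplified stand-in for the color- and texture-based regression model in Figure 12c, surface coverage can be approximated from the fraction of green-dominant pixels in the orthorectified view. The color rule and margin below are illustrative assumptions, not the model actually used:

```python
def surface_coverage(pixels, green_margin=20):
    """Fraction of pixels classified as duckweed by a simple color rule:
    a pixel counts as plant if its green channel exceeds both red and
    blue by at least `green_margin`. `pixels` is a flat list of
    (r, g, b) tuples from the orthorectified top-down image."""
    plant = sum(1 for r, g, b in pixels if g - max(r, b) >= green_margin)
    return plant / len(pixels)

# Toy 4-pixel "image": two green duckweed pixels, two dark water pixels
frame = [(40, 120, 35), (50, 130, 60), (10, 20, 30), (15, 25, 35)]
print(surface_coverage(frame))  # 0.5
```

A learned regression over richer color and texture features, as described in the caption, is more robust to lighting changes and biofilm than such a fixed threshold.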
Figure 13. Computer-vision pipeline for monitoring black soldier fly (BSF) larvae. RGB images from rearing trays are processed by an object-detection and classification model that assigns individuals to developmental stages. Population-level indicators such as density, growth progression, and spatial distribution are derived from these predictions.
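The aggregation step at the end of the Figure 13 pipeline — turning per-larva predictions into population-level indicators — can be sketched as follows; the detection tuple layout and dictionary keys are illustrative, not the system's actual data format:

```python
from collections import Counter

def population_indicators(detections, tray_area_cm2):
    """Aggregate per-larva detections into population-level indicators.
    Each detection is a (stage, x, y) tuple produced by the classifier."""
    n = len(detections)
    stages = Counter(stage for stage, _, _ in detections)
    return {
        "count": n,
        "density_per_cm2": n / tray_area_cm2,
        "stage_distribution": {s: c / n for s, c in stages.items()},
    }

dets = [("L3", 1, 2), ("L3", 3, 4), ("L4", 5, 6), ("prepupa", 7, 8)]
print(population_indicators(dets, tray_area_cm2=400))
```

Shifts in the stage distribution over successive imaging runs then indicate growth progression, while density and the raw positions support the spatial-distribution indicators named in the caption.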
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
