Article

Digital Transformation in Smart Farm and Forest Operations Needs Human-Centered AI: Challenges and Future Directions

1 Human-Centered AI Lab, Institute of Forest Engineering, Department of Forest and Soil Sciences, University of Natural Resources and Life Sciences Vienna, 1190 Wien, Austria
2 xAI Lab, Alberta Machine Intelligence Institute, University of Alberta, Edmonton, AB T5J 3B1, Canada
3 DAI Lab, Technical University Berlin, 10623 Berlin, Germany
4 Institute of Agricultural Engineering, Department of Sustainable Agricultural Systems, University of Natural Resources and Life Sciences Vienna, 1180 Wien, Austria
5 Institute of Forest Growth, Department of Forest and Soil Sciences, University of Natural Resources and Life Sciences Vienna, 1180 Wien, Austria
6 Institute of Forest Engineering, Department of Forest and Soil Sciences, University of Natural Resources and Life Sciences Vienna, 1180 Wien, Austria
* Author to whom correspondence should be addressed.
Sensors 2022, 22(8), 3043; https://doi.org/10.3390/s22083043
Submission received: 16 March 2022 / Revised: 6 April 2022 / Accepted: 13 April 2022 / Published: 15 April 2022

Abstract:
The main impetus for the global efforts toward the current digital transformation in almost all areas of our daily lives comes from the great successes of artificial intelligence (AI), and in particular, the workhorse of AI, statistical machine learning (ML). The intelligent analysis, modeling, and management of agricultural and forest ecosystems, and of the use and protection of soils, already play important roles in securing our planet for future generations and will become irreplaceable in the future. Technical solutions must encompass the entire agricultural and forestry value chain. The process of digital transformation is supported by cyber-physical systems enabled by advances in ML, the availability of big data, and increasing computing power. For certain tasks, algorithms today achieve performances that exceed human levels. The challenge is to use multimodal information fusion, i.e., to integrate data from different sources (sensor data, images, *omics), and to explain to an expert why a certain result was achieved. However, ML models are often sensitive to even small changes in their inputs, and disturbances can have dramatic effects on their results. Therefore, the use of AI in areas that matter to human life (agriculture, forestry, climate, health, etc.) has led to an increased need for trustworthy AI with two main components: explainability and robustness. One step toward making AI more robust is to leverage expert knowledge. For example, a farmer/forester in the loop can often bring in experience and conceptual understanding to the AI pipeline—no AI can do this. Consequently, human-centered AI (HCAI) is a combination of “artificial intelligence” and “natural intelligence” to empower, amplify, and augment human performance, rather than replace people. To achieve practical success of HCAI in agriculture and forestry, this article identifies three important frontier research areas: (1) intelligent information fusion; (2) robotics and embodied intelligence; and (3) augmentation, explanation, and verification for trusted decision support. This goal will also require an agile, human-centered design approach across three generations (G). G1: Enabling easily realizable applications through immediate deployment of existing technology. G2: Medium-term modification of existing technology. G3: Advanced adaptation and evolution beyond the state-of-the-art.

Graphical Abstract

1. Introduction

Cyber-physical systems (CPS), robotics, sensors, data management in general, and artificial intelligence (AI) and machine learning (ML) methods in particular, will significantly change process chains in agriculture and forestry.
Digital transformation in future smart agriculture and forestry requires a human-centered AI approach that incorporates sociological, ethical, and legal issues. Natural intelligence should be augmented—not replaced—by artificial intelligence, like “power steering for the brain”. This is where the human-in-the-loop approach comes in, because this approach incorporates the human experience, prior knowledge, and conceptual understanding of human experts to augment, enhance, and strengthen human capabilities with AI—rather than replacing humans.
In this paper, we first justify why agriculture and forestry are among the most important application areas for all humanity. We then provide definitions of AI and HCAI to facilitate a common understanding, and describe the three main paradigms of ML (supervised learning, unsupervised learning, reinforcement learning) to provide a good introduction for the interested layperson. We then describe the state-of-the-art in autonomous, automated, assisted, and augmented AI systems, giving examples from agriculture and forestry for each classification. Then, in the main body, we introduce three pioneering research areas, namely, (1) intelligent sensor information fusion, (2) robotics and embodied intelligence, and (3) augmentation, explanation, and verification. Finally, we summarize the core ideas of human-centered AI and present two examples: the farmer-in-the-loop and the forester-in-the-loop.

1.1. Why Both Agriculture and Forestry Are Important

When you eat, it is about farming; when you breathe, it is about forestry. Both agriculture and forestry are vital to everyone on our planet. Both sectors have long and complex business process chains, and in no other sector could new AI technologies contribute more in the future to achieving the Sustainable Development Goals [1] (see Figure 1).
Sustainable development can be seen as the intersection of the goals assigned to the ecological, economic, and social systems. In this context, the systems are interrelated, and simply maximizing the goals for one system does not lead to sustainability; i.e., the impacts of all systems in their entirety must be considered to achieve sustainability.
Barbier and Burgess (2017) [3] note that focusing only on the goals of one system can have consequences for the other systems. Furthermore, they describe sustainable development as a process of tradeoffs between systems. For example, maximizing profits may be efficient for the economic system but threaten biological productivity and biodiversity through environmental degradation. Moreover, the digital transformation process involves many more aspects than just the above systems and their goals. The interactions between systems and actors must always be taken into account.
The European Union (EU) has therefore set out strategies to improve sustainability and meet the Sustainable Development Goals (SDGs). One of the main objectives in this regard is to fully connect farmers and rural areas to the digital economy (European Commission, 2017). With this in mind, the European Commission overhauled the existing Common Agricultural Policy (CAP) to achieve a smarter, modern, and sustainable future for food and agriculture. This resulted in the Green Deal, which is part of the Commission’s strategy to implement the Sustainable Development Goals. The Green Deal also introduced the Farm to Fork strategy, which aims to transform the way food is produced and consumed in the EU. It aims to make the food system healthy, fair, and environmentally friendly.
In addition, recital 22 of Regulation (EU) 2021/1119 [4] and the fact sheet [5] state that the Commission will promote new green business models to reward land managers for reducing greenhouse gas emissions in general and cutting carbon emissions in particular. This provides positive reinforcement for all land managers regarding the shift to green business models.
Last but not least, the vast amounts of data collected through “smart agriculture” will help farmers farm more efficiently and conserve natural resources [6]. As a result, food will be produced in a more sustainable and environmentally friendly way. The same principle applies to forestry. If the forest can be used in a more sustainable way, allowing not only the trees but also the soil and vegetation itself to regenerate, carbon reduction can be maximized.
The analysis, evaluation, and optimization of sustainable and resource-conserving production processes in agriculture require suitable methods that map the diversity of processes, represent the temporal and spatial variability, and make processes technically controllable. Life cycle assessment (LCA) methods are already available for this purpose. In the future, these methods will also be supported by AI, but let us first take a look at what AI actually is.

1.2. Artificial Intelligence

Artificial intelligence is one of the oldest fields of computer science and was extremely popular in its early days in the 1950s. However, the requirements quickly reached the limits of the computing power of digital computer systems at the time. This made AI interesting in theory but impractical and, above all, uneconomical, which inevitably led to a decline in interest in AI in the 1980s. AI only became very popular again a decade ago, driven by the tremendous successes of data-driven statistical machine learning.
Artificial neural networks have their origins in the artificial neurons developed by McCulloch and Pitts (1943) [7]. Today’s neural networks consist of a great many layers, have an enormous number of connections, and use a special form of compositionality in which features in one layer are combined in many different ways to produce more abstract features in the next layer [8]. The success of such AI, referred to as “deep learning”, has only been made possible by the computing power available today. The increasing complexity of such deep learning models has naturally led to drawbacks and new problems in the comprehensibility of results. This lack of comprehensibility can be very important, especially when using such AI systems in areas that affect human life [9].
Many fundamental AI concepts date back to the middle of the last century. Their current success is actually based on a combination of three factors: (1) powerful, low-cost, and available digital computing technology, (2) scalable statistical machine learning methods (e.g., deep learning), and (3) the explosive growth of available datasets.
To date, AI has reached levels of popularity and maturity that have let it permeate nearly all industries and application areas, and it is the main driver of the current digital transformation—due to its undeniable potential to benefit humanity and the environment. AI can definitely help find new solutions to our society’s most pressing challenges in virtually all areas of life: from agriculture and forest ecosystems, which affect our entire planet, to the health of every individual.
For all its benefits, the large-scale adoption of AI technologies also holds enormous and unimagined potential for new, unforeseen threats. Therefore, all stakeholders—governments, policymakers, and industry—along with academia, must ensure that AI is developed with knowledge and consideration of these potential threats. The security, traceability, transparency, explainability, validity, and verifiability of AI applications must always be ensured at all times [9]. However, how is AI actually defined now, what is trustworthy AI, and what is human-centered AI?
AI is difficult to define because the term “intelligence” itself is difficult to define [10]. We therefore follow a very general and widely accepted definition in this paper, namely, one that describes AI simply as the “automation of intelligent behavior”. For more details, refer to Russell and Norvig (2020) [11].
For trustworthy AI, it is imperative to include ethical and legal aspects, which is a cross-disciplinary goal, because all trusted AI solutions must be not only ethically responsible but also legally compliant [12]. Dimensions of trustworthiness for AI include: security, safety, fairness, accountability (traceability, replicability), auditability (verifiability, checkability), and most importantly, robustness and explainability; see [13].
We define human-centered AI as a synergistic approach to align AI solutions with human values, ethical principles, and legal requirements to ensure safety and security, enabling trustworthy AI. This HCAI concept is now widely supported by renowned institutions (Stanford Human-Centered AI Institute, Berkeley Center for Human-Compatible AI, Cambridge Leverhulme Center for the Future of Intelligence, Chicago Human AI Lab, Utrecht Human-centered AI, Sydney Human-Centered AI Lab) and world-leading experts, such as Ben Shneiderman, Fei-Fei Li, Joseph A. Konstan, Stephen Yang, and Christopher Manning, to name a few [14].
The inclusion of a human-in-the-loop in interactive machine learning [15] is thereby not only helpful to increase the performances of AI algorithms, but also highly desirable to counter the earlier fears and anxieties that artificial intelligence automates everything, replaces and displaces humans, and pushes them into passive roles [16].
In addition, integrating a human-in-the-loop (expert-in-the-loop) has many other advantages: Human experts excel at certain tasks by thinking multimodally and embedding new information in a conceptual knowledge space shaped by individual experience and prior knowledge. Farmers and foresters can build on an enormous amount of prior knowledge. Our guiding principle, therefore, is that using conceptual knowledge as a guiding model of reality can help develop more robust, interpretable, and less biased AI models [17]. This can (a) provide advanced contributions to the international research community, (b) find new applications in various AI solutions, and (c) help add value to real-world applications, especially in areas that impact human life (agriculture, forestry, climate, health).
For all these approaches, machine learning is the “workhorse”, and here we are in the fortunate position that, unlike the term AI, the term machine learning can be defined quite precisely. Since a basic understanding is very important here, we will discuss it in more detail in the following chapter.

1.3. Machine Learning—The Workhorse of AI

It is important to note that the historical origin of the current successful statistical machine learning lies in the foundations described by Pierre Simon de Laplace (1781) [18], inspired by the work of Thomas Bayes (1763) [19].
Let us consider $n$ data points contained in a set $D = x_{1:n} = \{x_1, x_2, \ldots, x_n\}$. Let $p(D \mid \theta)$ be the likelihood and specify a prior $p(\theta)$. Consequently, we can compute the posterior:

$$p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{p(D)}$$
The inverse probability allows us to learn from data, infer unknowns, and make predictions [20]. Here is the entry point where a human-in-the-loop can already help: in defining the prior. A human-in-the-loop is therefore invaluable, because an expert with many years of experience has this “Bayesian estimate” ready in a fraction of a second in certain situations [21,22,23].
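As a minimal, purely illustrative sketch of how such an expert prior enters the formula above, consider a conjugate Beta-Binomial update in Python: a farmer's experience about a germination rate is encoded as a Beta prior and updated with observed counts. All numbers, variable names, and the germination scenario are assumptions made for illustration and do not come from the cited works.

```python
from scipy import stats

# Hypothetical scenario: a farmer's prior belief about a seed germination rate.
# Years of experience suggest roughly 70% germination -> encoded as Beta(7, 3).
prior_alpha, prior_beta = 7, 3

# New field observations (illustrative data): 42 of 60 seeds germinated.
germinated, trials = 42, 60

# Conjugate Bayesian update: the posterior is again a Beta distribution,
# p(theta | D) being proportional to p(D | theta) * p(theta).
post_alpha = prior_alpha + germinated
post_beta = prior_beta + (trials - germinated)
posterior = stats.beta(post_alpha, post_beta)

print(f"Posterior mean germination rate: {posterior.mean():.3f}")
print(f"95% credible interval: {posterior.interval(0.95)}")
```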
The question of “where the prior comes from” and the mere assumption of the same has been a point of attack for frequentist (or classical) statisticians, the opponents of the Bayesian approach [24]. This skepticism enormously delayed the development of “Bayesian” machine learning. It was only through convincing performances in practical applications, particularly in the area of deep learning, that statistical machine learning was finally not only accepted, but gained popularity as the workhorse of AI today [25,26,27].
The field of machine learning began seven decades ago with the idea of developing algorithms that could automatically learn from data to gain knowledge from experience and incrementally improve their learning behavior. It is one of the fastest growing fields at the nexus of computer science and statistics, and at the center of artificial intelligence and data science. Today, the best-known practical implementations are autonomous vehicles, recommendation systems, and natural language understanding [28]. Ultimately, to reach a level of “usable intelligence”, we need (1) to learn from data, (2) to extract knowledge and new insights, (3) to generalize, ideally from a small number of examples, (4) to fight the curse of dimensionality, and (5) to disentangle the underlying explanatory factors of the data, the causal factors—i.e., to make sense of the data in the context of an application domain [29]. Machine learning algorithms are typically divided into three main categories: (1) supervised, (2) unsupervised, and (3) reinforcement learning:
  • Supervised learning includes algorithms that learn from human-labeled data, e.g., support vector machines (SVM), logistic regression, naive Bayes, random forests, and decision trees [30]. A typical example is a model that receives as input a set of images of crops and a set of images of weeds. Each image is labeled by a human, and the neural network’s task is to learn the features that help distinguish between these two classes [31]; nowadays this is performed by neural networks (deep learning approaches) using semi-supervised learning [32]. Typically, modern neural networks decompose inputs into lower-level image concepts, such as lines and curves, and then assemble them into larger structures to internally represent what distinguishes the two (or more) types of images they encounter during their learning process [33]. A trained network that has performed well on one classification task is expected to classify similar inputs (e.g., new images of crops and weeds) with acceptable success; on the other hand, it is not expected to perform well on other tasks, so the generalization problem mentioned above marks the limits of the algorithm’s capacities. (A minimal code sketch of this paradigm follows after this list.)
  • Unsupervised learning includes algorithms that do not require human labels. However, of course, some assumptions are always made about the structure of the data, typically after a visualization process [34,35]. Clustering algorithms that categorize data based on their intrinsic features were the first examples of unsupervised machine learning algorithms. More recent neural network architectures, called autoencoders, compute more compact representations of the input data in a space of lower dimension than the one to which the input belongs. After the learning process has converged, these models can readily be used as generative models, thereby demonstrating their generalization ability in a constructive way. Even if the human has no direct influence on these algorithms, humans still intervene by configuring some meta-parameters; even the choice of the appropriate unsupervised algorithm is made by the designer. Some basic knowledge of the data is required, and techniques such as cross-validation are used to find good parameters and avoid over-fitting problems.
  • Reinforcement learning follows a very different paradigm. The RL algorithm (often called agent here) bases its decisions on both the data and the human input, but the human input is not as direct as labeling [36,37]. To better understand this, it is important to remember that the tasks performed by reinforcement learning algorithms are not about categorizing the input data and learning the meaning of each feature, but rather about learning how to efficiently navigate to a goal. What the algorithm needs to learn is a strategy for reaching the desired state from a starting point, guided not by a predefined plan but by a reward it will receive if it completes either the end goal or key subtasks. The learning process includes several iterations (episodes) in which a so-called agent explores an environment randomly at the beginning and learns from its mistakes, and it eventually finds itself in unprofitable states (from the point of view of the reward). Over time, the agent gathers knowledge about which states are more successful and exploits them to obtain the greatest possible reward. Current state-of-the-art reinforcement learning algorithms are able to find strategies in complex games unknown to humans [38] and also use multiple agents communicating with each other to enable even more efficient strategy discovery [39].
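The following sketch, referenced in the supervised learning item above, is a minimal illustration of that paradigm: a random forest is trained on hypothetical, human-labeled feature vectors standing in for descriptors extracted from crop and weed image patches. The feature names, data, and numbers are synthetic assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical feature vectors [excess-green index, leaf area, leaf elongation]
# for human-labeled image patches: class 0 = crop, class 1 = weed (synthetic).
crops = rng.normal(loc=[0.40, 12.0, 1.2], scale=[0.05, 2.0, 0.1], size=(100, 3))
weeds = rng.normal(loc=[0.55, 4.0, 2.5], scale=[0.05, 1.5, 0.3], size=(100, 3))
X = np.vstack([crops, weeds])
y = np.array([0] * 100 + [1] * 100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# A random forest is one of the classical supervised learners mentioned above.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```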
The international ML community aimed from the very beginning to develop algorithms that automatically learn from datasets and improve their performances over time—without a human being involved [40]. This fully automated machine learning (aML)—“press the button and wait for the results”—works well when computational capacity is sufficient, when, most importantly, large amounts of training data are available [41], and when full autonomy is needed (e.g., in autonomous driving, or in scenarios where humans are absent). The buzzword “big data” is in no way negative, but rather necessary for such automatic learning approaches. However, in some situations we do not have the necessary big data and/or we are faced with complex problems, and in certain applications the use of fully automated approaches is very difficult to realize, e.g., for ethical reasons [42] or legal reasons [43]. Therefore, interactive machine learning (iML) approaches that integrate a human into the loop (e.g., a human kernel [44]), or incorporate a human directly into the machine learning algorithm [45], thereby leveraging human cognitive abilities, are promising. Sometimes (not always, of course) this human-in-the-loop can then bring in human experience, expertise, and conceptual understanding to solve problems that would not be solvable automatically, or improve algorithms that are stuck in suboptimal solutions and cannot find a way to improve on their own [46].

2. State-of-the-Art AI Technologies

2.1. Classification of AI Technologies

If we want to give an overview of AI technologies, it is reasonable to classify them according to degree of autonomy:
  • Autonomous AI systems that automate decisions without any human intervention, e.g., fully autonomous self-driving cars [47] and autonomous drones [48].
  • Automated AI systems that perform labor-intensive tasks requiring certain intelligence and complete them automatically within a certain domain. These have clear goals and tasks. Examples are industrial robotic process automatization [49] and automated forest management [50].
  • Assisted AI systems that help humans perform repetitive routine tasks faster and both quantitatively and qualitatively better, e.g., ambient assisted smart living [51] and weather forecasting.
  • Augmenting AI systems that put a human in the loop, or at least enable a human to be in control, in order to augment human intelligence with machine intelligence and vice versa. Examples range from simple, low-cost augmented reality applications [52] to augmented AI in agriculture [53] and interactive machine teaching concepts [54].
Today, AI can be successfully applied in virtually all application areas [55]. Due to resource conservation and the demand for sustainability, precision concepts, similar to precision medicine, are gaining more attention. These include a very wide range of different information technologies that are already used in many agricultural and forestry operations worldwide [56,57,58,59]. In this context, satellite technology, geographic information systems (GIS), and remote sensing are also very important to improve all functions and services of the agricultural and forestry sectors [60]. Available tools include mobile applications [61], a variety of smart sensors [62], drones (unmanned aerial vehicles, UAVs) [63], cloud computing [64], Internet of Things (IoT) [65], and blockchain technologies [66]. An increasingly important and often underappreciated area is the provision of energy, making alternative low-energy approaches imperative [67].
All of these technologies make it possible to process information about the state of the soil, plants, weather, or animals in a shared network in quasi-real time and make it available for further processes regardless of location. This means that today’s agricultural and forestry systems are being expanded to include other relevant processes/services, and additional datasets are being created for quantitative and qualitative information along the entire value chain for plant production and animal husbandry products and food safety (“from farm to fork”). Let us now show a concrete application example for each type of our four AI classes.

2.2. Autonomous AI Systems

“Full automation is one of the hottest topics in AI and could lead to fully driverless cars in less than a decade”, stated a 2015 Nature article [68]. Fully autonomous vehicles are indeed the popular example of AI and are also easy to illustrate, as the Society of Automotive Engineers (SAE) has provided very descriptive definitions of levels of automation in its standards. Levels of automation emerged as a way to represent gradations or categories of autonomy and to distinguish between tasks for machines and tasks for humans. In a very recent paper, however, Hopkins and Schwanen (2021) [69] argued that the current discourse on automated vehicles is underpinned by a technology-centered logic dominated by AI proponents, and pointed to the benefits of a stronger human-centered perspective.
However, compared to car driving, the complexity of processes in agriculture and forestry is disproportionately higher. Agricultural and forestry systems include virtually all processes for the production of f5 (food, feed, fiber, fire, fuel). In this context, the production processes take place both indoors (buildings and facilities for people, livestock, post-harvest, and machinery) and outdoors, and face much diversity in terms of soil, plants, animals, and people. The temporal resolution of process phenomena varies over an extremely wide range (from milliseconds, e.g., moving machinery, to many years, e.g., growth of trees and changes in soil).

2.2.1. Examples from Agriculture

A major problem in agriculture has always been weed control. In their daily life cycle, crop plants and weeds compete with each other for soil nutrients, water from the soil, and sunlight. If weeds are left untouched, increased weed growth can affect both crop yields and crop quality. Several studies have already shown that these effects can be significant, with losses ranging from 48% to 71% depending on the crop [70].
Moreover, in certain cases, crop damage can be so high that the entire yield is not suitable for the market [71]. To prevent this, weed control has emerged as a necessity. Furthermore, with the ever-increasing trends in crop yield production, the demand for process optimization, i.e., the reduction of energy losses, herbicide use, and manual labor, is becoming more and more urgent. To meet these requirements, traditional methods of weed control need to be changed. One possible way to achieve this is to introduce systems that significantly reduce the amount of human labor, the use of herbicides, and the mechanical treatment of the soil by focusing only on specific areas where and when intervention is needed. The novel approach based on these principles is called smart farming or Agriculture 4.0. Moreover, such a system should perform agricultural tasks autonomously, i.e., without human intervention, relying entirely on its own systems to collect the data, navigate through the field, detect the plants/weeds, and perform the required operation based on the collected data [72].
These types of systems are known as autonomous agricultural robotic systems. Each autonomous agricultural robotic system, e.g., an autonomous robot for weed control, consists of four main subsystems, i.e., steering/machine vision, weed detection, mapping, and precision weed control [72]. Most agricultural robots are developed for outdoor use, though some of them can operate indoors [73]. Precise navigation of these devices is provided through global navigation satellite systems (GNSS) and real-time kinematics (RTK) [74,75].
However, under certain conditions, localization accuracy may fall below the required thresholds, and autonomous robotic systems must then rely on machine vision and indoor positioning systems, such as adaptive Monte Carlo localization and laser scanners [76]. These two technologies are widely used and commercially available. In-row weed control is mainly done by the four conventional weed control methods, i.e., electric, chemical, thermal, and mechanical weed control. Currently, weed detection and identification is the most challenging issue. Several studies have addressed this issue, with detection accuracies varying from 60 to 90% under ideal test conditions [72]. Thanks to extensive remote sensing technologies and data processing software, the development of weed maps has become a reality and, together with machine vision, a powerful tool for weed detection.
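As a toy illustration of the machine vision side of weed detection, the following sketch segments vegetation from soil in an RGB frame using the excess-green index, a common first step before classifying detected plants as crop or weed. The threshold, the random placeholder image, and the function name are illustrative assumptions and do not describe any of the cited systems.

```python
import numpy as np

def excess_green_mask(rgb: np.ndarray, threshold: float = 0.1) -> np.ndarray:
    """Segment vegetation pixels from soil using the excess-green index
    ExG = 2g - r - b computed on chromaticity-normalized channels."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=2) + 1e-9               # avoid division by zero
    r, g, b = (rgb[..., i] / total for i in range(3))
    exg = 2.0 * g - r - b
    return exg > threshold                        # True = likely plant pixel

# Illustrative usage with a random placeholder "image"; in practice a camera
# frame from the robot's vision system would be used instead.
frame = np.random.randint(0, 255, size=(480, 640, 3), dtype=np.uint8)
mask = excess_green_mask(frame)
print(f"Vegetation pixels: {mask.mean() * 100:.1f}% of the frame")
```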
Some of the most representative autonomous agricultural robotic systems include the robotic system for weed control in sugar beet developed by Astrand et al. (2002) [77]. The robot consisted of two vision systems used for crop guidance and detection, and a hoe for weed removal. A front camera with two-row detection at 5-meter range and a near-infrared filter was used for row detection and navigation, while a second color camera mounted inside the robot was used for weed detection. Initial trials showed that color-based plant detection was feasible and that the robot’s subsystems could functionally work together. The BoniRob autonomous multipurpose robotic platform (see Figure 2), with a wavelength-matched illumination system for capturing high-resolution image data, was used for weed detection and precision spraying [78], and for ground intervention measurements [79]. Autonomous navigation along crop rows was achieved using 3D laser scans or global navigation satellite systems (GNSS). Lamm et al. (2002) developed a robotic system for weed control in cotton fields that is able to distinguish weeds from cotton plants and precisely apply herbicides. A machine vision algorithm was used to determine the diameter of the inscribed leaf circle to identify the plant species. Field tests showed a spray efficiency of 88.8 percent [80].
Similarly to Astrand et al. (2002) [77], Blasco et al. (2002) [81] developed a weed control robot with two machine vision systems. The machine vision systems are used separately, one for in-row navigation and the other for weed detection. Precise targeted weeding was performed with an end-effector that emitted an electrical charge [81]. Bawden et al. (2017) [82] developed an autonomous robot platform with a heterogeneous weeding array. The weeding mechanism is based on machine vision for weed detection and classification, together with a weeding array that combines precision spraying and hoeing methods for weed destruction [82].
As can be seen, robotic technologies are changing current practices in agricultural technology, particularly in autonomous weed control. The steady increase in research and development in this area will inevitably have a significant impact on traditional agricultural practices.

2.2.2. Examples from Forestry

Timber harvesting is physically demanding and risky, as forest workers often work manually and are exposed to heavy and fast-moving objects such as trees, logs, and dangerous machinery. Over time, timber harvesting has become more mechanized to increase worker safety, productivity, and environmental sustainability.
In the context of increasing productivity through machine use, Ringdahl (2011) [83] found that human operators can become a bottleneck because they cannot work as fast as the potential capacity of the machines allows. In trafficable terrain, harvesters and forwarders represent the highest level of mechanization, and they are basically manually controlled by a human operator using joysticks. One way to overcome this human limitation of machine capacity is to change forest working methods in such a way that human activities are reduced to a minimum or are no longer required, as in autonomous vehicles [84]. While autonomous robotic systems are already being used in controlled workspaces such as factories or in simple agricultural environments, the use of autonomous machines in more complex environments, such as forests, is still in the research and development stage. One of the biggest challenges is on-the-fly navigation in the forest.
The most common approach for autonomous navigation in open terrain, such as in agriculture, is based on global navigation satellite systems (GNSS). However, GNSS signal absorption by the forest canopy leads to position errors of up to 50 m and more, which requires other solutions independent of the GNSS signal [85]. In addition to localizing the forest machine’s own position, the complex terrain and obstacles such as understory, and above all, trees, must also be taken into account when navigating autonomously in forests. In recent years, methods in the field of remote sensing have increasingly been developed to generate digital twins of forests based on terrestrial, airborne, or spaceborne sensor technologies. Gollob et al. (2020) [86] showed that personal laser scanning (PLS) is able to capture and automatically map terrain information, individual tree parameters, and entire stands in a highly efficient way. These data can serve as a navigation basis for autonomous forest machines and for optimized operational harvest planning [85].
Rossmann (2010) [85] showed that an initial guess of the forest machine position can be made using an “imprecise” GNSS sensor. 2D laser scanners or stereo cameras on the forest machine (e.g., [87,88,89]) can detect tree positions in the near neighborhood of the machine (local tree pattern). The position of the forest machine can be determined efficiently and precisely by means of tree pattern matching between the stand map (global tree pattern from, e.g., PLS) and the local tree pattern [85]. The initial guess of the machine position with GNSS helps to make the pattern matching more time efficient.
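The simplified sketch below illustrates the idea of tree pattern matching for localization: starting from a rough GNSS guess, a locally scanned tree pattern is aligned to a global stand map by scoring candidate machine positions with nearest-neighbor distances. Rotation is ignored, and the synthetic data, search grid, and noise levels are assumptions for illustration; the methods in [85] are considerably more sophisticated.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Global stand map: tree positions (x, y) in meters, e.g., from personal laser scanning.
stand_map = rng.uniform(0, 200, size=(300, 2))
tree_index = cKDTree(stand_map)

# Simulate the machine's true position and a local scan of nearby trees
# (tree positions relative to the machine, as a 2D laser scanner would deliver).
true_pos = np.array([120.0, 80.0])
near = stand_map[np.linalg.norm(stand_map - true_pos, axis=1) < 25.0]
local_pattern = near - true_pos + rng.normal(0, 0.2, size=near.shape)  # scan noise

# Rough GNSS guess with a large error, as expected under a dense canopy.
gnss_guess = true_pos + rng.normal(0, 15.0, size=2)

def score(position):
    """Mean distance between shifted local trees and their nearest stand-map trees."""
    dists, _ = tree_index.query(local_pattern + position)
    return dists.mean()

# Grid search around the GNSS guess; the best position minimizes the score.
candidates = [gnss_guess + np.array([dx, dy])
              for dx in np.arange(-20.0, 20.5, 0.5)
              for dy in np.arange(-20.0, 20.5, 0.5)]
best = min(candidates, key=score)
print("Estimated machine position:", np.round(best, 1), "true position:", true_pos)
```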
Regardless of the challenging navigation, research is also being done on the type of locomotion of autonomous forest machines. Machines that seemed futuristic just a few years ago are already in use or available as prototypes. For example, the concept of animals moving slowly from branch to branch was used by New Zealand scientists and engineers to build a tree-to-tree locomotion machine (swinging machine) [84] (see Figure 3). To date, the swinging harvester has been radio-controlled—but given the challenges outlined above in terms of navigation and sensor fusion, the path to an autonomous, soil-conserving forestry machine is mapped out.

2.3. Automated AI Systems

2.3.1. Example from Agriculture

As previously mentioned, autonomous AI systems are relatively advanced and are constantly being developed. These developments and upgrades lead to higher machine efficiency. In addition, more and more systems in modern tractors and harvesters are becoming fully automated to minimize the operator’s workload. The two most important domains of automation are situation awareness and process monitoring. For example, machine vision guidance systems are already widely used in modern tractors and harvesters, allowing the machine to automatically align itself with the harvest line without the operator’s help, so that the operator can focus on other processes in the meantime [90]. Infrared 3D camera systems on harvesters continuously monitor and control bin placement while allowing the operator to focus on the harvesting process [91].
Process monitoring is particularly pronounced in harvesting operations, where speed must be constantly controlled and adjusted according to the operation being performed [92]. The precise application of fertilizers and herbicides is also usually monitored and controlled automatically throughout the process. For this purpose, data from a global navigation satellite system (GNSS) as a guidance system with real-time sensor technology (e.g., Claas Crop Sensor [93]) are communicated among the tractor, the application device, and the task controller via the terminal, which has been done for some time [94].

2.3.2. Examples from Forestry

Cable-yarding technologies are the basis for efficient and safe timber harvesting on steep slopes. To guarantee low harvesting costs and low environmental impacts on remaining trees and soil, the machine position and cable road must be carefully planned. For this planning, usually only imprecise information about the terrain and the forest stands is available. If stand and terrain information were collected with traditional measuring devices such as calipers, hypsometers, and theodolites, these measurements would be labor intensive, time consuming, prone to various errors, and thus limited in their spatial and temporal extent [95].
Thus, the cable road layout is still determined by experts based on rules of thumb and empirical knowledge [96]. Rules for this are formulated, for example, in Heinimann (2003) [97]. Automatic methods (optimization methods) to solve this problem have already been formulated in countries such as the USA and Chile [98,99,100]. However, these optimization or planning methods are largely based on the assumption of clear-cutting and do not use modern sensor technology to capture individual tree and terrain data. High-resolution 3D data in the form of a digital twin of the forest, combined with well-known optimization functions and expert knowledge, is therefore a key factor in optimizing the planning of timber harvesting. In this way, automatically optimized cable road planning can help to minimize the environmental impact and the costs of cable yarding (e.g., [96,101]).
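As a deliberately simplified illustration of what automated cable road layout optimization can look like, the sketch below greedily selects corridors from a hypothetical candidate set so that all timber cells are reachable at low total cost. The candidate corridors, costs, covered cells, and the greedy set-cover heuristic are all assumptions for illustration; the planning methods cited above (e.g., [96,101]) rely on far richer terrain, stand, and machine models.

```python
# Hypothetical candidates: each cable road corridor has a setup cost and a set
# of timber cells (grid indices) it can reach with lateral yarding.
candidates = {
    "corridor_A": {"cost": 900.0, "covers": {1, 2, 3, 4, 5}},
    "corridor_B": {"cost": 700.0, "covers": {4, 5, 6, 7}},
    "corridor_C": {"cost": 650.0, "covers": {7, 8, 9}},
    "corridor_D": {"cost": 500.0, "covers": {2, 3, 8}},
}
all_cells = set().union(*(c["covers"] for c in candidates.values()))

# Greedy set-cover heuristic: repeatedly pick the corridor with the lowest
# cost per newly covered cell until every cell can be reached.
selected, covered = [], set()
while covered != all_cells:
    name, cand = min(
        ((n, c) for n, c in candidates.items()
         if n not in selected and c["covers"] - covered),
        key=lambda item: item[1]["cost"] / len(item[1]["covers"] - covered),
    )
    selected.append(name)
    covered |= cand["covers"]

total_cost = sum(candidates[n]["cost"] for n in selected)
print("Selected corridors:", selected, "total setup cost:", total_cost)
```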
In terms of cable yarding, there are also other examples of automation that are already being used in practice. Most cable yarding systems follow a typical scheme (work phases) of unloaded travel out, load accumulation, loaded travel in, and dropping the load at the landing. Two of these phases, unloaded and loaded travel, have been automated; thus, the operator can work in the meantime with an integrated processor [88]. Pierzchała et al. (2018) [102] developed a method for automatic recognition of cable yarding work phases by using multiple sensors on the carriage and tower yarder. Further automation steps in cable yarding would be conceivable in the future; for example, the carriage could be equipped with additional sensors, such as laser scanners or stereo cameras, for orientation.

2.4. Assisted AI Systems

2.4.1. Example from Agriculture

Assisted AI systems in agriculture overlap closely with automated AI systems. In agricultural applications, machines can independently perform certain repetitive tasks without human intervention. However, in the decision-making loop, humans are the ones who make the final decisions [103]. For example, a wide variety of non-invasive sensors used in fruit and vegetable processing, e.g., in drying processes, can be merged with AI technologies to control drying processes and changes in the shape of vegetables and fruits, and to predict optimum drying process parameters [104]. Several systems, e.g., situation awareness systems such as machine vision guidance systems, though performing their work automatically, can still be manually overridden by an operator [105]. An example is the precision application of fertilizer and pesticides: sprayers can also work in fully manual mode [90]. In modern tractors, advanced steering control systems can adjust steering performance to suit current conditions [106]. Furthermore, fuel consumption efficiency can be improved with the above-mentioned technologies [107].

2.4.2. Examples from Forestry

Operating forestry cranes requires a great deal of knowledge and experience to be productive with a low impact on the environment. Furthermore, trained forestry machine operators are essential for efficient timber production, in particular to reduce harvesting damage to the remaining trees and reduce machine downtime. Ovaskainen et al. (2004) [108] showed that the productivity of trained cut-to-length (CTL) harvester operators varies by about 40% under similar forest stand conditions. The study hypothesizes that efficiency differences are related to aspects of the operator’s crane experience based on deliberate practice of motor skills, situational awareness, and visual perception.

The state-of-the-art in crane control is the use of two analog joysticks, which are controlled by the two hands and/or the fingers. The joysticks provide electrical or hydraulic signals that control the flow rate of the hydraulic system and thus enable precise movement of the individual hydraulic cylinders. Motor learning is the key to smooth crane movements and harvester head control. Forest machine operators make approximately 4000 control inputs per hour, many of which are repeated again and again but always have to be applied in a targeted manner [109]. Purfürst (2010) [110] found that learning to operate a harvester took nine months on average. Furthermore, the operator must also master decision making and planning in order to achieve an appropriate level of performance. In summary, harvester/forwarder operation, and especially crane operation, is ergonomically, motorically, and cognitively very demanding.

To improve this, existing forestry machines are constantly being improved. Modern sensors (e.g., diameter and length measurement) combined with intelligent data processing can help to assist certain operations, such as processing stems into logs by automatically moving the harvester head to predetermined positions depending on the stem shape and log quality. On the one hand, this reduces the workload of the harvester operator; on the other hand, it optimizes the profit on the timber. Other good examples of how intelligent assistance can make work easier and also more efficient for the machine operator are the intelligent crane control systems from John Deere Forestry Oy (IBC: Intelligent Boom Control [111], see Figure 4), Komatsu (Smart Crane [112]), and Palfinger (Smart Control [113]). In such systems, the crane is controlled via the crane tip (harvester head or grapple), and the crane’s movement is computed automatically by algorithms. The operator therefore does not control individual rams, but only needs to concentrate on the crane tip and control it with the joysticks. The system also dampens the movements of the cylinders and stops jerky load thrusts in the end positions, which enables jerk-free operation. The smart control results in less fatigue for the operator, making them more productive overall. Results from Manner et al. (2019) [114] showed that if the crane is controlled with IBC rather than conventional crane control, the machine working time of the forwarder is 5.2% shorter during loading and 7.9% shorter during unloading. It has already been shown that the use of such a smart crane control system makes it much easier to learn how to operate harvester or forwarder machines [88,115].
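The sketch below captures, in a heavily simplified form, the core idea behind crane tip control: for a planar two-link boom, the tip velocity commanded by the operator's joysticks is resolved into joint velocities through the pseudo-inverse of the kinematic Jacobian. The geometry, gains, and control loop are invented for illustration and do not reflect the implementation of any manufacturer's system such as IBC, Smart Crane, or Smart Control.

```python
import numpy as np

L1, L2 = 4.0, 3.0  # illustrative boom segment lengths in meters

def tip_position(q):
    """Forward kinematics of a planar two-link boom with joint angles q = [q1, q2]."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    """Jacobian of the tip position with respect to the joint angles."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

# The operator commands a tip velocity with the joysticks (here: right and slightly down).
tip_velocity = np.array([0.3, -0.1])  # m/s
q = np.array([0.8, -0.5])             # current joint angles in radians
dt = 0.02                              # control period (50 Hz)

print("Tip starts at:", np.round(tip_position(q), 2))
for _ in range(100):
    # Resolve the commanded tip velocity into joint velocities; the pseudo-inverse
    # keeps the computation well-behaved near singular configurations.
    dq = np.linalg.pinv(jacobian(q)) @ tip_velocity
    q = q + dq * dt

print("Tip after 2 s of commanded motion:", np.round(tip_position(q), 2))
```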

2.5. Augmenting AI Systems

2.5.1. Example from Agriculture

A good systematic classification of AR applications is given by Hurst et al. (2021) [117] (see Figure 5 and Figure 6), who subdivide them by AR type into (1) marker-based, (2) markerless (location-based), (3) dynamic augmentation, and (4) complex augmentation. Based on these four classes, there are many potential applications of AR in agriculture.
Augmented reality (AR) enables the combination of a real environment and an interactive experience, e.g., synthetically generated information. It is necessary to clearly distinguish augmented reality (AR) from virtual reality (VR). Virtual reality uses a synthetically generated environment to replace the real environment, whereas augmented reality enhances the real environment with synthetically generated data [118]. Both technologies have found many applications in a wide variety of domains [119,120,121]. However, agricultural applications require the technology to be even more user-friendly, for example, by replacing smartphones and tablets with smart glasses [122]. This would provide operators with hands-free capabilities and a less constrained user interface. Although there are still some technical and technological drawbacks, smart glasses have emerged as promising potential candidates for the main platform for AR.
Meanwhile, head-mounted AR displays, for example, are being used to help farmers detect plant diseases [123]. Through the head-mounted display camera that observes the plants, images of the plant leaves are captured in real time and sent to a cloud server for analysis, which also provides plenty of room for detecting defects/anomalies. After post-processing of the data in the cloud, the results are transmitted back to the head-mounted display. In this way, an augmented reality head-mounted display allows less experienced farmers to inspect the field more efficiently and quickly in search of infected plants. In addition, such technology can also help train and educate the farmer directly in the field. The farmer’s knowledge remains invaluable, as the expert can contribute their domain knowledge to future machine teaching AI solutions.
AR smart glasses are already used for scanning QR codes during characteristic livestock activities such as feeding and milking [124]. Initial results show that such glasses can provide significant help to farmers by enabling real-time consultation, data collection, data sharing, etc., and they have proved to be a useful tool for herd management and feeding; many more applications will bring benefits to farmers in the future, especially through AI.
AR smart glasses (see Figure 7) [125] have already been implemented to create a system that assists the operator during field operations such as plowing and fertilizing. This can minimize the strain on the operator caused by constantly tracking maps, light bars, etc., which can be especially pronounced in large, irregularly shaped fields. With the help of AR glasses, the operator thus obtains data on their trajectory, speed, etc., which is superimposed on the data of the treated surfaces and largely simplifies the operator’s work. It should be mentioned that this system can be used both inside the machine and outside it. This system has proven to be very useful in fertilizing, spraying, and plowing operations.
Besides the above-mentioned examples, augmented reality-related research is on the rise, mainly focusing on greenhouse management [126], weed management [127], and farmer management support [128].

2.5.2. Examples from Forestry

Automated and assisting systems combined with a variety of modern sensors increase the productivity and quality of forest work. Even if we are far from using the full potential of smart technology in forest machines, the operator already receives a large amount of information compared to traditional machines. In addition, forestry machines are increasingly networked with each other (e.g., harvesters and forwarders) and can also retrieve or exchange information from the Internet. On the one hand, this abundance of data helps to train autonomous, automatic, or assisting processes; on the other hand, it poses a challenge in conveying the information to the operator. In addition to the many benefits of “big data”, negative effects can also occur in the form of divided attention, information overload, and operator stress. A further increase in the information burden on the operator can potentially lead to a higher chance of human errors that could harm not only the operator, but also people, machines, and objects in the vicinity of the machine [129,130,131]. Augmented reality, which matches data generated by sensors and algorithms with the user’s actual perception of the environment, can help to provide this information better and more efficiently, improving the operator’s awareness of the machine, the environment, and the specific work progress. Sitompul and Wallmyr (2019) [129] identified two types of operation in forestry for which augmented information could be provided: in-cabin operation and remote operation. Traditionally, heavy machines, such as forestry machines, are operated from the cab. In recent years, there have been repeated efforts and studies to move the operator from working in the cab to working at a remote control station, which is called teleoperation. With regard to cab work, Kymäläinen et al. (2017) [132] and Aromaa et al. (2020) [133] have proposed to display technically caused visual obstructions of the forestry machine (e.g., the crane) transparently on a display.
The hidden areas (e.g., behind the crane) can be seen with the help of cameras, which increases safety for the surrounding area. In addition to classic screens, there is also the option of projecting information directly into the operator’s field of view via a head-up display. For example, the operator can be provided directly with information about the bucking optimization of the harvester without having to look at a separate monitor [134]. Teleoperation mainly minimizes hazards for the operator. The biggest challenge in teleoperation is sufficient visibility, as the fields of view of the cameras attached to the machine are limited [135]. Furthermore, depth perception is lost with the most commonly used 2D displays, which makes it difficult to position the crane exactly and grip logs.
For timber loading onto trucks, Hiab (HiVision [136], see Figure 8) and Palfinger (Virtual Drive [137]) each offer a control system for forestry cranes. It allows the operator to control the crane from the truck cab while monitoring the environment with the help of a camera and VR goggles. The advantages of the system are that the truck driver no longer has to leave the cab, thereby avoiding hazards, and the operator is ergonomically positioned.
Due to the high cost of machinery and the dangerous nature of the work, great importance must be placed on the training of forestry machine operators. Thus, in addition to supporting operational work, augmented reality also offers advantages in education and training. With realistic simulations of forestry machines [138] and forests [139], a student can be trained in a safe environment in all work processes that occur in real operations.

3. AI Branches of Future Interest: Frontier Research Areas

To achieve practical success in agriculture and forestry, we identify three important AI frontier research areas (see Figure 9): (1) intelligent sensor information fusion, (2) robotics and embodied intelligence, and (3) augmentation, explanation, and verification technologies for trusted decision support. Such developments need novel, agile, human-centered design approaches with three generations (HCD-3G):
  • Generation 1: Enabling an easily realizable application through immediate deployment of existing technology which can be solved at the “bachelor level”.
  • Generation 2: Medium-term modification of existing technology, which can be solved at “master level”.
  • Generation 3: Advanced adaptation and evolution going beyond state-of-the-art at “doctoral level and beyond”.
In the following chapter we describe some open challenges and future research issues from which we expect three forms of added value to arise: (1) contributions to the international research community, (2) theoretical contributions beyond the state-of-the-art, and (3) practically useful contributions that ultimately add value for users in agriculture and forestry.

3.1. Intelligent Sensor Information Fusion

Smart applications in agriculture are characterized by the gathering of different types of data with adequate sensors: soil, plant, animal, and environment sensors are used together for farm monitoring and management purposes [140]. There is a plethora of possibilities for the use of sensors that monitor physical and chemical signals, such as temperature, moisture, pH, and pollutants, in real time. One definition of multi-sensor fusion can be found in [141]: the same phenomenon or entity, described by an unknown random variable, is observed by different types of sensors. Through comparison, aggregation, and combination of several types of sensor data, which are error-prone in general, the uncertainty about the random variable is decreased. Sensors, particularly in agricultural and forestry applications, are exposed to harsh and greatly varying environmental conditions and therefore cannot deliver error-free measurements like those obtained in a protected laboratory environment. In every step of the pipeline, including sensor data gathering, transfer, and storage, different types of faults can occur [142,143]. Adequate quality management (QM) measures [144] need to be implemented—both human-supervised and automated—to ensure reliable outlier analysis and fault detection (FD) [145] as early as possible.
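As a small example of what an automated quality management step can look like, the sketch below flags out-of-range values and implausible jumps in a stream of synthetic soil moisture readings before they enter any fusion step. The valid range, jump limit, and data are illustrative assumptions.

```python
import numpy as np

def flag_faults(values, valid_range=(0.0, 100.0), max_jump=15.0):
    """Return a boolean mask of suspicious readings: values outside the physical
    range or jumping implausibly fast between consecutive samples."""
    values = np.asarray(values, dtype=float)
    out_of_range = (values < valid_range[0]) | (values > valid_range[1])
    jumps = np.abs(np.diff(values, prepend=values[0])) > max_jump
    return out_of_range | jumps

# Synthetic soil moisture series (% volumetric water content) with injected faults;
# note that samples adjacent to a spike are also flagged by the jump test.
readings = [32.1, 32.4, 31.9, 120.0, 32.2, 32.0, 55.0, 32.3]
mask = flag_faults(readings)
print("Suspicious samples at indices:", np.flatnonzero(mask).tolist())
```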
Sensor fusion helps not only with error detection but also with the discovery of new insights, using a variety of algorithms that essentially compare the information content of several types of sensor data [146]. Data are typically combined under particular constraints, and at the same time, several consistency conditions between them can help artificial intelligence systems to automate data preprocessing to a certain extent. In the work of Lee et al. (2010) [147], thermal and RGB images were fused on the basis of similarity measures. Beyond raw data fusion, features that characterize the input data can be extracted, fused, and processed by individual AI algorithms, in parallel to the ones processing the raw data [148]. The insights from both systems can then be combined in a weighted or voting fashion, which generally provides better resulting performance, since each part grasps different aspects of the data [149,150]. Clustering methods and hierarchical self-organizing maps (SOM) are also used to detect intercorrelations between the data to enable an even more effective fusion process [151]. In conjunction with novel wearable sensors [152], these approaches can provide tremendous benefits for smart farming and smart forestry, providing important information to optimize plant growth, combat biotic and abiotic stressors, and increase crop yields.
Beyond sensors, there are also other sources where data can be gathered, such as robots, handheld devices, drones, airborne laser scan data, and weather satellite data; we see enormous added value for a wide range of applications. Machine learning is an excellent way to achieve sensor fusion effectively. A detailed review of state-of-the-art methods can be found in [153]. Least squares support vector machine (SVM) classifiers [34] (see also Section 1.3 about supervised learning) were some of the first that dealt with different types of agricultural data that might have different ranges and be sampled at completely different timepoints.
  • Open Challenge G1: “Garbage in, garbage out”—this motto of data scientists reminds us of the fundamental property of information entropy, as defined by Claude Shannon [154,155]. Since information that is lost can never be fully and perfectly reconstructed—in the best case only approximated—one of the first and foremost goals is to make sure that the sensor information is gathered with as few problems as possible. The sensors’ data-gathering capability under difficult environmental conditions, and the continuous, (near-)real-time operation of both the sensors and the servers, need to be ensured. The quality and sensitivity of a sensor are defined at production but need to be verified in practice. Together with human experts, the gathered data (from both sensors and satellites) need to be analyzed, and their individual characteristics need to be specified.
  • Open Challenge G2: Human experts have the capability to recognize abnormal behavior in sensor data through adequate visualizations and statistics. Fault detection software, on the other hand, has to rely on anomalies in the data, which means that (1) the outliers need to be “rare” (relative) and (2) the values of different sensors need to be compared with each other. What the vast majority of the sensors track is roughly decisive for characterizing the whole pipeline of data gathering, preprocessing, and transfer, and for deciding whether a fault lies in a few of the sensors, in all of them, or even in the whole data transfer process.
  • Open Challenge G3: A reliable real-time data fusion system that has the intelligence to know when it is advisable to fuse different sensor data. If data from the majority of the different sensor sources are inconsistent, the fusion should rather be discarded, since this indicates a fault in the data pipeline. Ensuring that the gathered data lie in an acceptable range, are consistent with each other, and roughly obey expected physical relationships (e.g., radiance, or humidity with respect to temperature) can open the path to successful and insightful information fusion (see the sketch below).
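A minimal sketch of such a plausibility gate before fusion is given below. The sensor names, plausible ranges, the cross-sensor rule, and the 50% threshold are hypothetical values that would have to be set together with domain experts; the sketch only illustrates the idea of discarding fusion when too many sources are inconsistent.

```python
import numpy as np

# Hypothetical plausible ranges per sensor (assumed values, to be set by domain experts)
PLAUSIBLE = {"air_temp": (-30.0, 50.0), "humidity": (0.0, 100.0), "radiance": (0.0, 1500.0)}

def fusion_is_advisable(readings: dict, max_inconsistent_fraction: float = 0.5) -> bool:
    """Return False when so many sensors are out of range (or mutually implausible)
    that a fault in the pipeline is more likely than a real phenomenon."""
    out_of_range = [name for name, value in readings.items()
                    if not (PLAUSIBLE[name][0] <= value <= PLAUSIBLE[name][1])]
    # Simple cross-sensor rule: near-saturated humidity with very high radiance is implausible
    inconsistent_pairs = int(readings["humidity"] > 95 and readings["radiance"] > 1000)
    n_bad = len(out_of_range) + inconsistent_pairs
    return n_bad / len(readings) < max_inconsistent_fraction

print(fusion_is_advisable({"air_temp": 21.5, "humidity": 63.0, "radiance": 820.0}))   # True -> fuse
print(fusion_is_advisable({"air_temp": 99.0, "humidity": 180.0, "radiance": -5.0}))   # False -> discard
```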

3.2. Robotics and Embodied Intelligence

Pieter Abbeel puts it in a nutshell in his presentations, e.g., at CVPR 2021: the hardware in robotics is there; it is capable and usable. The real challenge is that the robots lack intelligence, which is defined by the underlying software. This motivates why we do not focus on hardware aspects but instead take existing hardware and work on “bringing the intelligence into it”.
Combining human capabilities with robotics and embodied intelligence is a promising approach that is still largely unexplored and has enormous potential. Teams of robots and humans could exceed their individual performances by combining the robot’s motor, vision, and computation capabilities with the perceptiveness and deep understanding of humans. To achieve this goal, several current approaches have to be built upon and combined.
For one, it is essential to bring RL agents up to a certain base performance level before their interaction with humans begins, in order to use humans’ time efficiently and to avoid frustration. Agents can be trained with synthesized or already gathered data under simulated conditions. Deviations from real conditions are to be expected, but the simulations can nevertheless provide an upper bound on expectations and also help define minimal requirements for on-site operation.
To recognize whether the agent has reached this acceptable base level, and since many RL problems are currently solved by deep RL techniques [37,156,157], performance metrics and explainable AI (XAI) techniques can be used [158]. These techniques can reveal the underlying reasons for the decision-making process of a deep RL agent and help humans understand which parts of the current state were most relevant for an action. Furthermore, they help humans to understand the agent’s overall strategy and answer “what if” questions [17]. The robot can then efficiently learn through interaction with a human, either from human demonstrations or via preference-based learning [159]. Ultimately, the ongoing team-up can benefit from active learning approaches where edge cases are caught and solved by the human, also with the use of XAI methods, helping the robot to learn from those unknowns and generalize better in the future.
In order to enable a successful and efficient interaction between humans and RL agents, models need to have sufficient cognitive capacities for judging and further improving their behavior. A good example for this requirement is preference-based learning [159].
In preference-based learning, a robot presents two options (such as movement policies) to a human operator, who selects a preference and thereby facilitates the challenging reward-specification process. For this to be effective, it is beneficial if the robot presents two “meaningful”, coherent movement policies instead of the jittery behavior seen in newly instantiated models [160]. This exemplifies a more general statement: it is easier to judge the consistency of an already trained model, because it has developed past the initial noise of random initialization.
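The core of preference-based reward learning can be illustrated with a Bradley–Terry-style model: the probability that the human prefers segment A over segment B is modeled as a sigmoid of the reward difference. The following minimal sketch fits such a reward model from pairwise preferences; the synthetic features, the linear reward model, and the optimizer settings are assumptions, and this is not the exact algorithm of [159,160].

```python
import torch

# Each comparison: features of two trajectory segments and a human preference (1 = first preferred).
# The feature vectors and labels below are synthetic placeholders.
torch.manual_seed(0)
true_w = torch.tensor([1.0, -2.0, 0.5])
seg_a = torch.randn(256, 3)
seg_b = torch.randn(256, 3)
prefers_a = (seg_a @ true_w > seg_b @ true_w).float()

w = torch.zeros(3, requires_grad=True)          # parameters of a linear reward model r(s) = w . phi(s)
opt = torch.optim.Adam([w], lr=0.05)
for _ in range(500):
    # Bradley-Terry model: P(A preferred over B) = sigmoid(r(A) - r(B))
    logits = seg_a @ w - seg_b @ w
    loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, prefers_a)
    opt.zero_grad(); loss.backward(); opt.step()
# The learned reward model can then replace a hand-crafted reward in downstream RL training.
```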
One approach is pre-training, where models learn, in a mostly self-supervised fashion, to reconstruct images from masked inputs. A drawback of this approach is that the model inevitably learns the bias and noise of the given samples, which hampers the goal of learning noise-invariant concepts [161]. Contrastive pre-training extends this approach by masking or modifying (cropping, rotating, recoloring, etc.) the samples before reconstructing them, adding noise to the training process and leading to more robust generalization. A concrete example of this is the combination of contrastive pre-training and data-augmented RL, as proposed by Srinivas et al. [162]. To ensure that agents can benefit from this pre-training, the exploration–exploitation dilemma has to be tackled. Especially when training in an unsupervised fashion, the agent can get stuck in the exploitation phase without having sufficiently explored the available states. Liu and Abbeel [163] proposed a state-entropy reward to ensure better state coverage with unsupervised pre-training.
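To make the idea of contrastive pre-training with data augmentation more concrete, the following minimal PyTorch sketch combines a random-crop augmentation with an InfoNCE-style contrastive loss. It is a generic illustration in the spirit of [162], not their implementation; the encoder, crop size, and temperature are arbitrary assumptions.

```python
import torch
import torch.nn.functional as F

def random_crop(batch, out=56):
    """Random-crop augmentation, as commonly used on pixel observations in data-augmented RL."""
    n, c, h, w = batch.shape
    ys = torch.randint(0, h - out + 1, (n,))
    xs = torch.randint(0, w - out + 1, (n,))
    return torch.stack([img[:, y:y + out, x:x + out] for img, y, x in zip(batch, ys, xs)])

def info_nce(query, key, temperature=0.1):
    """Contrastive (InfoNCE) loss: each query should match its own key among all keys in the batch."""
    q = F.normalize(query, dim=1)
    k = F.normalize(key, dim=1)
    logits = q @ k.t() / temperature
    labels = torch.arange(q.size(0))
    return F.cross_entropy(logits, labels)

encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 56 * 56, 128))
obs = torch.rand(32, 3, 64, 64)                       # a batch of camera observations
loss = info_nce(encoder(random_crop(obs)), encoder(random_crop(obs)))
loss.backward()                                       # pre-trains the encoder before RL fine-tuning
```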
A second approach is to transfer knowledge between networks before fine-tuning them for a certain task. Xu et al. [164] demonstrated this by first learning knowledge offline from task-specific teachers and then continuing with online learning to further improve the results. This also exemplifies how a network is brought up to speed before being further trained in a continuous, online fashion.
A central challenge in reinforcement learning is finding a good goal function, which can be extremely laborious and still fail to consistently capture the intended goal [160]. To work around explicitly stating the goal function, demonstration learning leverages human demonstrations of an exemplary solution to a task to teach the RL agent. The agent can then extract behavioral priors (capabilities) by fitting generative models to a large offline dataset of demonstrations. This, however, often requires a large number of samples to achieve good results. Lin et al. [165] improved upon this approach by using sample-efficient demonstrations that help agents explore better during training and by deriving the rewards from the given demonstrations.
However, a drawback of demonstration-based generative models is that they inherit perturbations in the raw data and can therefore acquire unusable skills. To better match skill extraction to human intentions, Pieter Abbeel’s group recently introduced the skill preferences (SkiP) approach, an algorithm that learns a model of human preferences. After extracting human-preferred skills, SkiP also uses human feedback to solve downstream tasks with reinforcement learning. This has recently been used to solve complex, multi-step manipulation tasks [166]; see Figure 10.
The general idea of “learning behavioral priors with human feedback” (skill extraction; see (a) in Figure 10) is to use human preferences in order to fit a weighted behavioral prior over an offline dataset of (potentially noisy) demonstrations. SkiP builds on prior work for behavioral extraction from offline data via expected-maximum-likelihood latent variable models, e.g., as done in the OPAL approach [167]. They considered a parametric generative model $p_{\alpha}(\mathrm{actionseq}_t \mid s_t)$ over action sequences, where
$$\mathrm{actionseq}_t = (a_t, \ldots, a_{t+H-1}),$$
which represents a behavioral prior trained to replicate the transition statistics of the offline dataset $\mathcal{D}$:
$$p_{\alpha} \leftarrow \arg\max_{\alpha} \; \mathbb{E}_{\tau \sim \mathcal{D}} \left[ \sum_{t=0}^{T-1} \log p_{\alpha}(a_t \mid s_t) \right],$$
where $T$ denotes the length of the trajectory $\tau$.
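As a purely illustrative sketch of fitting such a behavioral prior by maximum likelihood (this is not the OPAL or SkiP implementation; the network size, the Gaussian parameterization, and the synthetic offline data are assumptions), one could write:

```python
import torch

# Offline dataset of (state, action) pairs from demonstrations (synthetic placeholder)
torch.manual_seed(0)
states = torch.randn(1024, 8)
actions = torch.tanh(states @ torch.randn(8, 2)) + 0.05 * torch.randn(1024, 2)

# p_alpha(a | s): a Gaussian whose mean is predicted by a small network, with a learned log-std
mean_net = torch.nn.Sequential(torch.nn.Linear(8, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))
log_std = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam(list(mean_net.parameters()) + [log_std], lr=1e-3)

for _ in range(200):
    dist = torch.distributions.Normal(mean_net(states), log_std.exp())
    loss = -dist.log_prob(actions).sum(dim=-1).mean()   # maximize sum_t log p_alpha(a_t | s_t)
    opt.zero_grad(); loss.backward(); opt.step()
# The trained prior can then propose coherent action sequences for preference queries or skill extraction.
```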
A third approach we want to highlight is active learning, in which the agent actively queries the user about unknown data points, often guided by heuristics that estimate uncertainty for sample selection. This has proven to be an efficient method for training semi-supervised models at far lower labeling cost. Fang et al. [168] presented such an active learning approach for deep learning, framing the sample selection process as an RL problem and thereby generating a transferable selection heuristic. Rudovic et al. [169] showed how active learning can also be used for fine-tuning by quickly personalizing classifiers on multi-modal user data.
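A minimal uncertainty-sampling loop, sketched below with scikit-learn, illustrates the basic active learning cycle described above. It is not the method of [168] or [169]; the synthetic data, the initial labeled pool, and the number of query rounds are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y_true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # hidden labels, revealed only on query

# Small initial labeled pool containing both classes
labeled = list(np.where(y_true == 0)[0][:5]) + list(np.where(y_true == 1)[0][:5])

for round_ in range(5):
    clf = LogisticRegression().fit(X[labeled], y_true[labeled])
    proba = clf.predict_proba(X)[:, 1]
    uncertainty = 1.0 - np.abs(proba - 0.5) * 2        # highest where the model is least sure
    uncertainty[labeled] = -np.inf                     # never re-query already labeled samples
    query = int(np.argmax(uncertainty))                # ask the human expert to label this sample
    labeled.append(query)                              # the "revealed" label enters the training pool
```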
There are many farm management and information systems (FMIS) available on the market. What virtually all of them have in common is that they do not map an entire farm, although terms such as “digital shadow” or “digital twin” can increasingly be found in the literature (e.g., [170,171]). To date, only isolated solutions exist. The future challenge lies in completely mapping entire farm systems, including the merging of all data streams, in order to obtain information for the optimization of individual processes and entire farm systems. In addition, farmers must be enabled to incorporate their expert knowledge into these systems in order to make the individualization of a virtual farm operating system a reality, as shown by Groeneveld et al. (2021) [172] (see Figure 11). One of the research questions is how to include the expert-in-the-loop in the most efficient way.
Throughout the last decades, a wide variety of autonomous agricultural vehicles have been developed, based on different platforms, control units, sensor sets, operational algorithms, etc. [173]. From careful observation of these autonomous agricultural robots, several conclusions can be derived that can serve as guidance for designers. First of all, the most practical transmission system for the wide variety of autonomous agricultural vehicles is automatic transmission, which allows precise control of the performed operations, e.g., harvesting and fertilizing [73]. In order to control the processes performed by autonomous agricultural vehicles, each vehicle has to have access to the engine control unit. Furthermore, each autonomous vehicle must have a computer that receives data from the sensor set and sends commands to the actuators. Navigation is mainly performed by GNSS-based systems. Last but not least, controlling algorithms should be as simple as possible [174].
We propose several research directions that are beneficial for successful human–machine interaction in the scope of human-centered AI in embodied intelligence. A fundamental aspect that has to be taken into account is that generating trust in the system is essential for a fruitful interaction between the end-user and the robot. This trust is built by making the end-user understand the perception and decision-making process of the robot agent, which necessitates a focus on explainability.
Current explainability research, however, focuses mostly on explanations for experts and system developers, and less on the end user [175]. We suggest that further research is required into what human- (and end-user-) centered AI systems could look like, focusing on aspects such as real-time computation, understandability, explanations for lay persons, and predictability of robot behavior in general. First-generation tasks can focus on evaluating how current xAI approaches work and on generating requirements for explainability from an end-user perspective. Second-generation tasks could then focus on testing and evaluating different approaches that provide better explainability within the given constraints. The high-level, third-generation approach is to develop a coherent framework of requirements and solutions providing explainability for embodied intelligence, collecting and exposing the developed xAI solutions. This also encompasses guiding the development of approaches in the second generation, ensuring even coverage of this multi-faceted explainability challenge.
Such a process eventually allows us to provide a comprehensive set of tools that developers of robotic systems could easily use for facilitating their human–robot interactions with strong explainability approaches.
  • Open Challenge G1: Before even starting on-site operations, several experiments with synthetic data under simulated conditions must be completed. The scope of the challenges needs to be documented, along with the basic elements of the RL problems that will be faced down the line. Basic thoughts about the state and action space of the problems at hand, the reward strategy, the obstacles, and the limitations need to be examined. The feasibility of the solution, its scope, computational resources, and time resources need to be defined. The first simple prototypes in the laboratory that use deep RL need to work efficiently.
  • Open Challenge G2: The next level must incorporate stronger modeling and testing, where both the agent’s and the environment’s characteristics are represented by state-of-the-art AI software. The robot’s behavior has to take into account that noisy data and edge cases will be encountered. Gradually, it has to move on from the more idealized case of Challenge G1 to real-world data that contain faults, exhibit drifts, and are representative of a much more complex environment. This is planned as an incremental process that will step-wise make the agent capable of dealing with real, on-site situations. The decision-making process of the robot, when confronted with newer, more complex situations, must also be highlighted through XAI methods. By that means, human experts can check whether the robot is following plausible principles or relies on Clever Hans correlations [176].
  • Open Challenge G3: The milestone goal of embodied intelligence for agricultural applications will be for robots to be able to perform the required tasks under real conditions that are far more complex than those encountered in the simulations of Challenge G1. At this level, the robot is well capable of recognizing when it can act autonomously, providing an understandable explanation of its decisions to the human with the help of XAI, learning from human feedback, and also enhancing the human’s expert knowledge, since the robot might discover new solutions to the encountered problems, as in the game of Go [38].

3.3. Augmentation, Explanation, and Verification Technologies

Visualization methods and augmented reality (AR) are also becoming more sophisticated thanks to advances in AI. What is essential, however, is that these technologies are now very widespread and affordable, and are therefore used in a wide variety of application situations as tools for decision support [177]. Domain experts can be provided with contextual and relevant information in an unobtrusive way (AR glasses for forestry technology). In this context, a new technology trend is very important: situated visualization (SV), which can be seen as the presentation of information in its semantic context [178].
However, the presented data need to be filtered so as not to overwhelm the user. In order to achieve the best possible visualization for a user, Julier et al. (2000) [179] proposed an information filtering approach whereby the information is prioritized according to the user’s needs. In addition, the user should be able to interact with the presented data. The full potential of this interaction can be exploited with the use of AR. For example, the data could be automatically filtered using the user’s context [180]. Moreover, the AR experience could be enhanced by deriving the user’s context from the objects the user interacts with in the physical world.
AR technologies are very affordable and widespread, due to the fact that smartphones and tablets can be used as mobile displays. Head-mounted see-through displays, such as the Microsoft HoloLens and Google Glass, provide direct observation of the physical world while displaying virtual objects. They are easy to use, lightweight, and even allow the user to operate with both hands.
Google Glass is based on Android, and therefore allows the user to install applications from the Google Play Store [181]. The new Google Glass Enterprise Edition 2 allows the wearer to communicate through video with other people and let them experience the viewpoint of the wearer [182]. Furthermore, AR can be used as a tool for SV, whereby the information is visualized close to the location of the physical object [183].
Another effective way to approach information filtering is to use AI software for real-time anomaly and fault detection [145]. Farmers and foresters are mostly interested in being informed about abnormalities in their everyday working processes so that they can intervene as soon as possible. This can be done gradually with intelligent AI software that uses the knowledge of the domain expert and the available data to define the characteristics of normal operation vs. the faulty case [184,185]. Information fusion will be highly beneficial here, since a faulty state is usually characterized by anomalous behavior in several data sources at almost the same time. After a period in which such a fault detection (FD) system has proven its reliability and correctness, prioritizing the data leading up to and indicating the fault will greatly help information filtering.
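As a minimal sketch of such an FD component, an anomaly detector can be trained on fused sensor readings from normal operation and then applied to incoming readings in (near-)real time; only flagged readings would be surfaced to the user. The sensor values, the choice of an IsolationForest detector, and the contamination setting below are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Fused feature table: each row is one time step, columns are readings from different sensors
normal_operation = rng.normal(loc=[20.0, 60.0, 800.0], scale=[1.0, 5.0, 50.0], size=(5000, 3))
detector = IsolationForest(contamination=0.01, random_state=1).fit(normal_operation)

def check(reading):
    """Return True if the fused reading looks faulty; flagged cases are shown to the domain expert."""
    return detector.predict(np.asarray(reading, dtype=float).reshape(1, -1))[0] == -1

print(check([20.3, 58.0, 790.0]))   # typical reading -> expected False
print(check([20.3, 58.0, -10.0]))   # implausible third reading -> expected True (likely sensor fault)
```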
  • Open Challenge G1: Create the first visualizations for the multi-modal data gathered from the sensors. Concentrate only on the adequate presentation of the data to domain experts and consider basic usability aspects, without yet developing software for data filtering. Incorporate principles from graphical design and use all the facilities that one can ideally have, such as large monitors, Google Glass, and augmented reality (AR) systems.
  • Open Challenge G2: Work in collaboration with human experts (foresters and farmers) and identify their principal needs, requirements, and expectations regarding an AI solution. Define with their help what their priorities are and which characteristics of the data differentiate normal from abnormal behavior. Use the results from Challenge G1 to show them static visualizations of particular situations, and let them pinpoint, choose, and enhance those visualizations. Define with them use-cases that take into account which information is most important for their decision-making process and how to prioritize and present it so that they are informed precisely and as fast as possible, but without being overwhelmed.
  • Open Challenge G3: Develop a real-time fault detection AI solution that encompasses all parts of the pipeline: (1) Data gathering, preprocessing, and fusion, (2) adequate visualization as implemented for Challenge G2 but now real-time, and (3) real-time fault detection with the use of efficient and explainable AI solutions. This pipeline is not static, since neither the human nor the AI software is perfect, nor can either prepare itself for every possible fault and condition that might occur. The components adapt to new anomalies, user requirements, domain-expert knowledge, and challenges that will arise from the use of more data, more sophisticated XAI methods, and quality management techniques.

4. Human-Centered AI and the Human in the Loop

A human-centered AI approach seeks to promote the robustness of AI algorithms by incorporating a human in the loop, and advocates a synergistic approach to allow humans to control AI and to align AI with human values, ethics, and legal requirements to ensure privacy, security, and safety [12].

4.1. Interactive Machine Learning with the “Human in the Loop”

The central challenges of real-world AI applications lie in the uncertainty of the data (data can be missing, noisy, dirty, unwanted, etc.) and, above all, in the fact that many real-world problems are computationally hard, which makes fully automated approaches difficult or even impossible to apply, or at least makes the quality of their results questionable. Moreover, the complexity of sophisticated machine learning algorithms has deterred non-experts from applying such solutions.
The integration of the knowledge of a domain expert can sometimes be indispensable, and the interaction of a domain expert with the data can greatly enhance the whole machine learning pipeline.
A human expert can (sometimes—not always, of course) bring experience, knowledge, and contextual understanding into the machine learning pipeline, which is invaluable to understanding and solving problems from our everyday world. This is what our best AI algorithms lack to date.
Hence, interactive machine learning (iML) puts the “human in the loop” to enable what neither a human nor a computer could do on their own. This idea is supported by a synergistic combination of methodologies of two areas that offer ideal conditions towards unraveling such problems, human–computer interaction (HCI) and knowledge discovery/data mining (KDD), with the goal of supporting human intelligence with machine intelligence to discover novel, previously unknown insights into data (HCI-KDD approach [186]).
The human-in-the-loop approach is defined as algorithms that can interact with both computational agents and human agents and can optimize their learning behavior through these interactions [45].

4.2. Human-Centered Design

Human tasks change over time, and so do their requirements. In order to meet the vast demand for AI while providing users with the best possible experience, the design of AI technologies and interfaces is crucial. Holzinger et al. (2022) [187] showed that users’ profiles and requirements change continuously. Some users might want to try new AI technologies, whereas others might not. Several different factors can influence users’ trust and therefore the adoption of certain systems. Hence, it is necessary to get to know the potential users and keep their knowledge, experience, and goals in mind when designing systems. For example, a farmer will most certainly be interested in different data than a machine learning expert. They might use technology based on the same model, but they pursue different goals and therefore desire different information or visualizations. Agile software development methods, particularly agile user-centered design methods [188], are increasingly being used in industry and are ideally suited to our outlined research and development approaches. However, these methods still lack usability awareness in their development lifecycles, and the integration of extreme usability methods into agile methods is necessary [189]; this can help to realize the “augmented farmer” or the “augmented forester”, similar to the “augmented pathologist” [190] or the “augmented radiologist” [191].

4.3. Farmer-in-the-Loop

The evaluation of the sustainability of a new technology is becoming vitally important these days. A life cycle sustainability assessment (LCSA) includes a life cycle assessment (LCA) for estimating environmental impacts [192]; life cycle costing (LCC) for assessing economic issues [193]; and finally, a social life cycle assessment (SLCA) [194]. With this holistic assessment, processes can be evaluated over their entire life-spans (e.g., production of raw materials, manufacturing, use, end-of-life treatment, and recycling and disposal of the product). In addition, based on the results, recommendations regarding support for sustainable development can be provided [195]. In the last few decades, assessments of systems have mostly focused on economic and environmental issues, as methods assessing social sustainability are still under development. Hence, the following paragraphs focus on environmental and economic issues regarding the sustainability assessment of AI, specifically the implementation of sensors.
The use of sensors is common practice in AI applications and robotics alike. Precision agriculture relies on sensors to gather the data needed to respond to the temporal and spatial variability of crop production. Remote and proximal sensing are the two most common techniques for gathering temporal and spatial crop information. Proximal sensor readings for determining crop input requirements started in the 1990s [196]. In 1998, the first normalized difference vegetation index (NDVI) sensor reading was used to determine the nitrogen requirements of a Bermuda grass field in Oklahoma [197]. Remote sensing, especially the use of satellites to determine the spatial distribution of nutrients in soil and plants, started in the early 1990s [198]. The use of sensors allows crop input rates to be varied, with positive environmental and economic impacts on crop production. Hotspots such as soil acidification, water eutrophication, and global warming potential (GWP) may be reduced. Profits can also be achieved by using sensor-based technologies. Several studies have demonstrated the sustainability of using sensors for variable-rate input applications in different crops.
Li et al. (2016) [199], for example, utilized a Crop Circle 210 crop reflectance sensor for variable rate nitrogen application (VRNA) in a corn field in Missouri. The amount of fertilizer used decreased by 11% without affecting grain yield. The GWP, soil acidification, and freshwater eutrophication were reduced by 7, 10, 22, and 16%, respectively. In another study, Ehlert et al. (2004) [200] utilized a mechanical sensor for VRNA in a winter wheat field; the results showed a fertilizer reduction of 10–12% without compromising yield or grain quality. Some studies ([201,202]) have demonstrated that the profits from using sensors for VRNA range from 10 €/ha to 25 €/ha, depending on the sensor and the size of the farm, with greater benefits for farms larger than 250 ha. The use of sensors for AI and machine learning applications can thus bring environmental and economic benefits, as illustrated by precision agriculture technologies. It will be relevant to have decision support tools such as LCA to assess the sustainability of such technologies in the future, and at the same time to make use of such technologies to collect the data needed to perform such assessments. In the long run, AI will most certainly lead to a transformation of entire business practices and industries toward a more sustainable path by simultaneously fostering and facilitating environmental governance [203].
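For illustration, the sketch below computes the NDVI from red and near-infrared reflectance, NDVI = (NIR − Red)/(NIR + Red), and maps it to a per-zone nitrogen rate. The reflectance values, rate bounds, and the simple linear rule are purely hypothetical and are not taken from the cited studies; operational VRNA systems use calibrated, crop-specific algorithms.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red), in [-1, 1]."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-9)

def nitrogen_rate(ndvi_value, max_rate=120.0, min_rate=30.0):
    """Toy variable-rate rule (assumed numbers): apply less N where the canopy is already vigorous."""
    return np.clip(max_rate * (1.0 - ndvi_value), min_rate, max_rate)

zones = ndvi(nir=[0.52, 0.61, 0.44], red=[0.12, 0.08, 0.20])   # NDVI per management zone
print(zones)
print(nitrogen_rate(zones))   # kg N/ha per zone, higher where the vegetation index is low
```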
Despite the strong media interest in the digital transformation in agriculture, implementation in practice in the alpine region is not far advanced yet. Limiting factors are high investment requirements, a lack of integration into existing agricultural systems, and the lack of training of stakeholders in the sector [204,205]. The technologies available on the market allow precise intervention in agricultural processes, but usually use only single parameters in online transactional processes (OLTP), which are based on simple linear models [206]. However, agriculture is governed by naturally complex processes, which usually need to be represented using a large number of parameters with nonlinear relationships [207].
With the transition from precision agriculture, where only the variability of the land is considered, to smart farming, which emphasizes the use of complex, structured, and dynamic information and communication processes in farm management, new technologies, such as computer vision, big data, the Internet of Things (IoT), cloud computing, robotics, and artificial intelligence (AI), are entering agriculture. Based on these technologies, digital transformation requires solutions in the sense of online analytical processing (OLAP), which can support decisions based on a wide range of parameters [206,208,209]. Sensor development and information acquisition provide the essential foundation for this. Building on this, precision agriculture faces the challenge of requiring ever higher application precision. Technological developments in automation in agriculture over the past few decades have significantly increased the productivity of agricultural machinery by increasing efficiency, reliability, and precision, and by reducing the cost of production and manual, strenuous field labor [173]. Robotics offers the opportunity to further increase both precision and efficiency [210].
Human–machine interaction always faces safety issues in the context of any type of autonomous navigation. The safety of humans, animals, and objects is a key requirement in the automation of agriculture and forestry. While for some industrial robots, it is sufficient to delineate the workspace, in the agricultural sector, a higher level of safety is required due to the more direct physical interaction [211].
Computer vision, i.e., imaging sensors and methods for analyzing images, is already being used in various areas of agriculture and food production [212,213]. On the one hand, this concerns applications in environment-dependent/dynamic navigation (including safety aspects), and on the other hand, information retrieval for process control and process evaluation.
Machine and deep learning algorithms, mainly in the form of artificial neural networks, are primarily used for the classification, localization, and detection of different plants [214,215]. This requires sufficiently annotated, high-quality training data, which can be immensely costly to produce and have been very limited in (free) availability in the field of precision agriculture [78,216]. Therefore, research is also being conducted on solutions using non-annotated data and semi-supervised learning [217]. In addition, hyperspectral imaging has gained importance in recent years. However, since cost-effective applications are not yet available, research continues on systems with only one measurement point for plant characterization [218]. All of these systems generate large amounts of data. In addition, data from sensors in machines and devices are becoming increasingly available in the context of stationary data (IoT), which can provide additional process/system parameters. Big data technologies, where large amounts of highly varied data are collected and analyzed, provide access to explicit information and can therefore contribute greatly to decision-making processes through modeling and optimization. By combining big data with other external data sources, such as weather or market data, the benefits can be significantly increased. However, gaining knowledge from big data usually requires novel methods and special techniques that are often diverse and complex [219,220].
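A minimal sketch of such a neural plant/weed patch classifier is shown below; the architecture, patch size, and placeholder labels are arbitrary assumptions intended only to illustrate the need for annotated training patches, not a recommendation for any particular network from the cited works.

```python
import torch
import torch.nn as nn

# Minimal CNN for classifying image patches (e.g., crop vs. weed); the architecture is illustrative only.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 16 * 16, 2),
)
images = torch.rand(8, 3, 64, 64)                 # annotated training patches (placeholder data)
labels = torch.randint(0, 2, (8,))                # 0 = crop, 1 = weed (placeholder labels)
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()                                   # an optimizer step would follow in a real training loop
```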
Wolfert et al. (2017) showed that information technologies in agricultural engineering are generally still at an early stage of development. The large amounts of data, which typically do not meet i.i.d. requirements, pose a particular challenge [221]. It is expected that technologization will advance through access to real-time data and real-time forecasts and through combination with IoT developments and especially AI. While this prospect is promising, challenges such as data quality, privacy, and security cannot be neglected, which argues for a synergistic, human-centered AI approach to reconcile new agricultural technologies with human values, ethics, and legal requirements to ensure privacy, security, and safety.
Future developments will increasingly provide data and information along the entire value chain in real time (a challenge to the capacity of telecommunications, e.g., 6G [222]) and with location accuracy (a challenge to the precision of telemetry [223]). In the context of sensor developments (increasing diversity, sensitivity, robustness, and precision at decreasing prices), a basis will be created that allows agricultural processes to be recorded in a more differentiated, data-based way. By using the experiential knowledge of humans via human–machine–human communication (human-in-the-loop, AI-in-the-loop), technical processes can be adapted much more finely to natural processes, which will contribute to increasing the resource efficiency of production processes (“feed the world”) and to improving the quality of products, both internally (human health) and externally (e.g., life cycle impact, animal welfare, environmental protection, and climate protection). In this context, networking all stakeholders through appropriate technology can increase the transparency of food production, improve documentation, and ultimately improve traceability; these are general goals of human-centered AI, and here we are again speaking of explainability. To achieve these benefits, AI tools that are appropriate, and above all developed in a human-centered way (see Section 4.2), will play a crucial role in linking technologies and using domain knowledge for natural agricultural systems, increasing their global and local diversity, and helping their sustainable management.
Therefore, data-driven AI applications can help to overcome the sectoral fragmentation of IT applications in agriculture and create links between agriculture, food processing, and consumers on the one hand, and the industry supplying agriculture on the other (e.g., predictive maintenance of machinery).
  • Open Challenge G1: Identify a requirements map and technology overview. Create a toolbox of existing technologies for inexperienced farmers with easy-to-use methods and cost-effective applications to create benefits in everyday life according to the concept of human-centered design (see Section 4.2).
  • Open Challenge G2: Make online available data accessible and integrate the structure and computational operations of the above toolset into AI solutions. Network, fuse, integrate, present, and visualize information from different sources and locations, following the information visualization mantra “overview first, zoom and filter, then details on demand” [224], to provide a snapshot across the entire value chain, thereby identifying insights and opportunities for further analysis of key indicators across the entire value chain (“from seed to the consumer’s stomach”).
  • Open Challenge G3: A key challenge is to gradually refine the tools of LCA, LCC, and SLCA.

4.4. Forester-in-the-Loop

Forest education in Austria is based on the general Austrian education system and includes various forestry professions. The theoretical and practical education, imparted through school, university, an apprenticeship, or specialized courses, ensures the competence of the forestry personnel of tomorrow and forms the foundation of Austrian forestry. After a three-year apprenticeship, secondary courses, or an exam following agricultural and forestry colleges, forest workers are optimally trained for motor-manual forest activities in reforestation, cultivation, maintenance, and harvesting. Since 2016, it has been possible to learn the profession of forestry technician, where the main focus is on the handling of equipment and machinery for timber harvesting. This solid education in combination with the theoretically and practically acquired expertise is very well suited to be integrated into forestry AI processes. In this way, the expertise and practical experience of the forester/forestry worker/operator acquired in school can contribute to the training of AI systems, and conversely, the human can be supported and relieved of their decisions and activities.
In steep terrain, especially in Austrian forests, timber harvesting will always be a challenge in terms of economic viability, safety, and environmental performance. The state-of-the-art harvesting method in steep terrain is motor-manual felling in combination with logging via cable yarders. Even though new technologies and innovations are modernizing timber harvesting in steep terrain, there will always be operations where forest workers need to enter the forest due to the difficult topological conditions. In these cases, it is the task of human-centered AI to intelligently integrate workers on the site into already autonomous or semi-autonomous timber harvesting operations. For example, potentially autonomous cable yarders must interact appropriately with workers on the site, and both parties must be considerate of and learn from each other.
All forestry machines, whether harvesters, forwarders, cable yarders, or simply chainsaws, and whether autonomous, automated, remotely controlled, or traditionally controlled by a human in the cab, have certain maintenance and repair needs. Some repairs occur randomly and suddenly, such as a chain breakage on a harvester head or the replacement of a hydraulic hose. These repairs are then usually carried out by the operator on site. In future AI-supported autonomous or automatic processes, humans must also be involved here with their expertise in all aspects of the repair. Real-time anomaly and fault detection (see Section 3.1) is faster, more reliable, and requires fewer visualization capabilities [145]. This will be of great benefit to the user, since it efficiently tackles the problem of information filtering (see Section 3.3) while detecting faults efficiently and requiring the domain expert only for the definition of normal vs. abnormal behavior and for potential repairs on site.
Information about AI in forest operations can also be found on websites and in popular science journals. Science, however, has been dealing with developments in forestry robotics since the 1980s (as in [225]). Over the years, with increasing sensor technology and computing power, a number of forest operations research groups have been experimenting with digitization in forest operations. Visser and Obi (2021) [88] stated that there is often much speculation on future benefits and an almost complete absence of information on the actual productivity improvements of any of the prototypes developed. For forestry practice, it can be stated that there is a lot of R&D in the field, but the developments have not yet actually found their way into the real forest. The next steps to successfully establish new technologies and AI in timber harvesting are to use existing technologies, which are already well equipped with sensors and computing power, to link them with AI, and to integrate the forest worker as an integral controlling, steering, and assisting organ.
  • Open Challenge G1: Due to the small-scale forestry in Austria, many forest owners work in their own forests. Therefore, most of the work is done manually with a chainsaw. The future task of the forester-in-the-loop approach is to develop digital and smart tools in this area as well. For example, individual tree information could be shown to the forest owners via heads-up displays in their helmets or VR glasses. Furthermore, it would also be conceivable to provide assistance with regard to value-optimized bucking in order to increase efficiency and revenue. Conversely, the AI can always learn from the forestry worker’s expertise and concrete actions during the work process.
  • Open Challenge G2: In principle, a good first step into autonomous practice would be to decouple the acquisition of the environment data of a forest machine and its AI-controlled features in terms of time. For example, digital twins of the forest can be created with 3D scanners (often takes place as part of forest inventory anyway); and autonomous, automated, or even augmented processes can be integrated into the forest machine on the basis of these. This saves time-consuming on-the-fly environment mapping and navigation. Robots (e.g., [226]) offer a good opportunity to test such a process. During the entire process, the forester should be specifically involved and make decisions.
  • Open Challenge G3: Based on Open Challenge G2, a direct navigation system for forestry machines should be developed, enabling on-the-fly navigation. This applies not only to simply driving the machine, but also to the autonomous control of attachments and aggregates (e.g., crane and harvester head). In this further development step, it is also necessary to integrate further sensors and to process their data intelligently (GNSS sensors, 3D laser scanners, 2D laser scanners, stereo cameras, hydraulic sensors). Furthermore, the methods and approaches should be transferred from the small robot to a real forest machine. The expertise of the previous operators should be incorporated in order to avoid environmental impacts and to work in a value-optimized manner. This step towards large-scale forestry machines will be taken in close cooperation with machine manufacturers. Furthermore, large-scale practical studies will be carried out in the forest in order to test and continuously develop the systems.

5. Conclusions

Advanced technologies—from sensors to sophisticated augmented reality visualization methods—are nowadays already so inexpensive that they are very widely used. As a result, they are already being used as decision support tools in a wide range of application situations. The hardware is available; what needs to be worked on now is bringing intelligence into the hardware. This is where human-centered AI comes in—namely, not only providing the domain experts with contextual and relevant information, but also involving them directly in the decision-making process. A human expert often has a lot of experiential knowledge, and it can be useful to combine this natural intelligence with artificial intelligence.
In our paper, we first justified why agriculture and forestry are among the most important application areas for all humankind. Then, we facilitated a common understanding by providing definitions of artificial intelligence and human-centered AI, and introduced the three main approaches to machine learning (supervised, unsupervised, reinforcement). This should help the non-expert reader to get started with this important and forward-looking topic. We then presented the current state-of-the-art in autonomous, automated, assisted, and augmented AI systems. The special feature here is that we gave one example from agricultural technology and one example from forestry technology, thereby helping to understand the connection between these two areas.
In this paper, we described three pioneering research areas that, based on our experience, we identified as the most important and promising research areas for the next 7 years: (1) intelligent sensor information fusion, (2) robotics and embodied intelligence, and (3) augmentation, explanation, and verification. Finally, we summarized again the core ideas of human-centered AI and gave two examples, the farmer-in-the-loop and the forester-in-the-loop. We are convinced that the next 7 years will be internationally dominated by these topics, and that with their help, practical progress can be made in the entire process chains of future agriculture and forestry.

Author Contributions

Conceptualization, A.H.; Funding acquisition, A.H.; Investigation, A.H., A.S. and C.G.; Methodology, A.H. and C.G.; Project administration, C.G.; Writing—original draft, A.H.; Writing—review & editing, A.H., A.S., A.A., C.O.R., A.G., V.P., F.M.-J., T.K., C.G. and K.S. All authors have read and agreed to the published version of the manuscript.

Funding

Parts of this work have been funded by the Austrian Science Fund (FWF), Project: P-32554 “explainable Artificial Intelligence”.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest. This work does not raise any ethical issues.

References

  1. Vinuesa, R.; Azizpour, H.; Leite, I.; Balaam, M.; Dignum, V.; Domisch, S.; Felländer, A.; Langhans, S.D.; Tegmark, M.; Fuso Nerini, F. The role of artificial intelligence in achieving the Sustainable Development Goals. Nat. Commun. 2020, 11, 233. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. United Nations. UN Sustainable Development Goals. 2020. Available online: https://sdgs.un.org/goals (accessed on 14 March 2022).
  3. Barbier, E.B.; Burgess, J.C. The Sustainable Development Goals and the systems approach to sustainability. Economics 2017, 11. [Google Scholar] [CrossRef] [Green Version]
  4. Regulation (EU) 2021/1119 of the European Parliament and of the Council of 30 June 2021. 2021. Available online: https://eur-lex.europa.eu/eli/reg/2021/1119/oj (accessed on 4 March 2022).
  5. Nature and Forest Strategy Factsheet. 2021. Available online: https://ec.europa.eu/commission/presscorner/detail/en/fs_21_3670 (accessed on 6 March 2022).
  6. Carbonell, I.M. The ethics of big data in big agriculture. Internet Policy Rev. 2016, 5, 1–13. [Google Scholar] [CrossRef]
  7. McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biol. 1943, 5, 115–133. [Google Scholar] [CrossRef]
  8. Bengio, Y.; Lecun, Y.; Hinton, G. Deep learning for AI. Commun. ACM 2021, 64, 58–65. [Google Scholar] [CrossRef]
  9. Holzinger, A.; Weippl, E.; Tjoa, A.M.; Kieseberg, P. Digital Transformation for Sustainable Development Goals (SDGs)—A Security, Safety and Privacy Perspective on AI. In Springer Lecture Notes in Computer Science, LNCS 12844; Springer: Cham, Switzerland, 2021; pp. 1–20. [Google Scholar] [CrossRef]
  10. Holzinger, A.; Kickmeier-Rust, M.; Müller, H. Kandinsky Patterns as IQ-Test for Machine Learning. In Lecture Notes in Computer Science LNCS 11713; Springer/Nature: Cham, Switzerland, 2019; pp. 1–14. [Google Scholar] [CrossRef]
  11. Russell, S.J.; Norvig, P. Artificial Intelligence: A Modern Approach, 4th ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2020. [Google Scholar]
  12. Holzinger, A.; Dehmer, M.; Emmert-Streib, F.; Cucchiara, R.; Augenstein, I.; Del Ser, J.; Samek, W.; Jurisica, I.; Díaz-Rodríguez, N. Information fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical artificial intelligence. Inf. Fusion 2022, 79, 263–278. [Google Scholar] [CrossRef]
  13. Holzinger, A. The Next Frontier: AI We Can Really Trust. In Proceedings of the ECML PKDD 2021, CCIS 1524; Michael Kamp, E.A., Ed.; Springer: Cham, Switzerland, 2021; pp. 1–14. [Google Scholar] [CrossRef]
  14. Shneiderman, B. Human-Centered AI; Oxford University Press: Oxford, UK, 2022. [Google Scholar]
  15. Holzinger, A.; Plass, M.; Kickmeier-Rust, M.; Holzinger, K.; Crişan, G.C.; Pintea, C.M.; Palade, V. Interactive machine learning: Experimental evidence for the human in the algorithmic loop. Appl. Intell. 2019, 49, 2401–2414. [Google Scholar] [CrossRef] [Green Version]
  16. Dietterich, T.G.; Horvitz, E.J. Rise of concerns about AI: Reflections and directions. Commun. ACM 2015, 58, 38–40. [Google Scholar] [CrossRef]
  17. Holzinger, A.; Malle, B.; Saranti, A.; Pfeifer, B. Towards Multi-Modal Causability with Graph Neural Networks enabling Information Fusion for explainable AI. Inf. Fusion 2021, 71, 28–37. [Google Scholar] [CrossRef]
  18. Laplace, P.S. Mémoire sur les probabilités. Mémoires L’académie R. Des Sci. Paris 1781, 1778, 227–332. [Google Scholar]
  19. Bayes, T. An Essay towards solving a Problem in the Doctrine of Chances (communicated by Richard Price). Philos. Trans. 1763, 53, 370–418. [Google Scholar]
  20. Ghahramani, Z. Probabilistic machine learning and artificial intelligence. Nature 2015, 521, 452–459. [Google Scholar] [CrossRef] [PubMed]
  21. Mandel, T.; Liu, Y.E.; Brunskill, E.; Popovic, Z. Where to add actions in human-in-the-loop reinforcement learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017. [Google Scholar]
  22. Holzinger, A.; Plass, M.; Holzinger, K.; Crisan, G.C.; Pintea, C.M.; Palade, V. A glass-box interactive machine learning approach for solving NP-hard problems with the human-in-the-loop. arXiv 2017, arXiv:1708.01104. [Google Scholar] [CrossRef]
  23. Lage, I.; Ross, A.; Gershman, S.J.; Kim, B.; Doshi-Velez, F. Human-in-the-loop interpretability prior. In Proceedings of the Advances in Neural Information Processing Systems NeurIPS 2018, Montreal, QC, Canada, 3–8 December 2018; pp. 10159–10168. [Google Scholar]
  24. Wasserman, L. All of Statistics: A Concise Course in Statistical Inference; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2004. [Google Scholar]
  25. Hensman, J.; Fusi, N.; Lawrence, N.D. Gaussian processes for big data. arXiv 2013, arXiv:1309.6835. [Google Scholar]
  26. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  27. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  28. Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260. [Google Scholar] [CrossRef]
  29. Holzinger, A. Introduction to Machine Learning and Knowledge Extraction (MAKE). Mach. Learn. Knowl. Extr. 2019, 1, 1–20. [Google Scholar] [CrossRef] [Green Version]
  30. Caruana, R.; Niculescu-Mizil, A. An empirical comparison of supervised learning algorithms. In Proceedings of the 23rd international conference on Machine learning (ICML 2006), Pittsburgh, PA, USA, 25–29 June 2006; Association for Computing Machinery: New York, NY, USA, 2006. [Google Scholar] [CrossRef] [Green Version]
  31. Ahmed, F.; Al-Mamun, H.A.; Bari, A.H.; Hossain, E.; Kwan, P. Classification of crops and weeds from digital images: A support vector machine approach. Crop. Prot. 2012, 40, 98–104. [Google Scholar] [CrossRef]
  32. Shorewala, S.; Ashfaque, A.; Sidharth, R.; Verma, U. Weed density and distribution estimation for precision agriculture using semi-supervised learning. IEEE Access 2021, 9, 27971–27986. [Google Scholar] [CrossRef]
  33. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1–9. [Google Scholar] [CrossRef]
  34. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2006. [Google Scholar]
  35. Murphy, K.P. Probabilistic Machine Learning: An Introduction; MIT Press: Cambridge, MA, USA, 2022. [Google Scholar]
  36. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction; MIT Press: Cambridge, MA, USA, 2018. [Google Scholar]
  37. Graesser, L.; Keng, W.L. Foundations of Deep Reinforcement Learning: Theory and Practice in Python; Addison-Wesley Professional: Boston, MA, USA, 2019. [Google Scholar]
  38. Pumperla, M.; Ferguson, K. Deep Learning and the Game of Go; Manning Publications Company: Shelter Island, NY, USA, 2019; Volume 231. [Google Scholar]
  39. Hernandez-Leal, P.; Kartal, B.; Taylor, M.E. Is multiagent deep reinforcement learning the answer or the question? A brief survey. Learning 2018, 21, 22. [Google Scholar]
  40. Shahriari, B.; Swersky, K.; Wang, Z.; Adams, R.P.; de Freitas, N. Taking the human out of the loop: A review of Bayesian optimization. Proc. IEEE 2016, 104, 148–175. [Google Scholar] [CrossRef] [Green Version]
  41. Sonnenburg, S.; Rätsch, G.; Schaefer, C.; Schölkopf, B. Large scale multiple kernel learning. J. Mach. Learn. Res. 2006, 7, 1531–1565. [Google Scholar]
  42. Mueller, H.; Mayrhofer, M.T.; Veen, E.B.V.; Holzinger, A. The Ten Commandments of Ethical Medical AI. IEEE Comput. 2021, 54, 119–123. [Google Scholar] [CrossRef]
  43. Stoeger, K.; Schneeberger, D.; Holzinger, A. Medical Artificial Intelligence: The European Legal Perspective. Commun. ACM 2021, 64, 34–36. [Google Scholar] [CrossRef]
  44. Wilson, A.G.; Dann, C.; Lucas, C.; Xing, E.P. The Human Kernel. In Advances in Neural Information Processing Systems, NIPS 2015; Cortes, C., Lawrence, N.D., Lee, D.D., Sugiyama, M., Garnett, R., Eds.; NIPS Foundation: La Jolla, CA, USA, 2015. [Google Scholar]
  45. Holzinger, A. Interactive Machine Learning for Health Informatics: When do we need the human-in-the-loop? Brain Inform. 2016, 3, 119–131. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  46. Holzinger, A.; Plass, M.; Holzinger, K.; Crisan, G.C.; Pintea, C.M.; Palade, V. Towards interactive Machine Learning (iML): Applying Ant Colony Algorithms to solve the Traveling Salesman Problem with the Human-in-the-Loop approach. In Springer Lecture Notes in Computer Science LNCS 9817; Springer: Heidelberg/Berlin, Germany; New York, NY, USA, 2016; pp. 81–95. [Google Scholar] [CrossRef]
  47. Levinson, J.; Askeland, J.; Becker, J.; Dolson, J.; Held, D.; Kammel, S.; Kolter, J.Z.; Langer, D.; Pink, O.; Pratt, V. Towards fully autonomous driving: Systems and algorithms. In Proceedings of the Intelligent Vehicles Symposium (IV 2011), Baden-Baden, Germany, 5–9 June 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 163–168. [Google Scholar] [CrossRef]
  48. Floreano, D.; Wood, R.J. Science, technology and the future of small autonomous drones. Nature 2015, 521, 460–466. [Google Scholar] [CrossRef] [Green Version]
  49. Van der Aalst, W.M.; Bichler, M.; Heinzl, A. Robotic process automation. Bus. Inf. Syst. Eng. 2018, 60, 269–272. [Google Scholar] [CrossRef] [Green Version]
  50. Liu, T.; Sun, Y.; Wang, C.; Zhang, Y.; Qiu, Z.; Gong, W.; Lei, S.; Tong, X.; Duan, X. Unmanned aerial vehicle and artificial intelligence revolutionizing efficient and precision sustainable forest management. J. Clean. Prod. 2021, 311, 127546. [Google Scholar] [CrossRef]
  51. Singh, D.; Merdivan, E.; Hanke, S.; Kropf, J.; Geist, M.; Holzinger, A. Convolutional and Recurrent Neural Networks for Activity Recognition in Smart Environment. In Towards Integrative Machine Learning and Knowledge Extraction: BIRS Workshop, Banff, AB, Canada, July 24–26, 2015, Revised Selected Papers; Holzinger, A., Goebel, R., Ferri, M., Palade, V., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 194–205. [Google Scholar] [CrossRef]
  52. Nischelwitzer, A.; Lenz, F.J.; Searle, G.; Holzinger, A. Some Aspects of the Development of Low-Cost Augmented Reality Learning Environments as examples for Future Interfaces in Technology Enhanced Learning. In Universal Access to Applications and Services, Lecture Notes in Computer Science (LNCS 4556); Stephanidis, C., Ed.; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2007; pp. 728–737. [Google Scholar] [CrossRef]
  53. Silva, S.; Duarte, D.; Valente, A.; Soares, S.; Soares, J.; Pinto, F.C. Augmented Intelligent Distributed Sensing System Model for Precision Agriculture. In Proceedings of the 2021 Telecoms Conference (ConfTELE), Leiria, Portugal, 11–12 February 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–4. [Google Scholar] [CrossRef]
  54. Ramos, G.; Meek, C.; Simard, P.; Suh, J.; Ghorashi, S. Interactive machine teaching: A human-centered approach to building machine-learned models. Hum.–Comput. Interact. 2020, 35, 413–451. [Google Scholar] [CrossRef]
  55. Wang, D.; Cao, W.; Zhang, F.; Li, Z.; Xu, S.; Wu, X. A Review of Deep Learning in Multiscale Agricultural Sensing. Remote Sens. 2022, 14, 559. [Google Scholar] [CrossRef]
  56. Zhang, J.; Huang, Y.; Pu, R.; Gonzalez-Moreno, P.; Yuan, L.; Wu, K.; Huang, W. Monitoring plant diseases and pests through remote sensing technology: A review. Comput. Electron. Agric. 2019, 165, 104943. [Google Scholar] [CrossRef]
  57. Sishodia, R.P.; Ray, R.L.; Singh, S.K. Applications of remote sensing in precision agriculture: A review. Remote Sens. 2020, 12, 3136. [Google Scholar] [CrossRef]
  58. Yao, H.; Qin, R.; Chen, X. Unmanned aerial vehicle for remote sensing applications—A review. Remote Sens. 2019, 11, 1443. [Google Scholar] [CrossRef] [Green Version]
  59. Weiss, M.; Jacob, F.; Duveiller, G. Remote sensing for agricultural applications: A meta-review. Remote Sens. Environ. 2020, 236, 111402. [Google Scholar] [CrossRef]
  60. Saiz-Rubio, V.; Rovira-Más, F. From smart farming towards agriculture 5.0: A review on crop data management. Agronomy 2020, 10, 207. [Google Scholar] [CrossRef] [Green Version]
  61. Mendes, J.; Pinho, T.M.; Neves dos Santos, F.; Sousa, J.J.; Peres, E.; Boaventura-Cunha, J.; Cunha, M.; Morais, R. Smartphone applications targeting precision agriculture practices—A systematic review. Agronomy 2020, 10, 855. [Google Scholar] [CrossRef]
  62. Sartori, D.; Brunelli, D. A smart sensor for precision agriculture powered by microbial fuel cells. In Proceedings of the 2016 IEEE Sensors Applications Symposium (SAS), Catania, Italy, 20–22 April 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–6. [Google Scholar] [CrossRef]
  63. Elmeseiry, N.; Alshaer, N.; Ismail, T. A Detailed Survey and Future Directions of Unmanned Aerial Vehicles (UAVs) with Potential Applications. Aerospace 2021, 8, 363. [Google Scholar] [CrossRef]
  64. Kalyani, Y.; Collier, R. A Systematic Survey on the Role of Cloud, Fog, and Edge Computing Combination in Smart Agriculture. Sensors 2021, 21, 5922. [Google Scholar] [CrossRef]
  65. Jarial, S. Internet of Things application in Indian agriculture, challenges and effect on the extension advisory services—A review. J. Agribus. Dev. Emerg. Econ. 2022, ahead-of-print. [CrossRef]
  66. Cockburn, M. Application and prospective discussion of machine learning for the management of dairy farms. Animals 2020, 10, 1690. [Google Scholar] [CrossRef] [PubMed]
  67. Haxhibeqiri, J.; De Poorter, E.; Moerman, I.; Hoebeke, J. A survey of LoRaWAN for IoT: From technology to application. Sensors 2018, 18, 3995. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  68. Waldrop, M.M. Autonomous vehicles: No drivers required. Nat. News 2015, 518, 20–23. [Google Scholar] [CrossRef] [PubMed]
  69. Hopkins, D.; Schwanen, T. Talking about automated vehicles: What do levels of automation do? Technol. Soc. 2021, 64, 101488. [Google Scholar] [CrossRef]
  70. Monaco, T.; Grayson, A.; Sanders, D. Influence of four weed species on the growth, yield, and quality of direct-seeded tomatoes (Lycopersicon esculentum). Weed Sci. 1981, 29, 394–397. [Google Scholar] [CrossRef]
  71. Roberts, H.; Hewson, R.; Ricketts, M.E. Weed competition in drilled summer lettuce. Hortic. Res. 1977, 17, 39–45. [Google Scholar]
  72. Slaughter, D.C.; Giles, D.; Downey, D. Autonomous robotic weed control systems: A review. Comput. Electron. Agric. 2008, 61, 63–78. [Google Scholar] [CrossRef]
  73. Bechar, A.; Vigneault, C. Agricultural robots for field operations. Part 2: Operations and systems. Biosyst. Eng. 2017, 153, 110–128. [Google Scholar] [CrossRef]
  74. Sabatini, R.; Moore, T.; Ramasamy, S. Global navigation satellite systems performance analysis and augmentation strategies in aviation. Prog. Aerosp. Sci. 2017, 95, 45–98. [Google Scholar] [CrossRef]
  75. Lim, Y.; Pongsakornsathien, N.; Gardi, A.; Sabatini, R.; Kistan, T.; Ezer, N.; Bursch, D.J. Adaptive human–robot interactions for multiple unmanned aerial vehicles. Robotics 2021, 10, 12. [Google Scholar] [CrossRef]
  76. Ehsani, M.R.; Sullivan, M.D.; Zimmerman, T.L.; Stombaugh, T. Evaluating the dynamic accuracy of low-cost GPS receivers. In Proceedings of the 2003 ASAE Annual Meeting. American Society of Agricultural and Biological Engineers, Las Vegas, NV, USA, 27–30 July 2003; p. 1. [Google Scholar] [CrossRef]
  77. Åstrand, B.; Baerveldt, A.J. An agricultural mobile robot with vision-based perception for mechanical weed control. Auton. Robot. 2002, 13, 21–35. [Google Scholar] [CrossRef]
  78. Chebrolu, N.; Lottes, P.; Schaefer, A.; Winterhalter, W.; Burgard, W.; Stachniss, C. Agricultural robot dataset for plant classification, localization and mapping on sugar beet fields. Int. J. Robot. Res. 2017, 36, 1045–1052. [Google Scholar] [CrossRef] [Green Version]
  79. Scholz, C.; Moeller, K.; Ruckelshausen, A.; Hinck, S.; Göttinger, M. Automatic soil penetrometer measurements and GIS based documentation with the autonomous field robot platform BoniRob. In Proceedings of the 12th International Conference of Precision Agriculture, Sacramento, CA, USA, 20–23 July 2014. [Google Scholar]
  80. Lamm, R.D.; Slaughter, D.C.; Giles, D.K. Precision weed control system for cotton. Trans. ASAE 2002, 45, 231. [Google Scholar] [CrossRef]
  81. Blasco, J.; Aleixos, N.; Roger, J.; Rabatel, G.; Moltó, E. AE—Automation and emerging technologies: Robotic weed control using machine vision. Biosyst. Eng. 2002, 83, 149–157. [Google Scholar] [CrossRef]
  82. Bawden, O.; Kulk, J.; Russell, R.; McCool, C.; English, A.; Dayoub, F.; Lehnert, C.; Perez, T. Robot for weed species plant-specific management. J. Field Robot. 2017, 34, 1179–1199. [Google Scholar] [CrossRef]
  83. Ringdahl, O. Automation in Forestry: Development of Unmanned Forwarders. Ph.D. Thesis, Institutionen för Datavetenskap, Umeå Universitet, Umeå, Sweden, 2011. [Google Scholar]
  84. Parker, R.; Bayne, K.; Clinton, P.W. Robotics in forestry. N. Z. J. For. 2016, 60, 8–14. [Google Scholar]
  85. Rossmann, J.; Krahwinkler, P.; Schlette, C. Navigation of mobile robots in natural environments: Using sensor fusion in forestry. J. Syst. Cybern. Inform. 2010, 8, 67–71. [Google Scholar]
  86. Gollob, C.; Ritter, T.; Nothdurft, A. Forest inventory with long range and high-speed personal laser scanning (PLS) and simultaneous localization and mapping (SLAM) technology. Remote Sens. 2020, 12, 1509. [Google Scholar] [CrossRef]
  87. Visser, R. Next Generation Timber Harvesting Systems: Opportunities for Remote Controlled and Autonomous Machinery; Project No: PRC437-1718; Forest & Wood Products Australia Limited: Melbourne, Australia, 2018. [Google Scholar]
  88. Visser, R.; Obi, O.F. Automation and robotics in forest harvesting operations: Identifying near-term opportunities. Croat. J. For. Eng. J. Theory Appl. For. Eng. 2021, 42, 13–24. [Google Scholar] [CrossRef]
  89. Wells, L.A.; Chung, W. Evaluation of ground plane detection for estimating breast height in stereo images. For. Sci. 2020, 66, 612–622. [Google Scholar] [CrossRef]
  90. Thomasson, J.A.; Baillie, C.P.; Antille, D.L.; McCarthy, C.L.; Lobsey, C.R. A review of the state of the art in agricultural automation. Part II: On-farm agricultural communications and connectivity. In Proceedings of the 2018 ASABE Annual International Meeting. American Society of Agricultural and Biological Engineers, Detroit, MI, USA, 29 July–1 August 2018; p. 1. [Google Scholar] [CrossRef]
  91. Vázquez-Arellano, M.; Griepentrog, H.W.; Reiser, D.; Paraforos, D.S. 3-D imaging systems for agricultural applications—A review. Sensors 2016, 16, 618. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  92. Schueller, J.K. Engineering advancements. In Automation: The Future of Weed Control in Cropping Systems; Springer: Berlin/Heidelberg, Germany, 2014; pp. 35–49. [Google Scholar] [CrossRef]
  93. Claas Crop Sensor. 2022. Available online: https://www.claas.co.uk/products/easy-2018/precision-farming/crop-sensor-isaria (accessed on 6 March 2022).
  94. Goense, D.; Hofstee, J.; Van Bergeijk, J. An information model to describe systems for spatially variable field operations. Comput. Electron. Agric. 1996, 14, 197–214. [Google Scholar] [CrossRef]
  95. Gollob, C.; Ritter, T.; Wassermann, C.; Nothdurft, A. Influence of scanner position and plot size on the accuracy of tree detection and diameter estimation using terrestrial laser scanning on forest inventory plots. Remote Sens. 2019, 11, 1602. [Google Scholar] [CrossRef] [Green Version]
  96. Bont, L.G.; Maurer, S.; Breschan, J.R. Automated cable road layout and harvesting planning for multiple objectives in steep terrain. Forests 2019, 10, 687. [Google Scholar] [CrossRef] [Green Version]
  97. Heinimann, H.R. Holzerntetechnik zur Sicherstellung einer minimalen Schutzwaldpflege: Bericht im Auftrag des Bundesamtes für Umwelt, Wald und Landschaft (BUWAL). [Timber Harvesting Technology for Ensuring Minimal Protection Forest Tending: Report Commissioned by the Federal Office for the Environment, Forests and Landscape (BUWAL)]. Interner Ber./ETH For. Eng. 2003, 12. [Google Scholar]
  98. Dykstra, D.P.; Riggs, J.L. An application of facilities location theory to the design of forest harvesting areas. AIIE Trans. 1977, 9, 270–277. [Google Scholar] [CrossRef]
  99. Chung, W. Optimization of Cable Logging Layout Using a Heuristic Algorithm for Network Programming; Oregon State University: Corvallis, OR, USA, 2003. [Google Scholar]
  100. Epstein, R.; Weintraub, A.; Sapunar, P.; Nieto, E.; Sessions, J.B.; Sessions, J.; Bustamante, F.; Musante, H. A combinatorial heuristic approach for solving real-size machinery location and road design problems in forestry planning. Oper. Res. 2006, 54, 1017–1027. [Google Scholar] [CrossRef] [Green Version]
  101. Bont, L.; Heinimann, H.R.; Church, R.L. Optimizing cable harvesting layout when using variable-length cable roads in central Europe. Can. J. For. Res. 2014, 44, 949–960. [Google Scholar] [CrossRef]
  102. Pierzchała, M.; Kvaal, K.; Stampfer, K.; Talbot, B. Automatic recognition of work phases in cable yarding supported by sensor fusion. Int. J. For. Eng. 2018, 29, 12–20. [Google Scholar] [CrossRef]
  103. Abdullahi, H.S.; Mahieddine, F.; Sheriff, R.E. Technology impact on agricultural productivity: A review of precision agriculture using unmanned aerial vehicles. In International Conference on Wireless and Satellite Systems; Springer: Berlin/Heidelberg, Germany, 2015; pp. 388–400. [Google Scholar] [CrossRef]
  104. Chen, J.; Zhang, M.; Xu, B.; Sun, J.; Mujumdar, A.S. Artificial intelligence assisted technologies for controlling the drying of fruits and vegetables using physical fields: A review. Trends Food Sci. Technol. 2020, 105, 251–260. [Google Scholar] [CrossRef]
  105. Antille, D.L.; Lobsey, C.R.; McCarthy, C.L.; Thomasson, J.A.; Baillie, C.P. A review of the state of the art in agricultural automation. Part IV: Sensor-based nitrogen management technologies. In Proceedings of the 2018 ASABE Annual International Meeting. American Society of Agricultural and Biological Engineers, Detroit, MI, USA, 29 July–1 August 2018; p. 1. [Google Scholar] [CrossRef]
  106. Hague, T.; Tillett, N. Navigation and control of an autonomous horticultural robot. Mechatronics 1996, 6, 165–180. [Google Scholar] [CrossRef]
  107. Howard, C.N.; Kocher, M.F.; Hoy, R.M.; Blankenship, E.E. Testing the fuel efficiency of tractors with continuously variable and standard geared transmissions. Trans. ASABE 2013, 56, 869–879. [Google Scholar] [CrossRef] [Green Version]
  108. Ovaskainen, H.; Uusitalo, J.; Väätäinen, K. Characteristics and significance of a harvester operators’ working technique in thinnings. Int. J. For. Eng. 2004, 15, 67–77. [Google Scholar] [CrossRef]
  109. Dreger, F.A.; Rinkenauer, G. Cut to Length Harvester Operator Skill: How Human Planning and Motor Control Co-Evolve to Allow Expert Performance. In Proceedings of the Frühjahrskongress 2020, Berlin: Digitaler Wandel, Digitale Arbeit, Digitaler Mensch?, 2020. Available online: https://gfa2020.gesellschaft-fuer-arbeitswissenschaft.de/inhalt/D.1.3.pdf (accessed on 4 March 2022).
  110. Purfürst, F.T. Learning curves of harvester operators. Croat. J. For. Eng. J. Theory Appl. For. Eng. 2010, 31, 89–97. [Google Scholar]
  111. Intelligent Boom Control. 2022. Available online: https://www.deere.co.uk/en/forestry/ibc/ (accessed on 6 March 2022).
  112. Smart Crane. 2022. Available online: https://www.komatsuforest.com/explore/smart-crane-for-forwarders (accessed on 6 March 2022).
  113. Smart Control. 2022. Available online: https://www.palfingerepsilon.com/en/Epsolutions/Smart-Control (accessed on 4 March 2022).
  114. Manner, J.; Mörk, A.; Englund, M. Comparing forwarder boom-control systems based on an automatically recorded follow-up dataset. Silva. Fenn. 2019, 53, 10161. [Google Scholar] [CrossRef] [Green Version]
  115. Englund, M.; Mörk, A.; Andersson, H.; Manner, J. Delautomation av Skotarkran–Utveckling och Utvärdering i Simulator. [Semi-Automated Forwarder Crane–Development and Evaluation in a Simulator]. 2017. Available online: https://www.skogforsk.se/cd_20190114162732/contentassets/e7e1a93a4ebd41c386b85dc3f566e5e8/delautomatiserad-skotarkran-utveckling-och-utvardering-i-simulator-arbetsrapport-932-2017.pdf (accessed on 3 March 2022).
  116. IBC: Operator’s Instructions 1WJ1110G004202-, 1WJ1210G002102-, 1WJ1510G003604-. 2022. Available online: https://www.johndeeretechinfo.com/search?p0=doc_type&p0_v=operators%20manuals&pattr=p0 (accessed on 7 March 2022).
  117. Hurst, W.; Mendoza, F.R.; Tekinerdogan, B. Augmented Reality in Precision Farming: Concepts and Applications. Smart Cities 2021, 4, 1454–1468. [Google Scholar] [CrossRef]
  118. Burdea, G.C.; Coiffet, P. Virtual Reality Technology; John Wiley & Sons: Hoboken, NJ, USA, 2003. [Google Scholar]
  119. Seth, A.; Vance, J.M.; Oliver, J.H. Virtual reality for assembly methods prototyping: A review. Virtual Real. 2011, 15, 5–20. [Google Scholar] [CrossRef] [Green Version]
  120. Schultheis, M.T.; Rizzo, A.A. The application of virtual reality technology in rehabilitation. Rehabil. Psychol. 2001, 46, 296. [Google Scholar] [CrossRef]
  121. Höllerer, T.; Feiner, S. Mobile augmented reality. In Telegeoinformatics: Location-Based Computing and Services; Routledge: London, UK, 2004; Volume 21. [Google Scholar]
  122. Lee, L.H.; Hui, P. Interaction methods for smart glasses: A survey. IEEE Access 2018, 6, 28712–28732. [Google Scholar] [CrossRef]
  123. Ponnusamy, V.; Natarajan, S.; Ramasamy, N.; Clement, C.; Rajalingam, P.; Mitsunori, M. An IoT-enabled augmented reality framework for plant disease detection. Revue d'Intell. Artif. 2021, 35, 185–192. [Google Scholar] [CrossRef]
  124. Caria, M.; Sara, G.; Todde, G.; Polese, M.; Pazzona, A. Exploring Smart Glasses for Augmented Reality: A Valuable and Integrative Tool in Precision Livestock Farming. Animals 2019, 9, 903. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  125. Santana-Fernández, J.; Gómez-Gil, J.; del Pozo-San-Cirilo, L. Design and implementation of a GPS guidance system for agricultural tractors using augmented reality technology. Sensors 2010, 10, 10435–10447. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  126. De Castro Neto, M.; Cardoso, P. Augmented reality greenhouse. In Proceedings of the EFITA-WCCA-CIGR Conference “Sustainable Agriculture through ICT Innovation”, Turin, Italy, 24–27 June 2013; pp. 24–27. [Google Scholar]
  127. Vidal, N.R.; Vidal, R.A. Augmented reality systems for weed economic thresholds applications. Planta Daninha 2010, 28, 449–454. [Google Scholar] [CrossRef] [Green Version]
  128. Okayama, T.; Miyawaki, K. The “Smart Garden” system using augmented reality. IFAC Proc. Vol. 2013, 46, 307–310. [Google Scholar] [CrossRef]
  129. Sitompul, T.A.; Wallmyr, M. Using augmented reality to improve productivity and safety for heavy machinery operators: State of the art. In Proceedings of the 17th International Conference on Virtual-Reality Continuum and Its Applications in Industry, Brisbane, QLD, Australia, 14–16 November 2019; pp. 1–9. [Google Scholar] [CrossRef] [Green Version]
  130. Akyeampong, J.; Udoka, S.; Caruso, G.; Bordegoni, M. Evaluation of hydraulic excavator Human–Machine Interface concepts using NASA TLX. Int. J. Ind. Ergon. 2014, 44, 374–382. [Google Scholar] [CrossRef]
  131. Chen, Y.C.; Chi, H.L.; Kang, S.C.; Hsieh, S.H. A smart crane operations assistance system using augmented reality technology. In Proceedings of the 28th International Symposium on Automation and Robotics in Construction, ISARC 2011, Seoul, Korea, 29 June–2 July 2011; pp. 643–649. [Google Scholar]
  132. Kymäläinen, T.; Suominen, O.; Aromaa, S.; Goriachev, V. Science fiction prototypes illustrating future see-through digital structures in mobile work machines. In EAI International Conference on Technology, Innovation, Entrepreneurship and Education; Springer: Berlin/Heidelberg, Germany, 2017; pp. 179–193. [Google Scholar]
  133. Aromaa, S.; Goriachev, V.; Kymäläinen, T. Virtual prototyping in the design of see-through features in mobile machinery. Virtual Real. 2020, 24, 23–37. [Google Scholar] [CrossRef] [Green Version]
  134. Englund, M.; Lundström, H.; Brunberg, T.; Löfgren, B. Utvärdering av Head-Up Display för Visning av Apteringsinformation i Slutavverkning. [Evaluation of a Head-Up Display for Presenting Bucking Information in Final Felling]; Technical Report; Skogforsk: Uppsala, Sweden, 2015. [Google Scholar]
  135. Fang, Y.; Cho, Y.K. Effectiveness analysis from a cognitive perspective for a real-time safety assistance system for mobile crane lifting operations. J. Constr. Eng. Manag. 2017, 143, 05016025. [Google Scholar] [CrossRef]
  136. HIAB HIVISION. 2022. Available online: https://www.hiab.com/en-us/digital-solutions/hivision (accessed on 6 March 2022).
  137. Virtual Drive. 2022. Available online: https://www.palfingerepsilon.com/en/Epsolutions/Virtual-Drive (accessed on 6 March 2022).
  138. Virtual Training for Ponsse. 2022. Available online: http://www.upknowledge.com/ponsse (accessed on 3 March 2022).
  139. Freund, E.; Krämer, M.; Rossmann, J. Towards realistic forest machine simulators. In Proceedings of the Modeling and Simulation Technologies Conference, Denver, CO, USA, 14–17 August 2000; p. 4095. [Google Scholar] [CrossRef]
  140. Pantazi, X.E.; Moshou, D.; Bochtis, D. Intelligent Data Mining and Fusion Systems in Agriculture; Academic Press: Cambridge, MA, USA, 2019. [Google Scholar]
  141. Boginski, V.L.; Commander, C.W.; Pardalos, P.M.; Ye, Y. Sensors: Theory, Algorithms, and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2011; Volume 61. [Google Scholar]
  142. Al Hage, J.; El Najjar, M.E.; Pomorski, D. Multi-sensor fusion approach with fault detection and exclusion based on the Kullback–Leibler Divergence: Application on collaborative multi-robot system. Inf. Fusion 2017, 37, 61–76. [Google Scholar] [CrossRef]
  143. Ghosh, N.; Paul, R.; Maity, S.; Maity, K.; Saha, S. Fault Matters: Sensor data fusion for detection of faults using Dempster–Shafer theory of evidence in IoT-based applications. Expert Syst. Appl. 2020, 162, 113887. [Google Scholar] [CrossRef]
  144. Saranti, A.; Taraghi, B.; Ebner, M.; Holzinger, A. Property-based testing for parameter learning of probabilistic graphical models. In International Cross-Domain Conference for Machine Learning and Knowledge Extraction; Springer: Berlin/Heidelberg, Germany, 2020; pp. 499–515. [Google Scholar] [CrossRef]
  145. Aggarwal, C.C. An introduction to outlier analysis. In Outlier Analysis; Springer: Berlin/Heidelberg, Germany, 2017; pp. 1–34. [Google Scholar]
  146. Moshou, D.; Bravo, C.; Oberti, R.; West, J.; Bodria, L.; McCartney, A.; Ramon, H. Plant disease detection based on data fusion of hyper-spectral and multi-spectral fluorescence imaging using Kohonen maps. Real-Time Imaging 2005, 11, 75–83. [Google Scholar] [CrossRef]
  147. Lee, W.S.; Alchanatis, V.; Yang, C.; Hirafuji, M.; Moshou, D.; Li, C. Sensing technologies for precision specialty crop production. Comput. Electron. Agric. 2010, 74, 2–33. [Google Scholar] [CrossRef]
  148. Moshou, D.; Kateris, D.; Gravalos, I.; Loutridis, S.; Sawalhi, N.; Gialamas, T.; Xyradakis, P.; Tsiropoulos, Z. Determination of fault topology in mechanical subsystems of agricultural machinery based on feature fusion and neural networks. In Proceedings of the Trends in Agricultural Engineering 2010, Prague, Czech Republic, 7–10 September 2010; p. 101. [Google Scholar]
  149. Géron, A. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems; O’Reilly Media, Inc.: Newton, MA, USA, 2019. [Google Scholar]
  150. Zheng, A.; Casari, A. Feature Engineering for Machine Learning: Principles and Techniques for Data Scientists; O’Reilly Media, Inc.: Newton, MA, USA, 2018. [Google Scholar]
  151. Pantazi, X.E.; Moshou, D.; Mouazen, A.M.; Alexandridis, T.; Kuang, B. Data Fusion of Proximal Soil Sensing and Remote Crop Sensing for the Delineation of Management Zones in Arable Crop Precision Farming. In Proceedings of the HAICTA, Kavala, Greece, 17–20 September 2015; pp. 765–776. [Google Scholar]
  152. Yin, H.; Cao, Y.; Marelli, B.; Zeng, X.; Mason, A.J.; Cao, C. Soil Sensors and Plant Wearables for Smart and Precision Agriculture. Adv. Mater. 2021, 33, 2007764. [Google Scholar] [CrossRef] [PubMed]
  153. Liakos, K.G.; Busato, P.; Moshou, D.; Pearson, S.; Bochtis, D. Machine learning in agriculture: A review. Sensors 2018, 18, 2674. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  154. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef] [Green Version]
  155. MacKay, D.J.C. Information Theory, Inference and Learning Algorithms; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  156. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Graves, A.; Antonoglou, I.; Wierstra, D.; Riedmiller, M. Playing atari with deep reinforcement learning. arXiv 2013, arXiv:1312.5602. [Google Scholar]
  157. Lapan, M. Deep Reinforcement Learning Hands-On: Apply Modern RL Methods, with Deep Q-Networks, Value Iteration, Policy Gradients, TRPO, AlphaGo Zero and More; Packt Publishing Ltd.: Birmingham, UK, 2018. [Google Scholar]
  158. Montavon, G.; Binder, A.; Lapuschkin, S.; Samek, W.; Müller, K.R. Layer-Wise Relevance Propagation: An Overview. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning; Springer: Berlin/Heidelberg, Germany, 2019; pp. 193–209. [Google Scholar]
  159. Wirth, C.; Akrour, R.; Neumann, G.; Fürnkranz, J. A survey of preference-based reinforcement learning methods. J. Mach. Learn. Res. 2017, 18, 1–46. [Google Scholar]
  160. Lee, K.; Smith, L.M.; Abbeel, P. PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training. arXiv 2021, arXiv:2106.05091. [Google Scholar]
  161. Luo, F.; Yang, P.; Li, S.; Ren, X.; Sun, X. CAPT: Contrastive pre-training for learning denoised sequence representations. arXiv 2020, arXiv:2010.06351. [Google Scholar] [CrossRef]
  162. Srinivas, A.; Laskin, M.; Abbeel, P. Curl: Contrastive unsupervised representations for reinforcement learning. arXiv 2020, arXiv:2004.04136. [Google Scholar] [CrossRef]
  163. Liu, H.; Abbeel, P. Unsupervised Active Pre-Training for Reinforcement Learning. Unpublished paper, ICLR 2021 submission, 2020.
  164. Xu, Z.; Wu, K.; Che, Z.; Tang, J.; Ye, J. Knowledge Transfer in Multi-Task Deep Reinforcement Learning for Continuous Control. arXiv 2020, arXiv:2010.07494. [Google Scholar]
  165. Lin, K.; Gong, L.; Li, X.; Sun, T.; Chen, B.; Liu, C.; Zhang, Z.; Pu, J.; Zhang, J. Exploration-efficient Deep Reinforcement Learning with Demonstration Guidance for Robot Control. arXiv 2020, arXiv:2002.12089. [Google Scholar] [CrossRef]
  166. Wang, X.; Lee, K.; Hakhamaneshi, K.; Abbeel, P.; Laskin, M. Skill preferences: Learning to extract and execute robotic skills from human feedback. In Proceedings of the 5th Conference on Robot Learning (CoRL 2021), London, UK, 8 November 2021; PMLR: New York, NY, USA, 2022. [Google Scholar]
  167. Ajay, A.; Kumar, A.; Agrawal, P.; Levine, S.; Nachum, O. Opal: Offline primitive discovery for accelerating offline reinforcement learning. arXiv 2020, arXiv:2010.13611. [Google Scholar] [CrossRef]
  168. Fang, M.; Li, Y.; Cohn, T. Learning how to Active Learn: A Deep Reinforcement Learning Approach. arXiv 2017, arXiv:1708.02383. [Google Scholar]
  169. Rudovic, O.; Zhang, M.; Schuller, B.; Picard, R. Multi-modal active learning from human data: A deep reinforcement learning approach. In Proceedings of the 2019 International Conference on Multimodal Interaction, Suzhou, China, 14–18 October 2019; pp. 6–15. [Google Scholar] [CrossRef] [Green Version]
  170. Kassahun, A.; Bloo, R.; Catal, C.; Mishra, A. Dairy Farm Management Information Systems. Electronics 2022, 11, 239. [Google Scholar] [CrossRef]
  171. Wang, H.; Ren, Y.; Meng, Z. A Farm Management Information System for Semi-Supervised Path Planning and Autonomous Vehicle Control. Sustainability 2021, 13, 7497. [Google Scholar] [CrossRef]
  172. Groeneveld, D.; Tekinerdogan, B.; Garousi, V.; Catal, C. A domain-specific language framework for farm management information systems in precision agriculture. Precis. Agric. 2021, 22, 1067–1106. [Google Scholar] [CrossRef]
  173. Bechar, A.; Vigneault, C. Agricultural robots for field operations: Concepts and components. Biosyst. Eng. 2016, 149, 94–111. [Google Scholar] [CrossRef]
  174. Roshanianfard, A.; Noguchi, N.; Okamoto, H.; Ishii, K. A review of autonomous agricultural vehicles (The experience of Hokkaido University). J. Terramechanics 2020, 91, 155–183. [Google Scholar] [CrossRef]
  175. Bhatt, U.; Xiang, A.; Sharma, S.; Weller, A.; Taly, A.; Jia, Y.; Ghosh, J.; Puri, R.; Moura, J.M.F.; Eckersley, P. Explainable Machine Learning in Deployment. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* ’20, Barcelona, Spain, 27–30 January 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 648–657. [Google Scholar] [CrossRef]
  176. Lapuschkin, S.; Wäldchen, S.; Binder, A.; Montavon, G.; Samek, W.; Müller, K.R. Unmasking Clever Hans predictors and assessing what machines really learn. Nat. Commun. 2019, 10, 1–8. [Google Scholar] [CrossRef] [Green Version]
  177. Martins, N.C.; Marques, B.; Alves, J.; Araújo, T.; Dias, P.; Santos, B.S. Augmented reality situated visualization in decision-making. In Multimedia Tools and Applications; Springer: Berlin/Heidelberg, Germany, 2021; pp. 1–24. [Google Scholar] [CrossRef]
  178. Kim, K.; Billinghurst, M.; Bruder, G.; Duh, H.B.L.; Welch, G.F. Revisiting trends in augmented reality research: A review of the 2nd decade of ISMAR (2008–2017). IEEE Trans. Vis. Comput. Graph. 2018, 24, 2947–2962. [Google Scholar] [CrossRef] [PubMed]
  179. Julier, S.; Lanzagorta, M.; Baillot, Y.; Rosenblum, L.; Feiner, S.; Hollerer, T.; Sestito, S. Information filtering for mobile augmented reality. In Proceedings of the IEEE and ACM International Symposium on Augmented Reality (ISAR 2000), Munich, Germany, 5–6 October 2000; pp. 3–11. [Google Scholar] [CrossRef] [Green Version]
  180. Xi, M.; Adcock, M.; McCulloch, J. Future Agriculture Farm Management using Augmented Reality. In Proceedings of the 2018 IEEE Workshop on Augmented and Virtual Realities for Good (VAR4Good), Reutlingen, Germany, 18 March 2018; pp. 1–3. [Google Scholar] [CrossRef]
  181. Rauschnabel, P.A.; Brem, A.; Ivens, B.S. Who will buy smart glasses? Empirical results of two pre-market-entry studies on the role of personality in individual awareness and intended adoption of Google Glass wearables. Comput. Hum. Behav. 2015, 49, 635–647. [Google Scholar] [CrossRef]
  182. Google. Google Meet on Glass. Available online: https://www.google.com/glass/meet-on-glass/ (accessed on 6 March 2022).
  183. Marques, B.; Santos, B.S.; Araújo, T.; Martins, N.C.; Alves, J.B.; Dias, P. Situated Visualization in The Decision Process Through Augmented Reality. In Proceedings of the 2019 23rd International Conference Information Visualisation (IV), Paris, France, 2–5 July 2019; pp. 13–18. [Google Scholar] [CrossRef]
  184. Ludeña-Choez, J.; Choquehuanca-Zevallos, J.J.; Mayhua-López, E. Sensor nodes fault detection for agricultural wireless sensor networks based on NMF. Comput. Electron. Agric. 2019, 161, 214–224. [Google Scholar] [CrossRef]
  185. Cecchini, M.; Piccioni, F.; Ferri, S.; Coltrinari, G.; Bianchini, L.; Colantoni, A. Preliminary investigation on systems for the preventive diagnosis of faults on agricultural operating machines. Sensors 2021, 21, 1547. [Google Scholar] [CrossRef]
  186. Holzinger, A. Human–Computer Interaction and Knowledge Discovery (HCI-KDD): What is the benefit of bringing those two fields to work together? In Multidisciplinary Research and Practice for Information Systems; Springer Lecture Notes in Computer Science LNCS 8127; Cuzzocrea, A., Kittl, C., Simos, D.E., Weippl, E., Xu, L., Eds.; Springer: Heidelberg/Berlin, Germany; New York, NY, USA, 2013; pp. 319–328. [Google Scholar] [CrossRef] [Green Version]
  187. Holzinger, A.; Kargl, M.; Kipperer, B.; Regitnig, P.; Plass, M.; Müller, H. Personas for Artificial Intelligence (AI) An Open Source Toolbox. IEEE Access 2022, 10, 23732–23747. [Google Scholar] [CrossRef]
  188. Hussain, Z.; Slany, W.; Holzinger, A. Current State of Agile User-Centered Design: A Survey. In HCI and Usability for e-Inclusion, USAB 2009; Lecture Notes in Computer Science, LNCS 5889; Springer: Berlin/Heidelberg, Germany, 2009; pp. 416–427. [Google Scholar] [CrossRef]
  189. Hussain, Z.; Slany, W.; Holzinger, A. Investigating Agile User-Centered Design in Practice: A Grounded Theory Perspective. In HCI and Usability for e-Inclusion; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5889, pp. 279–289. [Google Scholar] [CrossRef]
  190. Holzinger, A.; Malle, B.; Kieseberg, P.; Roth, P.M.; Müller, H.; Reihs, R.; Zatloukal, K. Towards the Augmented Pathologist: Challenges of Explainable-AI in Digital Pathology. arXiv 2017, arXiv:1712.06657. [Google Scholar]
  191. Sorantin, E.; Grasser, M.G.; Hemmelmayr, A.; Tschauner, S.; Hrzic, F.; Weiss, V.; Lacekova, J.; Holzinger, A. The augmented radiologist: Artificial intelligence in the practice of radiology. Pediatr. Radiol. 2021, 51, 1–13. [Google Scholar] [CrossRef]
  192. ISO 14044:2006; Environmental Management—Life Cycle Assessment—Requirements and Guidelines. ISO: Geneva, Switzerland, 2006; pp. 1–46.
  193. Swarr, T.E.; Hunkeler, D.; Klöpffer, W.; Pesonen, H.L.; Ciroth, A.; Brent, A.C.; Pagan, R. Environmental Life-Cycle Costing: A Code of Practice; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar] [CrossRef]
  194. Ciroth, A.; Finkbeiner, M.; Traverso, M.; Hildenbrand, J.; Kloepffer, W.; Mazijn, B.; Prakash, S.; Sonnemann, G.; Valdivia, S.; Ugaya, C.M.L.; et al. Towards a life cycle sustainability assessment: Making informed choices on products. In Report of the UNEP/SETAC Life Cycle Initiative Programme; UNEP: Nairobi, Kenya, 2011. [Google Scholar]
  195. Lehmann, A.; Zschieschang, E.; Traverso, M.; Finkbeiner, M.; Schebek, L. Social aspects for sustainability assessment of technologies—Challenges for social life cycle assessment (SLCA). Int. J. Life Cycle Assess. 2013, 18, 1581–1592. [Google Scholar] [CrossRef]
  196. Zhang, Q. Control of Precision Agriculture Production. In Precision Agriculture Technology for Crop Farming; Washington State University Prosser: Prosser, WA, USA, 2015; pp. 103–132. [Google Scholar]
  197. Taylor, S.L.; Raun, W.R.; Solie, J.B.; Johnson, G.V.; Stone, M.L.; Whitney, R.W. Use of spectral radiance for correcting nitrogen deficiencies and estimating soil test variability in an established bermudagrass pasture. J. Plant Nutr. 1998, 21, 2287–2302. [Google Scholar] [CrossRef]
  198. Carr, P.; Carlson, G.; Jacobsen, J.; Nielsen, G.; Skogley, E. Farming soils, not fields: A strategy for increasing fertilizer profitability. J. Prod. Agric. 1991, 4, 57–61. [Google Scholar] [CrossRef]
  199. Li, A.; Duval, B.D.; Anex, R.; Scharf, P.; Ashtekar, J.M.; Owens, P.R.; Ellis, C. A case study of environmental benefits of sensor-based nitrogen application in corn. J. Environ. Qual. 2016, 45, 675–683. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  200. Ehlert, D.; Schmerler, J.; Voelker, U. Variable rate nitrogen fertilisation of winter wheat based on a crop density sensor. Precis. Agric. 2004, 5, 263–273. [Google Scholar] [CrossRef]
  201. Meyer-Aurich, A.; Gandorfer, M.; Heißenhuber, A. Economic analysis of precision farming technologies at the farm level: Two German case studies. In Agricultural Systems: Economics, Technology, and Diversity; Nova Science Publishers: Hauppauge, NY, USA, 2008; pp. 67–76. [Google Scholar]
  202. Meyer-Aurich, A.; Griffin, T.W.; Herbst, R.; Giebel, A.; Muhammad, N. Spatial econometric analysis of a field-scale site-specific nitrogen fertilizer experiment on wheat (Triticum aestivum L.) yield and quality. Comput. Electron. Agric. 2010, 74, 73–79. [Google Scholar] [CrossRef]
  203. Nishant, R.; Kennedy, M.; Corbett, J. Artificial intelligence for sustainability: Challenges, opportunities, and a research agenda. Int. J. Inf. Manag. 2020, 53, 102104. [Google Scholar] [CrossRef]
  204. Gandorfer, M.; Schleicher, S.; Heuser, S.; Pfeiffer, J.; Demmel, M. Landwirtschaft 4.0–Digitalisierung und ihre Herausforderungen. [Agriculture 4.0: Digitalization and Its Challenges]. Ackerbau-Tech. Lösungen für die Zuk. 2017, 9, 9–19. [Google Scholar]
  205. Ciruela-Lorenzo, A.M.; Del-Aguila-Obra, A.R.; Padilla-Meléndez, A.; Plaza-Angulo, J.J. Digitalization of agri-cooperatives in the smart agriculture context. Proposal of a digital diagnosis tool. Sustainability 2020, 12, 1325. [Google Scholar] [CrossRef] [Green Version]
  206. Mazzetto, F.; Gallo, R.; Riedl, M.; Sacco, P. Proposal of an ontological approach to design and analyse farm information systems to support Precision Agriculture techniques. In IOP Conference Series: Earth and Environmental Science; IOP Publishing: Bristol, UK, 2019; Volume 275, p. 012008. [Google Scholar] [CrossRef]
  207. Jin, X.B.; Yang, N.X.; Wang, X.Y.; Bai, Y.T.; Su, T.L.; Kong, J.L. Hybrid Deep Learning Predictor for Smart Agriculture Sensing Based on Empirical Mode Decomposition and Gated Recurrent Unit Group Model. Sensors 2020, 20, 1334. [Google Scholar] [CrossRef] [Green Version]
  208. Sundmaeker, H.; Verdouw, C.; Wolfert, S.; Freire, L.P.; Vermesan, O.; Friess, P. Internet of food and farm 2020. In Digitising the Industry-Internet of Things Connecting Physical, Digital and Virtual Worlds; River Publishers: Delft, The Netherlands, 2016; Volume 129, p. 4. [Google Scholar]
  209. Wolfert, S.; Ge, L.; Verdouw, C.; Bogaardt, M.J. Big data in smart farming—A review. Agric. Syst. 2017, 153, 69–80. [Google Scholar] [CrossRef]
  210. Steward, B.L.; Gai, J.; Tang, L. The use of agricultural robots in weed management and control. Robot. Autom. Improv. Agric. 2019, 44, 1–25. [Google Scholar] [CrossRef] [Green Version]
  211. Mahtani, A.; Sanchez, L.; Fernández, E.; Martinez, A. Effective Robotics Programming with ROS; Packt Publishing Ltd.: Birmingham, UK, 2016. [Google Scholar]
  212. Li, L.; Zhang, Q.; Huang, D. A review of imaging techniques for plant phenotyping. Sensors 2014, 14, 20078–20111. [Google Scholar] [CrossRef]
  213. Gongal, A.; Amatya, S.; Karkee, M.; Zhang, Q.; Lewis, K. Sensors and systems for fruit detection and localization: A review. Comput. Electron. Agric. 2015, 116, 8–19. [Google Scholar] [CrossRef]
  214. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90. [Google Scholar] [CrossRef] [Green Version]
  215. Milioto, A.; Lottes, P.; Stachniss, C. Real-time semantic segmentation of crop and weed for precision agriculture robots leveraging background knowledge in CNNs. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 2229–2235. [Google Scholar] [CrossRef] [Green Version]
  216. Zheng, Y.Y.; Kong, J.L.; Jin, X.B.; Wang, X.Y.; Su, T.L.; Zuo, M. CropDeep: The crop vision dataset for deep-learning-based classification and detection in precision agriculture. Sensors 2019, 19, 1058. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  217. Dos Santos Ferreira, A.; Freitas, D.M.; da Silva, G.G.; Pistori, H.; Folhes, M.T. Unsupervised deep learning and semi-automatic data labeling in weed discrimination. Comput. Electron. Agric. 2019, 165, 104963. [Google Scholar] [CrossRef]
  218. Ribes, M.; Russias, G.; Tregoat, D.; Fournier, A. Towards Low-Cost Hyperspectral Single-Pixel Imaging for Plant Phenotyping. Sensors 2020, 20, 1132. [Google Scholar] [CrossRef] [Green Version]
  219. Hashem, I.A.T.; Yaqoob, I.; Anuar, N.B.; Mokhtar, S.; Gani, A.; Khan, S.U. The rise of “big data” on cloud computing: Review and open research issues. Inf. Syst. 2015, 47, 98–115. [Google Scholar] [CrossRef]
  220. De Mauro, A.; Greco, M.; Grimaldi, M. A formal definition of Big Data based on its essential features. Libr. Rev. 2016, 65, 122–135. [Google Scholar] [CrossRef]
  221. Kshetri, N. The emerging role of Big Data in key development issues: Opportunities, challenges, and concerns. Big Data Soc. 2014, 1, 2053951714564227. [Google Scholar] [CrossRef] [Green Version]
  222. Tomkos, I.; Klonidis, D.; Pikasis, E.; Theodoridis, S. Toward the 6G network era: Opportunities and challenges. IT Prof. 2020, 22, 34–38. [Google Scholar] [CrossRef]
  223. Medici, M.; Mattetti, M.; Canavari, M. A conceptual framework for telemetry data use in agriculture. In Precision Agriculture ’21; Stafford, J.V., Ed.; Wageningen Academic Publishers: Wageningen, The Netherlands, 2021; pp. 935–940. [Google Scholar] [CrossRef]
  224. Shneiderman, B. The eyes have it: A task by data type taxonomy for information visualizations. In The Craft of Information Visualization; Elsevier: Amsterdam, The Netherlands, 2003; pp. 364–371. [Google Scholar]
  225. Courteau, J. Robotics in Forest Harvesting Machines; Paper; American Society of Agricultural Engineers: St. Joseph, MI, USA, 1989. [Google Scholar]
  226. Spot. 2022. Available online: https://www.bostondynamics.com/products/spot (accessed on 6 March 2022).
Figure 1. The United Nations' seventeen Sustainable Development Goals (SDGs) [2].
Figure 2. Field robot BoniRob [78].
Figure 3. Tree-to-tree robot (image taken by SCION NZ Forest Research Institute, used with permission from Richard Parker).
Figure 4. Principles of the IBC system [116].
Figure 5. Types of AR deployment within crop and livestock management. Please refer to the excellent overview by Hurst et al. (2021) [117].
Figure 6. Analysis of technologies coupled with AR in farming. Dark blue refers to crop-based articles, whereas light green refers to livestock-based articles (adapted). For details, please refer to the original paper by Hurst et al. (2021) [117].
Figure 7. AR-based positioning assist system: (a) tractor-mounted, (b) manually mounted [125].
Figure 8. Hiab HiVision VR system [136].
Figure 9. 3 × 3: The three frontier research areas with agile human-centered design in three generations. G1: testing existing technology, G2: adapting existing technology, G3: advanced adaptation going beyond state-of-the-art.
Figure 10. Skill Preferences (SkiP) [166] consists of two phases. In the skill extraction phase, skills are extracted from a noisy offline dataset, with human feedback used to denoise the behavioral priors. In the skill execution phase, the extracted skills are fine-tuned with RL in the environment, using task-specific human feedback to solve various downstream tasks.
Figure 11. A feature model for precision agriculture farm management [172].
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
