Viewpoint Generation using Feature-Based Constrained Spaces for Robot Vision Systems

The efficient computation of viewpoints under consideration of various system and process constraints is a common challenge that any robot vision system is confronted with when trying to execute a vision task. Although fundamental research has provided solid and sound solutions for tackling this problem, a holistic framework that poses its formal description, considers the heterogeneity of robot vision systems, and offers an integrated solution remains unaddressed. Hence, this publication outlines the generation of viewpoints as a geometrical problem and introduces a generalized theoretical framework based on Feature-Based Constrained Spaces (C-spaces) as the backbone for solving it. A C-space can be understood as the topological space that a viewpoint constraint spans, where the sensor can be positioned for acquiring a feature while fulfilling the regarded constraint. The present study demonstrates that many viewpoint constraints can be efficiently formulated as C-spaces, providing geometric, deterministic, and closed solutions. The introduced C-spaces are characterized based on generic domain and viewpoint constraint models to ease the transferability of the present framework to different applications and robot vision systems. The effectiveness and efficiency of the introduced concepts are verified in a simulation-based scenario and validated on a real robot vision system comprising two different sensors.


I. INTRODUCTION
The increasing performance of 2D and 3D image processing algorithms and the falling prices of electronic components (processors and optical sensors) over the last two decades have motivated not only researchers but also industry to investigate and automate different machine vision tasks using robot vision systems (RVSs) consisting of a manipulator and a 2D or 3D sensor [1, 2]. Whether programmed offline or online, RVSs demand multiple planning modules to execute motion and vision tasks efficiently and robustly. For instance, the efficient and effective planning of valid viewpoints to fulfill a vision task considering different constraints, known as the view(point) planning problem (VPP), still represents an open planning problem within diverse applications [2, 3].

The authors are with the Institute for Machine Tools and Industrial Management of the Technical University of Munich, Munich, Germany.

A. Viewpoint Generation Problem solved using C-spaces
To tackle the VPP, we first re-examine its formulation and propose its modularization. Then, this study focuses on the most fundamental sub-problem of the VPP, i.e., the Viewpoint Generation Problem (VGP). The VGP addresses the calculation of valid viewpoints to acquire a single feature considering the fulfillment of different viewpoint constraints.
With this in mind, this paper outlines the VGP as a purely geometrical problem that can be solved in the special Euclidean group SE(3) (6D spatial space) based on the concept of Feature-Based Constrained Spaces (C-spaces). A C-space C_i represents the spatial solution space, up to 6D, of a viewpoint constraint c_i ∈ C and comprises all valid sensor poses p_s to acquire a feature f. In other words, it can be assumed that any sensor pose lying within the i-th C-space fulfills the corresponding i-th viewpoint constraint. Hence, this solution space can be interpreted as an analytical [5], geometrical solution with an infinite set of valid viewpoints that satisfy the regarded viewpoint constraint. Moreover, the integration of multiple C-spaces spans the joint C-space, denoted as C̄, where all viewpoint constraints are simultaneously fulfilled. Figure 1 depicts a simplified representation of the VGP and C-spaces and an overview of the most relevant components.
In this context, the most significant challenge behind the conceptualization of C-spaces lies in the generic and geometric formulation and characterization of diverse viewpoint constraints. This publication introduces nine C-spaces corresponding to different viewpoint constraints (i.e., sensor imaging parameters, feature geometry, kinematic errors, sensor accuracy, occlusion, multisensors, multi-features, and robot workspace) aligned to a consistent modeling framework to ensure their consistent integration.
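As a toy illustration of how a joint solution space arises from individual constraint spaces, the following sketch intersects axis-aligned boxes in R^3. This is only a didactic simplification with hypothetical values: the actual C-spaces are general manifolds that are combined via CSG Boolean operations on surface models, not boxes.

```python
def intersect_boxes(boxes):
    """Intersect axis-aligned boxes given as (min_xyz, max_xyz) tuples.

    Returns the joint box, or None if the intersection is empty
    (i.e., the constraints cannot be fulfilled simultaneously)."""
    lo = [max(b[0][k] for b in boxes) for k in range(3)]
    hi = [min(b[1][k] for b in boxes) for k in range(3)]
    if any(lo[k] >= hi[k] for k in range(3)):
        return None
    return (tuple(lo), tuple(hi))

# Two hypothetical constraint spaces, e.g., a frustum-based and an
# occlusion-free region, crudely approximated by bounding boxes:
c_frustum = ((0.0, 0.0, 0.2), (0.4, 0.4, 0.8))
c_occlusion = ((0.1, -0.1, 0.3), (0.5, 0.3, 1.0))
joint = intersect_boxes([c_frustum, c_occlusion])
print(joint)  # ((0.1, 0.0, 0.3), (0.4, 0.3, 0.8))
```

Any sensor position inside `joint` satisfies both toy constraints, mirroring how a pose inside the joint C-space satisfies all viewpoint constraints simultaneously.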

B. Outline
After providing an overview of the related work that has addressed the VPP and VGP in Section II, our work comprises four sections that describe the core concepts for formulating and characterizing C-spaces of different viewpoint constraints.
First, Section III presents the domain models of a generic RVS used throughout the present study. Section IV introduces the fundamental formulations of the VGP and C-spaces. Exploiting these formulations, the fundamental C-space based on the sensor imaging parameters and the feature position is characterized in Section VI. Using this core C-space, the geometrical formulations of the remaining viewpoint constraints and a strategy for their integration are introduced in Section VII.
Finally, in Section VIII we assess the validity of our formulations and characterization of C-spaces and demonstrate their potential and generalization using a real RVS.

C. Contributions
Our publication presents the fundamental concepts of a generic framework comprising innovative and efficient formulations to compute valid viewpoints based on C-spaces to solve the fundamental sub-problem of the VPP, i.e., the VGP. The key contributions of this paper are summarized as follows:
• A mathematical, model-based, and modular framework to formulate the VGP based on C-spaces and generic domain models.
• The formulation of nine viewpoint constraints using linear algebra, trigonometry, geometric analysis, and Constructive Solid Geometry (CSG) Boolean operations, in particular:
- an efficient and simple characterization of the C-space based on the sensor frustum, feature position, and feature geometry;
- a generic characterization of C-spaces that considers the bistatic nature of range sensors and is extendable to multisensor systems.
• Exhaustive supporting material (surface models, manifolds of computed C-spaces, rendering results) to encourage benchmarking and further development (see supplementary material).
Additionally, we consider the following principal advantages associated with the formulation of C-spaces:
• Determinism, efficiency, and simplicity: C-spaces can be efficiently characterized using geometrical analysis, linear algebra, and CSG Boolean techniques.
• Generalization, transferability, and modularity: C-spaces can be seamlessly used and adapted for different vision tasks and RVSs, including different imaging sensors (e.g., stereo, active light sensors) or even multiple range sensor systems.
• Robustness against model uncertainties: Known model uncertainties (e.g., kinematic model, sensor, or robot inaccuracies) can be explicitly modeled and integrated while characterizing C-spaces. If unknown model uncertainties affect a chosen viewpoint, alternative solutions guaranteeing constraint satisfiability can be found seamlessly within C-spaces.
In combination with a suitable strategy, C-spaces can be straightforwardly integrated into a holistic approach for entirely solving the VPP. The use of C-spaces within an adequate strategy represents the second sub-problem of the VPP, which falls outside the scope of this paper and will be handled in a future publication.

II. RELATED WORK
Our study treats the VGP as a sub-problem of the VPP. Since most authors do not explicitly consider such a problem separation, this section provides an overview of related research that addresses the VPP in general. In a broader sense, the VPP can even be categorized as a sub-problem of a more popular challenge within robotics, i.e., the coverage path planning problem [6].
Over the last three decades, the VPP has been investigated within a wide range of vision tasks that integrate an imaging device, but not necessarily a robot, and require the computation of generalized viewpoints. For a vast overview of the overall progress, challenges, and applications of the VPP, we refer to the various surveys that have been published [2, 4, 7–10].
The approaches for viewpoint planning can be classified depending on the knowledge about the RVS required a priori to compute a valid viewpoint. Thus, a rough distinction can be made between model-based and non-model-based approaches [11].

A. Model-Based
Most model-based viewpoint planning methods can be roughly differentiated between synthesis and sampling-based (related terms: generate and test) modeling approaches [5]. While synthesis approaches use analytical relationships to first characterize a continuous or discrete solution space before searching for an optimal viewpoint, sampling techniques are more optimization-oriented and compute valid viewpoints using a set of objective functions.
Since the present study seeks to characterize a solution space, i.e., a C-space, for each individual viewpoint constraint using analytical and geometrical relationships, the mathematical foundation of our framework can be classified as a model-based method following a synthesis approach. Hence, this section focuses mainly on the related literature following a similar approach.
1) Synthesis: Many of the reviewed publications considering model-based approaches have built their theoretical framework on set theory to formulate either a continuous or discrete search space in a first step. Then, in a second step, optimization algorithms are used to find valid viewpoints within these search spaces and assess the satisfiability of the remaining constraints that were not explicitly considered.
The concept of characterizing such topological search spaces, addressed in our work as C-spaces (related terms: viewpoint space, visibility map, visibility matrix, visibility volumes, imaging space, scannability frustum, configuration space, visual hull, search space), has been proposed since the first studies addressing the VPP. Such a formulation has the advantage of providing a straightforward comprehension and spatial interpretation of the general problem.
One of the first seminal studies that considered the characterization of a continuous solution space in R^3 can be attributed to the publication of Cowan and Kovesi [12]. In their work, they introduced a model-based method for 2D sensors, which synthesized analytical relationships to characterize a handful of constraints geometrically: resolution, focus, field of view, visibility, view angle, and occluding regions, and in later works [13] even constraints for the placement of a lighting source.
Based on the analytical findings provided by the previous works of Cowan and Bergman [13], Tarabanis et al. [7] introduced a model-based sensor planning system called the Machine Vision Planner (MVP). On the one hand, the MVP can be seen as a synthesis approach that characterizes a feature-based occlusion-free region using surface model decomposition [14, 15]. On the other hand, the authors posed the problem in the context of an optimization setting, using objective functions to find valid viewpoints within the occlusion-free space that meet imaging constraints.
The MVP was extended by Abrams et al. [16] for use with an industrial robot and moving objects. Their study addressed the drawbacks (non-linearity and convergence guarantees) of the optimization algorithms and opted to characterize 3D search spaces for the sensor's resolution, field of view, and the workspace of the robot. Although the authors could not synthesize every constraint in Euclidean space, they confirmed the benefits of solving the problem in R^3 instead of optimizing equations to find suitable viewpoints. Furthermore, in a series of publications, Reed et al. [17, 18] extended some of the models introduced in the MVP and addressed the characterization of a search space in R^3 for range sensors, which integrates imaging, occlusion, and workspace constraints. Their study also proposed the synthesis of an imaging space based on extrusion techniques applied to the surface models in combination with the imaging parameters of the sensor.
Another line of research within the context of model-based approaches follows the works of Tarbox and Gottschlich, who proposed the synthesis of a discretized search space using visibility matrices to map the visibility between the solution space and the surface space of the object. In combination with an efficient volumetric representation of the object of interest using octrees, Tarbox and Gottschlich [19, 20] presented different algorithms based on the concept of visibility matrices to perform automated inspection tasks. The visibility matrices consider a discretized view space with all viewpoints lying on a tessellated sphere with a fixed camera distance. Analogously, under the consideration of further constraints, Scott [11, 21] introduced the measurability matrix, extending the visibility matrix of Tarbox and Gottschlich to three dimensions. Within his work, Scott considered further sensor parameters, e.g., the shadow effect, measurement precision, and the incidence angle, which many others have neglected. More recent works [3, 22–24] confirmed the benefits of such an approach and used the concept of visibility matrices for encoding information between a surface point and a set of valid viewpoints.
In the context of space discretization and feature-driven approaches, further publications [20, 25, 26] suggested characterizing the positioning space of the sensor using tessellated spheres to reduce the 6D sensor positioning problem to a 2D orientation optimization problem. Similarly, Stößel et al. [27] and Ellenrieder et al. [28] introduced the concept of visibility maps to encode feature visibility mapped onto a visibility sphere. Years later, Raffaeli et al. [29] and Koutecký et al. [30] considered variations of this approach for their viewpoint planning systems. The major shortcoming of techniques considering such a problem reduction is that most of them require a fixed working distance, which limits their applicability to other sensors and reduces their efficiency for the computation of multi-feature acquisition.
In the context of laser scanners, other relevant works, for example, [31], [32], and [33], also considered solutions that first synthesize a search space before searching for feasible solutions. The publication of Park et al. [34] also needs to be mentioned since, besides Scott [21], it is one of the few that considered viewpoint visibility for multisensor systems using a lookup table.
2) Sampling-Based: Many other works [35–39] do not rely on the explicit characterization of a search space and assess the satisfiability of each viewpoint constraint individually by sampling the search space using metaheuristic optimization algorithms, e.g., simulated annealing or evolutionary algorithms.
Such approaches concentrate on the adequate and efficient formulation of objective functions to satisfy the viewpoint constraints and find reasonable solutions.

B. Non-Model Based
In contrast, non-model-based approaches require no a priori knowledge; the object can be utterly unknown to the planning system. In this case, online exploratory techniques based on the captured data are used to compute the next best viewpoint [26, 40–43]. Most of these works focus on reconstruction tasks and address the problem as the next-best-view planning problem. Since our work is considered a feature-driven approach requiring a priori knowledge of the system, this line of research will not be discussed further.

C. Comparison and Need for Action
Although many works over the last three decades have presented well-grounded solutions to tackle the VPP for individual applications, a generic approach for solving the VPP has not yet been established for commercial or industrial applications, nor in research. Hence, we are convinced that a well-founded framework, comprising a consistent formulation of viewpoint constraints combined with a model-based synthesis approach considering a continuous solution space, has the greatest potential for finding viewpoints that efficiently satisfy different viewpoint constraints.
Synthesis vs. Sampling: Within the related works, we have found recent publications following both explicit synthesis and sampling techniques of solution spaces for the same applications. Hence, a clear trend towards either of these model-based approaches could not be identified. On the one hand, sampling methods can be especially advantageous and computationally efficient in simple scenarios considering few constraints. On the other hand, model uncertainties and nonlinear constraints are more difficult to model using objective functions, and in multi-feature scenarios the computational efficiency can be severely affected. Therefore, we think this problem can be solved more efficiently based on C-spaces composed of explicit models of all regarded viewpoint constraints within applications comprising robot systems and partially known environments with modeling uncertainties.
Continuous vs. Discrete Space: Most of the latest research has followed a synthesis approach based on visibility matrices or visibility maps to encode the surface space and viewpoint space together for a handful of applications and systems [3, 22–24, 30]. Although these works have demonstrated the use of discrete spaces to be practical and efficient, from our point of view, their major weakness lies in the intrinsically limited storage capacity and processing times associated with matrices. This limitation directly affects the synthesis of the solution space, which considers just a fixed distance between sensor and object. Moreover, in the context of RVSs, limiting the robot's working space seems to be in conflict with the inherent and most appreciated positioning flexibility of robots. Due to these drawbacks, and taking into account that the fields of view and working distances of sensors have been and will continue to increase, we consider that the discretization of the solution space could become inefficient for certain applications at some point.
Problem Formulation: Many of the reviewed works considering synthesized model-based approaches posed the VGP formulation on the fundamentals of set theory. However, our research suggests that a consistent mathematical framework, which promotes a generic formulation and integration of viewpoint constraints, has not yet been established. Hence, we consider that exhaustive domain modeling and a consistent theoretical mathematical framework are key elements to provide a solid base for a holistic and generic formulation of the VGP.

III. DOMAIN MODELS OF A ROBOT VISION SYSTEM
This section outlines the generic domain models and minimal necessary parameters of an RVS, including assumptions and limitations, required to characterize the individual C-spaces in Sections VI and VII.
We consider an RVS a complex mechatronic system that comprises the following domains: a range sensor (s) that is positioned by a robot (r) to capture a feature (f) of an object of interest (o) enclosed within an environment (e). Figure 2 provides an overview of the RVS domains and some parameters described within this section.
A. General Notes
1) General Requirements: The present paper follows a systematic and exhaustive formulation of the VGP, the domains of an RVS, and the viewpoint constraints to characterize C-spaces in a generic, simple, and scalable way. To achieve this, and similar to previous studies [11, 33, 36], the following general requirements (GR) are considered throughout our framework: generalization, computational efficiency, determinism, modularity and scalability, and limited a priori knowledge. The given order does not imply any prioritization of the requirements. A more detailed description of the requirements can be found in Table IV.
2) Terminology: Based on our literature research, we have found that a common terminology has not been established yet. The employed terms and concepts depend on the related applications and hardware. To better relate our terminology to the related work, and in an attempt towards standardization, synonyms or related concepts are provided whenever possible. Please note that in some cases, the generality of some terms is prioritized over their precision. This may lead to some terms not corresponding entirely to our definition; therefore, we urge the reader to study these differences before treating them as exact synonyms.
3) Notation: Our publication considers many variables to describe the RVS domains comprehensively. To ease the identification and readability of variables, parameters, vectors, frames, and transformations, we use the index notation given in Table V. Moreover, all topological spaces are given in calligraphic fonts, e.g., V, P, I, C, while vectors, matrices, and rigid transformations are bold.
It is assumed that all introduced transformations are roughly known and given in the world (w) coordinate system or any other reference system. Moreover, we also consider a summed alignment error ϵ_e in the kinematic chain to quantify the sensor's positioning inaccuracy relative to a feature.
2) Surface Model: A set of 3D surface models κ ∈ K characterizes the volumetric occupancy of all rigid bodies in the environment. The surface models are not always explicitly mentioned within the domains. Nevertheless, we assume that the surface model of any rigid body is required if it can collide with the robot or sensor or impede the sensor's sight of a feature.
For simplicity, we mostly use the Euler angle representation throughout this paper.

C. Object
This domain considers an object (o) (related terms: object of interest, workpiece, artifact, measurement object, inspection object, or test object) that contains the features to be acquired.
1) Kinematics: The origin coordinate system of o is located at frame B_o. The transformation to the reference coordinate system is given in the world coordinate system B_w by the rigid transformation ^w_o T.
2) Surface Model: Since our approach does not focus on the object but rather on its features, the object may have an arbitrary topology.

D. Feature
A feature (f) (related terms: region, point or area of interest, inspection feature, key point, entity, artifact) can be fully specified considering its kinematic and geometric parameters, i.e., the frame B_f and the set of surface points G_f(L_f), which depend on a set of geometric dimensions L_f.
1) Kinematics: We assume that the translation ^o t_f and orientation ^o r_f of the feature's origin are given in the object's coordinate system B_o. Thus, the feature's frame is given as follows:

^w_f T = ^w_o T · ^o_f T(^o t_f, ^o r_f),

where ^o_f T is the rigid transformation built from ^o t_f and ^o r_f.
2) Geometry: While a feature can be sufficiently described by its position and normal vector, a broader formulation is required within many applications. For example, dimensional metrology tasks deal with a more comprehensive catalog of geometries, e.g., edges, pockets, holes, slots, and spheres.
Thus, the present study explicitly considers the geometrical topology of a feature and a more extensive model of it [15, 28]. Let the feature topology be described by a set of geometric parameters, denoted by L_f, such as the radius of a hole or a sphere or the side lengths of a square.
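The composition of the feature frame from the object pose and the feature's translation and Euler orientation can be sketched as follows. This is a minimal illustration with made-up numbers, restricted to a z-rotation for brevity; a full implementation would support all three Euler angles.

```python
import math

def rot_z(a):
    """3x3 rotation matrix about the z-axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def homogeneous(R, t):
    """Build a 4x4 rigid transformation from rotation R and translation t."""
    return [R[i] + [t[i]] for i in range(3)] + [[0.0, 0.0, 0.0, 1.0]]

def matmul4(A, B):
    """Multiply two 4x4 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Hypothetical numbers: object frame at (1, 0, 0) with no rotation,
# feature offset (0.2, 0, 0) rotated 90 deg about z in the object frame.
T_w_o = homogeneous(rot_z(0.0), [1.0, 0.0, 0.0])
T_o_f = homogeneous(rot_z(math.pi / 2), [0.2, 0.0, 0.0])
T_w_f = matmul4(T_w_o, T_o_f)  # feature frame expressed in the world frame
feature_position = [T_w_f[i][3] for i in range(3)]
print(feature_position)  # [1.2, 0.0, 0.0]
```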
3) Generalization and Simplification of the Feature Geometry: Moreover, we consider a discretized geometry model of a feature comprising a finite set of surface points g_f ∈ G_f with g_f ∈ R^3. Since our work primarily focuses on 2D features, it is assumed that all surface points lie on the same plane, which is orthogonal to the feature's normal vector ^o n_f and collinear to the z-axis of the feature's frame B_f.
Towards providing a more generic feature model, the topology of all features is approximated using a square feature with a single side length l_f ∈ L_f and five surface points g_{f,c}, c ∈ {0, 1, 2, 3, 4}, at the center and at the four corners of the square. Figure 3 visualizes this simplification to generalize diverse feature geometries.
Fig. 3: A square feature with side length l_f comprising five surface points g_{f,c} is used to generalize any feature topology, e.g., a circle, a slot, or a star (complex geometry).
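The five surface points of the square-feature simplification can be generated with a short sketch like the one below, expressed in the feature frame B_f (z = 0 plane). The corner ordering is an assumption made here for illustration; the paper fixes only the indices c = 0,...,4.

```python
def square_feature_points(l_f):
    """Five surface points g_{f,c} of a square feature with side length l_f,
    in the feature frame B_f: center first, then the four corners."""
    h = l_f / 2.0
    return [(0.0, 0.0, 0.0),                      # g_{f,0}: center
            (-h, -h, 0.0), (h, -h, 0.0),          # g_{f,1}, g_{f,2}
            (h, h, 0.0), (-h, h, 0.0)]            # g_{f,3}, g_{f,4}

pts = square_feature_points(0.02)  # e.g., a 20 mm feature
print(pts)
```

A circular or slot feature of comparable extent would be covered by choosing l_f so that the square bounds the actual geometry.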

E. Sensor
We consider a sensor (s) (related terms: range camera sensor, 3D sensor, imaging system) to be a self-contained acquisition device comprising at least two imaging devices {s_1, s_2} ∈ S (e.g., two cameras or a camera and a lighting source) capable of computing a range image containing depth information. Such sensors can be classified by the principles used to acquire this type of depth information, e.g., triangulation, intensity, or time of flight [45]. The present work does not explicitly distinguish between these acquisition principles. Moreover, this subsection outlines a generic and minimal sensor model that is in line with our framework. Note that even though the present report focuses primarily on range sensors, the models can also be considered for single imaging devices.
1) Kinematics: The sensor's kinematic model considers the following relevant frames: B^{TCP}_s, B^{s1}_s, and B^{s2}_s. Taking into account the established notation for end effectors within the robotics field, we consider that the frame B^{TCP}_s lies at the sensor's TCP. We assume that the frame of the TCP is located at the geometric center of the frustum space and that the rigid transformation ^{ref}_{TCP}T_s to a reference frame, such as the sensor's mounting point, is known.
Additionally, we consider that frame B^{s1}_s lies at the reference frame of the first imaging device, which corresponds to the imaging parameters I_s. We assume that the rigid transformation ^{ref}_{s1}T_s between the sensor lens and a known reference frame is also known. ^{ref}_{s2}T_s provides the transformation of the second imaging device at the frame B^{s2}_s. The second imaging device s_2 might be a second camera in the case of a stereo sensor, or the light source origin in an active sensor system.
2) Frustum Space: The frustum space, or I-space (related terms: visibility frustum, measurement volume, field-of-view space, sensor workspace), is described by a set of different sensor imaging parameters I_s, such as the depth of field d_s and the horizontal and vertical field of view (FOV) angles θ^x_s and ψ^y_s. Alternatively, some sensor manufacturers may provide the dimensions and locations of the near h^{near}_s, middle h^{middle}_s, and far h^{far}_s viewing planes of the sensor. The sensor parameters I_s alone describe just the topology of the I-space. To fully characterize the topological space in the special Euclidean group, the sensor pose p_s must be considered. The I-space can be straightforwardly calculated based on the kinematic relationships of the sensor and the imaging parameters. The resulting 3D manifold I_s is described by its vertices V^{I_s}_k with k = 1, ..., l and the corresponding edges and faces. We assume that the origin of the frustum space is located at the TCP frame, i.e., B^{I_s}_s = B^{TCP}_s. The resulting shape of the I-space usually has the form of a square frustum. Figure 4 visualizes the frustum shape and the geometrical relationships of the I-space.
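A minimal sketch of how the eight frustum vertices could be derived from the imaging parameters is given below. It assumes a symmetric pyramidal frustum whose apex lies at the sensor origin, an optical axis along +z, and a near-plane distance z_near; this parameterization is an assumption for illustration, since manufacturers may instead specify the viewing planes directly.

```python
import math

def frustum_vertices(z_near, d_s, theta_x, psi_y):
    """Eight vertices of the I-space in the sensor frame.

    z_near: distance to the near plane, d_s: depth of field,
    theta_x / psi_y: horizontal / vertical FOV angles (radians)."""
    verts = []
    for z in (z_near, z_near + d_s):              # near plane, then far plane
        x = z * math.tan(theta_x / 2.0)           # half-width at depth z
        y = z * math.tan(psi_y / 2.0)             # half-height at depth z
        verts += [(-x, -y, z), (x, -y, z), (x, y, z), (-x, y, z)]
    return verts

# Hypothetical imaging parameters: near plane at 0.3 m, 0.4 m depth of
# field, 40 deg x 30 deg field of view.
V = frustum_vertices(0.3, 0.4, math.radians(40.0), math.radians(30.0))
```

The first four vertices span the near plane and the last four the far plane; the far plane is wider, yielding the typical square-frustum shape.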
3) Range Image: A range image (related terms: 3D measurement, 3D image, depth image, depth map, point cloud) refers to the generated output of the sensor after triggering a measurement action. A range image is described as a collection of 3D points denoted by g_s ∈ R^3, where each point corresponds to a surface point of the measured object.
4) Measurement Accuracy: The measurement accuracy depends on various sensor parameters and external factors and may vary within the frustum space [21]. If these influences are quantifiable, an accuracy model can be considered within the computation of the C-space.
5) Sensor Orientation: When choosing the sensor pose for measuring an object's surface point or a feature, additional constraints must be fulfilled regarding its orientation. One fundamental requirement that must be satisfied to guarantee the acquisition of a surface point is the consideration of the incidence angle ^f φ_s (related terms: inclination, acceptance, view, or tilt angle). This angle is expressed as the angle between the feature's normal n_f and the sensor's optical axis (z-axis) e^z_s and can be calculated as follows:

^f φ_s = arccos( (n_f · (−e^z_s)) / (‖n_f‖ ‖e^z_s‖) ).

The maximal incidence angle ^f φ^{max}_s is normally provided by the sensor's manufacturer. The incidence angle can also be expressed on the basis of the Euler angles (pan, tilt) around the x- and y-axes: ^f φ_s(^f β^y_s, ^f γ^x_s). Furthermore, the rotation of the sensor around the optical axis is given by the Euler angle α^z_s (related terms: swing, twist). Normally, this angle does not directly influence the acquisition quality of the range image and can be chosen arbitrarily. Nevertheless, depending on the lighting conditions or the position of the light source in active systems, this angle might be more relevant and influence the acquisition parameters of the sensor, e.g., the exposure time. Additionally, if the shape of the frustum is asymmetrical, the optimization of α^z_s should be considered.

Fig. 4: Detailed kinematic and imaging model of the sensor in the x-z plane. The frustum space I_s is spanned by the imaging parameters of the sensor (d_s, h^{near}_s, h^{far}_s, θ^x_s, ψ^y_s) ∈ I_s considering a sensor pose p_s. The I-space is described by a minimum of eight vertices V^{I_s}_{1−8} (note that in this 2D view the vertices 5–8 lie on the far x-z plane and that the FOV angle ψ^y_s is not illustrated).
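The incidence-angle check can be sketched as below. The sign convention (optical axis pointing towards the surface, normal pointing away from it) is an assumption consistent with the description of the incidence angle above; the threshold value is hypothetical.

```python
import math

def incidence_angle(n_f, e_z_s):
    """Angle between the feature normal n_f and the sensor's optical axis
    e_z_s; 0 means the sensor looks straight onto the feature.

    The optical axis is assumed to point towards the surface and the
    normal away from it, hence the minus sign on the dot product."""
    dot = sum(a * b for a, b in zip(n_f, e_z_s))
    norm = (math.sqrt(sum(a * a for a in n_f))
            * math.sqrt(sum(b * b for b in e_z_s)))
    return math.acos(-dot / norm)

# Sensor looking straight down onto a feature whose normal points up:
phi = incidence_angle((0.0, 0.0, 1.0), (0.0, 0.0, -1.0))
print(math.degrees(phi))  # 0.0

# A viewpoint is admissible if phi stays below the manufacturer's limit,
# e.g., a hypothetical 45-degree maximum incidence angle:
admissible = phi <= math.radians(45.0)
```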

F. Robot
The robot (related terms: manipulator, industrial robot, positioning device) has the main task to position the sensor to acquire a range image.
1) Kinematics and Workspace: The robot base coordinate frame is placed at B_r. We assume that the rigid transformations between the robot base and the robot flange, ^r_{fr}T, and between the robot flange and the sensor, ^{fr}_s T, are known. We also assume that the Denavit-Hartenberg (DH) parameters are known and that the rigid transformation ^r_{fr}T(DH) can be calculated using the robot's kinematic model. The sensor pose in the robot's coordinate system is then given by

^r p_s = ^r_{fr}T · ^{fr}_s T.

The robot workspace is considered to be a subset of the special Euclidean group, thus W_r ⊆ SE(3). This topological space comprises all reachable robot poses to position the sensor, ^r p_s ∈ W_r.
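In practice, membership in W_r is decided by querying the robot's inverse kinematics for a valid joint configuration. As a hedged placeholder, the following sketch approximates W_r by a spherical shell around the robot base; the shell radii are assumed values, not properties of any particular robot.

```python
import math

def in_workspace(p_s, r_min, r_max):
    """Check whether a sensor position p_s = (x, y, z), given in the robot
    base frame, lies in a spherical-shell approximation of the workspace
    W_r. A real check would call the inverse kinematics instead."""
    r = math.sqrt(sum(c * c for c in p_s))
    return r_min <= r <= r_max

# Hypothetical reach limits: 0.3 m inner and 1.1 m outer radius.
print(in_workspace((0.5, 0.2, 0.4), r_min=0.3, r_max=1.1))  # True
```

Within the framework, such a reachability predicate would prune viewpoint candidates whose poses the robot cannot attain.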
2) Robot Absolute Position Accuracy: It is assumed that the robot has a maximal absolute pose accuracy error ϵ_r in its workspace and that the robot repeatability is much smaller than the absolute accuracy; hence, the repeatability is not considered further.

G. Environment
The environment domain comprises models of the remaining components that were not explicitly included in other domains. In particular, we consider all other rigid bodies that may collide with the robot or affect the visibility of the sensor, e.g., fixtures, external axes, and robot cell components. Thus, if the environment domain comprises rigid bodies, these must be included in the set of surface models κ_e ∈ K.

H. General assumptions and limitations
The previous subsections have introduced the formal models and parameters to characterize an RVS. Here, we present some general assumptions and limitations considered within our work.
• Sensor compatibility with feature geometry: Our approach assumes that a feature and its entire geometry can be captured with a single range image.

IV. PROBLEM FORMULATION
This section first introduces the concept of generalized viewpoints and briefly describes the viewpoint constraints considered within the scope of our work. Then, the modularization of the VPP and the formulation of the VGP as a geometric problem are introduced to understand the placement of the present study. In Subsection IV-E, C-spaces are introduced within the context of configuration spaces as a practical and simple approach for solving the VGP. Moreover, considering that various viewpoint constraints must be satisfied to calculate a valid viewpoint, we outline the reformulation of the VGP based on C-spaces within the framework of Constraint Satisfaction Problems.

A. Viewpoint and V-space
Although the concept of generalized viewpoints has been introduced by some of the related works, there seems to be no clear definition of a viewpoint v. Hence, in this study, following a feature-centered formulation, we define a viewpoint as a triple comprising a sensor pose p_s ∈ SE(3) to acquire a feature f ∈ F under a set of viewpoint constraints C from any domain of the RVS:

v := (f, p_s, C). (6)

Additionally, we consider that any viewpoint that satisfies all constraints is an element of the viewpoint space (V-space), v ∈ V. Hence, the V-space can be formally defined as a tuple comprising a feature space, denoted by the feature set F, and the C-space F C(C), which satisfies all spatial viewpoint constraints to position the sensor:

V := (F, F C(C)). (7)

Note that within this publication we consider only spatial viewpoint constraints affecting the placement of the sensor. As stated in the limitations of our work in Subsection III-H, additional sensor setting parameters are not explicitly addressed. Nevertheless, for the sake of correctness and completeness, let these constraints be denoted by C_s; Equation 7 can then be extended as follows:

V := (F, F C(C), C_s).
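The viewpoint triple (f, p_s, C) and the V-space membership test can be sketched as a small data structure. The reachability predicate below is a hypothetical placeholder, not one of the constraints formalized later:

```python
from dataclasses import dataclass
from typing import Callable, Sequence
import numpy as np

@dataclass(frozen=True)
class Viewpoint:
    """Feature-centered viewpoint: a triple (feature, sensor pose, constraint set)."""
    feature_id: str
    pose: np.ndarray                 # 4x4 homogeneous sensor pose p_s in SE(3)
    constraints: Sequence[Callable]  # binary constraint checks c_i(pose) -> bool

    def is_valid(self) -> bool:
        # The viewpoint is an element of the V-space iff every constraint holds
        return all(check(self.pose) for check in self.constraints)

# Hypothetical spatial constraint: sensor at most 2 m from the robot base origin
within_reach = lambda T: float(np.linalg.norm(T[:3, 3])) <= 2.0

v = Viewpoint("f0", np.eye(4), [within_reach])
```

The conjunction in `is_valid` mirrors the requirement that a viewpoint satisfies all constraints of C simultaneously.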

B. Viewpoint Constraints
To provide a comprehensive model of a generalized viewpoint and assess its validity, it is necessary to formulate a series of viewpoint constraints. Hence, we propose an abstract formulation of the viewpoint constraints needed to acquire a feature successfully. The set of viewpoint constraints c_i ∈ C, i = 1, ..., j, comprises all constraints c_i affecting the positioning of the sensor and, hence, the validity of a viewpoint candidate. Every constraint c_i can be regarded as a collection of domain variables of the regarded RVS, e.g., the imaging parameters I_s, the feature geometry length l_f, or the maximal incidence angle φ_max. This subsection provides a general description of the constraints; a more comprehensive formulation and characterization are handled individually within Sections VI and VII. An overview of the viewpoint constraints considered in our work is given in Table I.
Although some related studies [11,12,20,36] also considered similar constraints, the main differences from our formulation lie in the explicit characterization of each constraint and its integration with the others. While some of these works assess a viewpoint's validity in a reduced 2D space or a sampled space, our work focuses on characterizing each constraint explicitly in a higher-dimensional and continuous space.

C. Modularization of the Viewpoint Planning Problem
The necessity of breaking down the VPP into two sub-problems can be better understood by considering the following minimal problem formulation based on Tarbox and Gottschlich [20]: Problem 1. How many viewpoints are necessary to acquire a given set of features?
We believe that a multi-stage solution to tackle the VPP can reduce its complexity and contribute to a more efficient solution. Thus, in the first step, we consider the modularization of the VPP and address its two fundamental problems separately: the VGP and the Set Covering Problem (SCP).
First, we attribute to the VGP the computation of valid viewpoints to acquire a single feature considering a set of various viewpoint constraints. Moreover, in the context of multi-feature scenarios, and presuming that not all features can be acquired using a single viewpoint, the efficient selection of further viewpoints becomes necessary to complete the vision task, which gives rise to the second problem, the SCP.
This paper concentrates on the comprehensive formulation of the VGP; hence, this problem is discussed more extensively in the following sections.

D. The Viewpoint Generation Problem
The VGP (related terms: optical camera placement, camera planning) and concept of viewpoints can be better understood considering a proper formulation: Problem 2. Which is a valid viewpoint v to acquire a feature f considering a set of viewpoint constraints C?
A viewpoint v exists if there is at least one sensor pose p_s that can capture a feature f, and only if all j viewpoint constraints C are fulfilled. The most straightforward way to find a valid viewpoint for Problem 2 is to assume an ideal sensor pose p_s^0 and assess its satisfiability against each constraint using a binary function h_i : (p_s^0, c_i) → {true, false}. If the sensor pose fulfills all j constraints, the viewpoint is valid. Otherwise, another sensor pose must be chosen and the process repeated until a valid viewpoint is found. The mathematical formulation of such conditions is expressed as follows:

v = (f, p_s, C) ∈ V ⟺ h_i(p_s, c_i) = true ∀ i ∈ {1, ..., j}. (8)

The formulation of a generalized viewpoint as given by Equation (8) can be considered one of the most straightforward formulations to solve the VGP, if for each viewpoint constraint a Boolean condition can be expressed.

TABLE I: Overview and description of the viewpoint constraints considered in our work.

Frustum space
The most restrictive and fundamental constraint is given by the imaging capabilities of the sensor. This constraint is fulfilled if at least the feature's origin lies within the frustum space (cf. Subsection III-E2).

Sensor Orientation
Due to specific sensor limitations, it is necessary to ensure that the incidence angle between the optical axis and the feature normal stays within the maximal permitted range; see Equation (5).

Feature Geometry
This constraint can be considered an extension of the first viewpoint constraint and is fulfilled if all surface points of a feature can be acquired by a single viewpoint, hence lying within the image space.

Kinematic Error
Within the context of real applications, model uncertainties affecting the nominal sensor pose compromise a viewpoint's validity. Hence, any factor affecting the overall kinematic chain of the RVS, e.g., kinematic alignment (Subsection III-B1) or the robot's pose accuracy (Subsection III-F2), must be considered.

Sensor Accuracy
Acknowledging that the sensor accuracy may vary within the sensor image space (see Subsection III-E4), we consider that a valid viewpoint must ensure that a feature is acquired with sufficient quality.

Feature Occlusion
A viewpoint can be considered valid if a free line of sight exists from the sensor to the feature.More specifically, it must be assured that no rigid bodies are blocking the view between the sensor and the feature.

Bistatic Sensor and Multisensor
Recalling the bistatic nature of range sensors, we consider that all viewpoint constraints must be valid for all lenses or active sources.Furthermore, we also extend this constraint for considering a multisensor RVS comprising more than one range sensor.

Robot Workspace
The workspace of the whole RVS is limited primarily by the robot's workspace.Thus, we assume that a valid viewpoint exists if the sensor pose lies within the robot workspace.

Multi-Feature
Considering a multi-feature scenario, where more than one feature can be acquired from the same sensor pose, we assume that all viewpoint constraints for each feature must be satisfied simultaneously.

For instance, by introducing cost functions for the different viewpoint constraints, several works [9,21,36,37,46-48] demonstrated that optimization algorithms (e.g., greedy, genetic, or even reinforcement learning algorithms) can be used to find local and global optimal solutions in polynomial time.
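The generate-and-test strategy implied by Equation (8) can be sketched as rejection sampling over candidate sensor poses. The two constraint predicates below are hypothetical placeholders standing in for the binary checks h_i, not the constraints of Table I:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical binary constraint functions h_i; for brevity the pose is
# reduced to a 3D sensor position
constraints = [
    lambda p: 0.3 <= np.linalg.norm(p) <= 1.0,  # stand-off distance window
    lambda p: p[2] > 0.0,                       # sensor above the feature plane
]

def sample_valid_viewpoint(max_iter=5000):
    """Generate-and-test: draw candidate positions until every check passes."""
    for _ in range(max_iter):
        candidate = rng.uniform(-1.0, 1.0, size=3)
        if all(h(candidate) for h in constraints):
            return candidate
    return None  # no valid viewpoint found within the iteration budget

p_valid = sample_valid_viewpoint()
```

The loop illustrates why this formulation is straightforward but potentially inefficient: each rejected candidate costs a full evaluation of all constraints, which motivates the C-space formulation pursued below.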

E. VGP as a geometrical Problem in the context of Configuration Spaces
Although the generalized viewpoint model as given by Equation (8) yields a sufficient and generic formulation to solve the VGP, this formulation is inefficient for real applications with model uncertainties. System modeling inevitably involves discrepancies between virtual and real-world models, particularly within dynamically changing environments. Due to such model inconsistencies, relying on a single optimal viewpoint to acquire a feature can be regarded as ineffective and inefficient in some applications. Hence, in our opinion, it is more reasonable to treat the VGP as a multi-dimensional problem and to consider multiple valid solutions throughout its formulation.
A sound solution for the VGP will require characterizing a continuous topological space comprising multiple solutions that allow deviating from an optimal solution and efficiently choosing an alternative viewpoint.This challenge embodies the core motivation of our work to formulate and characterize C-spaces.
Since the VGP can be handled as a spatial problem that can be solved geometrically, we draw on configuration spaces, as introduced by Lozano-Pérez [49] and Latombe [50] and exhaustively studied by LaValle [51], which are well established in the motion planning field for solving geometric path planning problems.
"Once the configuration space is clearly understood, many motion planning problems that appear different in terms of geometry and kinematics can be solved by the same planning algorithms. This level of abstraction is therefore very important." [51]

In our work, we use the general concepts of configuration spaces based on the formulation of topological spaces to characterize the manifold spanned by a viewpoint constraint: the C-space.

F. VGP with ideal C-spaces
Following the notation and concepts behind the modeling of configuration spaces, we first consider a modified formulation of Problem 2 and assume an ideal system (i.e., a sensor with an infinite field of view, without occlusions, and neglecting any other constraint) to introduce some general concepts: Problem 3. Which is the ideal C-space C* to acquire a feature f?
Sticking to the notation established within the motion planning research field, let us first consider an ideal, unconstrained space denoted as C* ⊆ SE(3), which is spanned by the Euclidean space R^3 and the special orthogonal group SO(3) and holds all valid sensor poses p_s to acquire a feature f. An abstract representation of the unconstrained C-space C* is visualized in Figure 5.

G. VGP with C-spaces
The ideal C-space as given by Equation 9 considers a sufficiently generic model that spans an ideal solution space to solve the VGP. Assuming a non-ideal RVS, where a viewpoint must satisfy a handful of requirements, an extended formulation of the C-space admitting viewpoint constraints is introduced within this subsection.
1) Motivation: The VGP recalls the formulation of decision problems, a class of computational problems that has been widely researched within different applications. Inspired by other research fields dealing with artificial intelligence and the optimization of multi-domain applications, we observed that decision problems including multiple constraints can be well formulated under the framework of Constraint Satisfaction Problems (CSPs) [52]. This category of problems does not prescribe an explicit technique to formulate the regarded constraints. Moreover, a consistent, declarative, and simple representation of the domain's constraints can be decisive for their efficient resolution [53].
2) Formulation: Formulating the VGP as a CSP first requires a proper formulation of the viewpoint constraints; hence, let Problem 3 be extended as follows: Problem 4. Which is the C-space C spanned by a set of viewpoint constraints C to acquire a feature f?
The C-space, denoted as C, can be understood as the topological space that all viewpoint constraints from the set C span in the special Euclidean group, so that the sensor can capture a feature f while fulfilling all of these constraints. The C-space that a single viewpoint constraint c_i ∈ C spans is analogously denoted C_i. To guarantee a consistent formulation and integration of various viewpoint constraints, within our framework we consider the following characteristics for the formulation of C-spaces:
1) If a constraint c_i can be spatially modeled, there exists a topological space denoted as C_i, which can ideally be formulated as a proper subset of the special Euclidean group, C_i ⊂ SE(3). In a broader definition, we consider that the topological space for each constraint is spanned by a subset of the Euclidean space, denoted as T_s ⊆ R^3, and a subset of the special orthogonal group, R_s ⊆ SO(3). Hence, the topological space of a viewpoint constraint is given as C_i = T_s × R_s.
2) If there exists at least one sensor pose in the i-th C-space, ∃ p_s ∈ C_i, then this sensor pose fulfills the viewpoint constraint c_i to acquire feature f; hence, a valid viewpoint (f, p_s, c_i) ∈ V exists.
3) If there exists a topological space C_i for each constraint ∀ c_i ∈ C, then the intersection of all individual constrained spaces constitutes the joint C-space: C = C_1 ∩ ... ∩ C_j ⊆ SE(3).
4) If the joint constrained space is a non-empty set, i.e., C ≠ ∅, then there exists at least one sensor pose ∃ p_s ∈ C and consequently a viewpoint (f, p_s, C) ∈ V that fulfills all viewpoint constraints.
An abstract representation of the C-spaces and the resulting topological space C intersected by various viewpoint constraints is depicted in Figure 5. It is worth mentioning that although the framework considers an independent formulation of each viewpoint constraint, the real challenge consists in characterizing each constraint individually while maintaining a high generalization and flexibility of the framework (cf. Table IV).
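The intersection rule C = C_1 ∩ ... ∩ C_j can be illustrated with a minimal numeric sketch. The snippet below is a toy stand-in, using axis-aligned boxes instead of full SE(3) manifolds and entirely hypothetical extents, for the CSG intersection the framework performs on real meshes:

```python
import numpy as np

def intersect_boxes(boxes):
    """Toy stand-in for the CSG intersection C = C_1 ∩ ... ∩ C_j: each C-space
    is reduced to an axis-aligned box (min_corner, max_corner)."""
    lo = np.max([np.asarray(b[0]) for b in boxes], axis=0)
    hi = np.min([np.asarray(b[1]) for b in boxes], axis=0)
    if np.any(lo >= hi):
        return None  # empty joint C-space: no pose satisfies all constraints
    return lo, hi

# Hypothetical translational C-spaces (metres) spanned by three constraints
C1 = ([0.0, 0.0, 0.2], [1.0, 1.0, 1.0])
C2 = ([0.3, -0.5, 0.0], [0.8, 0.5, 0.9])
C3 = ([0.1, 0.1, 0.1], [0.9, 0.9, 0.8])

C_joint = intersect_boxes([C1, C2, C3])
```

The `None` branch corresponds to characteristic 4 above: an empty joint space means no viewpoint can satisfy all constraints at once.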
V. TECHNICAL SETUP
This section provides an overview of the hardware and software used for the characterization of the C-spaces within the following sections. We briefly introduce the considered domains, parameters, and specifications that were employed to verify the individual formulations of C-spaces presented in Sections VI and VII.

A. Domain Models
• Sensors: We used two different range sensors for the individual verification of the C-spaces and for the simulation-based and experimental analyses. The imaging parameters (cf. Subsection III-E2) and kinematic relations of both sensors are given in Table II. The parameters of the lighting source of the ZEISS Comet PRO AE sensor are conservatively estimated values, which guarantee that the frustum of the sensor lies completely within the field of view of the fringe projector. A more comprehensive description of the hardware is provided in Section VIII.
• Object, features, and occlusion bodies: For verification purposes, we designed an academic object comprising three features and two occlusion objects with the characteristics given in Table VII in the Appendix.
• Robot: We used a Fanuc M-20ia six-axis industrial robot and the respective kinematic model to compute the final viewpoints to position the sensor.

B. Software
The backbone of our framework was developed based on the Robot Operating System (ROS) (distribution: Noetic Ninjemys) [54]. The framework was built upon a knowledge-based, service-oriented architecture. A more detailed overview of the general conceptualization of the architecture and knowledge base is provided in our previous works [55,56].
Most of our algorithms consist of generating and manipulating 3D manifolds. Hence, based on empirical studies, we evaluated different open-source Python 3 libraries and used them according to their performance on diverse computation tasks. For example, the PyMesh library from Zhou et al. [57] demonstrated the best computational performance for Boolean operations. In contrast, we used the trimesh library [58] for verification purposes and for performing ray-casting operations, due to its integration of the performance-oriented Embree library [59]. Additionally, for further verification, visualization, and user interaction purposes, we coupled the ROS kinematic simulation to Unity [60] using the ROS# library [61].
All operations were performed on a portable Lenovo W530 workstation running Ubuntu 20.04 with the following specifications: Intel Core i7-4810MQ processor @2.80 GHz, Nvidia 3000KM GPU, and 32 GB RAM.

VI. CORE C-SPACE USING FRUSTUM SPACE, FEATURE POSITION, AND SENSOR ORIENTATION
This section outlines the formulation and characterization of the C-space spanned by two fundamental viewpoint constraints: the imaging parameters, characterized by the frustum space (I-space) together with the feature's position, and the sensor orientation. We systematically analyze how the sensor frustum space can be used to describe a C-space in SE(3) and introduce simple and generic formulations for its computation.
This section begins by introducing an extended formulation of the I-space aligned to the concept of configuration spaces. Then, by simplifying the VGP to a fixed sensor orientation, we show how the frustum space and the feature position are used to characterize the core C-space of our work. The second subsection shows how C-spaces can be combined to span a topological space in the special Euclidean group using different sensor orientations.
To ease comprehension of the concepts introduced within this section, we consider a feature as just a single surface point with a normal vector. The manifolds of the computed C-spaces and additional supporting material for this and the following section are found in the digital appendix of this publication.

A. Frustum Space, Feature Position, and fixed Sensor Orientation
This subsection shows how the feature position and the frustum space (I-space) can be directly employed to characterize a C-space that fulfills the first viewpoint constraint for a fixed sensor orientation.
1) Base Constraint Formulation: In the first step, we introduce a simple condition for the first constraint, c_1 := c_1(g_f,0, p_s, I_s), which considers the feature (minimally represented by a surface point) and the frustum space, which is characterized by all imaging parameters and the sensor pose. Let c_1 be fulfilled for all sensor poses p_s ∈ SE(3) if and only if the feature surface point lies within the corresponding frustum space at the regarded sensor pose:

g_f,0 ∈ I_s(p_s). (13)

2) Problem Simplification considering a fixed Sensor Orientation: Due to the limitations of some sensors regarding their orientation, it is common practice within many applications to define and optimize the sensor orientation beforehand. Then, the VGP can be reduced to an optimization of the sensor position t_s. Hence, let condition (13) be reformulated to consider a fixed sensor orientation r_s^fix ∈ SO(3) and to be true for all sensor positions t_s ∈ R^3 that fulfill the following condition:

g_f,0 ∈ I_s(t_s, r_s^fix). (14)

3) Constraint Reformulation based on Constrained Spaces: Recalling the idea of geometrically characterizing any viewpoint constraint (see Subsection IV-G), we find the viewpoint constraint formulation of Equation (14) to be unsatisfactory. We believe that this problem can be solved efficiently using geometric analysis and assume there exists a topological space, denoted by C_1 := C_1(c_1), which can be characterized based on the I-space considering a fixed sensor orientation. If such a space exists, then all sensor positions within it fulfill the viewpoint constraint given by Equation (14).
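For a convex frustum, condition (13) reduces to a point-in-polytope test. A minimal sketch with hypothetical frustum dimensions (a square truncated pyramid along the optical z-axis; all values illustrative, not sensor data from Table II):

```python
import numpy as np

def point_in_frustum(p, near=0.3, far=1.0, half_near=0.1, half_far=0.4):
    """Base constraint c_1 for a fixed orientation: does the feature surface
    point p (expressed in the sensor frame) lie inside a square truncated
    pyramid whose optical axis is +z? Dimensions in metres, hypothetical."""
    x, y, z = p
    if not (near <= z <= far):
        return False  # in front of the near plane or behind the far plane
    # lateral half-width grows linearly between the near and far planes
    half = half_near + (z - near) / (far - near) * (half_far - half_near)
    return abs(x) <= half and abs(y) <= half

inside = point_in_frustum(np.array([0.0, 0.0, 0.6]))
outside = point_in_frustum(np.array([0.5, 0.5, 0.3]))
```

In practice the same containment test can be run against the watertight frustum mesh itself (e.g., via ray casting), but the linear half-width form keeps the geometry explicit.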
Combining the formulation for C-spaces given by Equation 11 and the viewpoint constraint condition from Equation (14), the formal definition of the topological space C_1 is given:

C_1 := {t_s ∈ R^3 | g_f,0 ∈ I_s(t_s, r_s^fix)}. (15)

4) Characterization: Within the framework of our research, we found that the manifold of C_1 can be characterized in different ways. This subsection presents two possible solutions to characterize the C-space as given by Equation 15 using analytic geometry.

1. Extreme Viewpoints Interpretation
The simplest way to understand and visualize the topological space C_1 is to consider all possible extreme viewpoints to acquire a feature f. These viewpoints can easily be found by positioning the sensor so that each vertex (corner) of the I-space lies at the feature's origin B_f, which corresponds to the position of the surface point g_f,0. The position of such an extreme viewpoint then corresponds to the k-th vertex of the constrained space:
1) Select the k-th vertex of the frustum space I_s.
2) Position the sensor so that the k-th vertex of the frustum space lies at the feature's origin.
3) Let the coordinates of the k-th vertex of the constrained space ref C_1 be equal to the translation vector of the sensor frame.
The left Figure 6 illustrates the geometric relations for computing C_1 and a simplified representation of the resulting manifolds in R^2 for the sensor TCP frame and lens frame s_1.

2. Homeomorphism Formulation
Note that the manifold ref C_1 illustrated in Figure 6a has the same topology as the I-space. Thus, it can be assumed that there exists a homeomorphism between both spaces, h : I_s → ref C_1. Letting the function h correspond to a point reflection over the geometric center of the frustum space, the vertices of the ref C_1 manifold can be straightforwardly estimated following the steps described in Algorithm 2; the resulting manifold for the TCP frame is shown in Figure 6b:
3) For each k-th vertex of the frustum space, compute its reflection transformation h across the reference pivot frame.
4) Connect all vertices of ref C_1 analogously to the vertices of the frustum space I_s to obtain the ref C_1 manifold.
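The point reflection h takes only a few lines. The cube vertices below merely stand in for real frustum corners; a point reflection maps the cube's vertex set onto itself, which makes the example easy to check:

```python
import numpy as np

def reflect_vertices(vertices):
    """Point-reflect a manifold's vertices across its geometric center
    (the homeomorphism h: v -> 2c - v used to obtain ref C_1 from I_s)."""
    center = vertices.mean(axis=0)
    return 2.0 * center - vertices

# Toy frustum vertices (a unit cube stands in for the real I-space corners)
V_frustum = np.array(
    [[x, y, z] for x in (0.0, 1.0) for y in (0.0, 1.0) for z in (0.0, 1.0)]
)
V_reflected = reflect_vertices(V_frustum)
```

Because reflection is an involution that preserves the center, applying it twice recovers the original vertices, which is a convenient sanity check for an implementation.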
• General Notes: We consider the steps described in Algorithm 2 to be the most traceable strategy for computing the constrained space ref C_1 using a homeomorphism. Nevertheless, we do not rule out alternative approaches for its characterization. For instance, the same manifold of ref C_1 for any reference frame can also be obtained by first computing the reflection of the frustum space, I*_s, over its geometric center. The manifold of I*_s can then simply be translated to the desired reference frame, considering that the TCP must be positioned at the origin of the feature. Moreover, our approach considers that the topological space spanned by C_1 exists if the following conditions hold:
• the frames of all vertices of the frustum space V_s,i(I_s), i = 1..j, are known,
• the frustum space is a watertight manifold,
• and the space between connected vertices of the frustum space is linear; hence, adjacent vertices are connected only by straight edges.
Throughout this paper, we characterize most of the C-spaces considering just the reference frame of the sensor lens s_1; hence, if not stated otherwise, consider C_1 := s1 C_1.
5) Verification: Either of the two formulations presented in this subsection can be straightforwardly extended to characterize the C-space C_1 in SE(3). However, we found the homeomorphism formulation to be the most practical way to compute the C_1 manifold. Hence, to verify the characterization of C_1 using this approach, we first defined a sensor orientation in SE(3), denoted as f0 r_s^fix, to acquire the feature f_0. We then computed the I-space manifold using a total of j = 8 vertices with the imaging parameters of s_1 from Table II and computed the reflected manifold of the I-space, I*_s, as proposed by Equation 16. In the next step, we transformed I*_s using the rigid transformation f_TCP T to obtain the C-space manifold TCP C_1 for the sensor TCP frame, and the transformation f_s1 T = f_TCP T · TCP_s1 T to characterize the manifold s1 C_1 for the sensor lens frame.
Figure 7 shows the resulting C_1 manifolds considering different sensor orientations. The left Figure 7a visualizes the s1 C_1 and TCP C_1 manifolds for a given fixed orientation. On the other hand, Figure 7b visualizes the C-spaces just for the sensor lens considering two different sensor orientations.
To assess the validity of the characterized C-spaces, we selected eight extreme sensor poses lying at the vertices of the C-space manifold, {p_s,1, ..., p_s,8} ∈ C_1, and computed their corresponding frustum spaces I_s(p_s,1), ..., I_s(p_s,8). Our simulations confirmed that the feature f_0 lay within the frustum space for all extreme sensor poses, hence satisfying the core viewpoint condition (14). Some exemplary extreme sensor poses and their corresponding I-spaces are shown in Figure 7. The rest of the renders, manifolds of the C-spaces, frames, and object meshes for this and the following examples can be found in the digital supplementary material of this paper.
As expected, characterizing one C-space showed good computational performance, with an average computation time (30 repetitions) of 4.1 ms and a standard deviation of σ = 2.4 ms. The computation comprises a read operation of the (hard-coded) vertices of the frustum space as well as the required reflection and transformation operations of a manifold with eight vertices.
6) Summary: This subsection outlined the formulation and characterization of the fundamental C-space C_1, which is based on the sensor imaging parameters, the feature position, and a fixed sensor orientation. Using an academic example, we demonstrated that any sensor pose (with fixed orientation) within C_1 is valid for acquiring the regarded feature while satisfying the imaging sensor constraints. Moreover, two different strategies were proposed to efficiently characterize such a topological space based on fundamental geometric analysis.
The formulations and characterization strategies introduced in this subsection are considered the backbone of our framework. The potential and benefits of the core C-space C_1 are exploited within the following subsection and Section VII, which address the integration of further viewpoint constraints.

B. Range of Orientations
In the previous subsection, the formulation of C-spaces for a fixed sensor orientation was introduced. Building on this formulation, this subsection outlines a topological space in the special Euclidean group SE(3) that allows a variation of the sensor orientation.
Fig. 6: Geometrical characterization of the C-space C_1 using the frustum space with two different approaches: (a) the extreme viewpoint formulation, which characterizes C_1 by positioning all vertices of I_s at the feature's frame, and (b) the homeomorphism formulation, which reflects all vertices of I_s across the feature's frame B_f.
1) Motivation: Within the scope of our work, taking into account applications that comprise an a priori model of the object and its features, and the problem simplification addressed in Subsection VI-A2, we consider it unreasonable and inefficient to span a configuration space that comprises all orientations in R_s ⊆ SO(3). This assumption can be confirmed by observing Figure 7b, which demonstrates that a topological space allowing sensor rotations with incidence angles of −25° and 30° does not exist.
For this reason, we consider it more practical to span a configuration space that comprises a minimal and maximal sensor orientation range, r_s^min ≤ r_s ≤ r_s^max ∈ R_s, instead of an unlimited space with all possible sensor orientations. The minimal and maximal orientation values can be defined considering the sensor limitations given by the second viewpoint constraint.
2) Formulation: First, consider the range of sensor orientations R_s, and let the C-space for a single orientation as given by Equation 15 be extended accordingly. The topological space that considers a range of sensor orientations, denoted by C_2(R_s), can be seamlessly computed by intersecting the individual configuration spaces for each of the m orientations:

C_2(R_s) = C_1(r_s,1) ∩ ... ∩ C_1(r_s,m). (18)

3) Characterization: The C-space C_2(R_s) as given by Equation 18 can be seamlessly computed using CSG Boolean intersection operations. Considering that each intersection operation yields a new manifold with more vertices and edges, it is well known that the computational complexity of CSG operations increases with the number of vertices and edges. Thus, to compute C_2(R_s) in feasible time, a discretization of the orientation range must first be considered.
One simple and pragmatic solution is to consider a discrete set of sensor orientations comprising the maximal and minimal allowed sensor orientations, e.g., over β_s^y with a discretization step of r_s^d = 10°, for the positioning frames of the TCP, TCP C_2(R_s), and of the sensor lens, s1 C_2(R_s). The C_2 manifolds are characterized by intersecting all individual spaces C_1 as given by Equation 18.
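The discretize-and-intersect idea can be sketched as follows. A hypothetical cone-shaped visibility test stands in for the full C_1 construction; membership in the joint space requires the test to hold for every sampled orientation, mirroring the intersection of Equation 18:

```python
import numpy as np

def rot_x(a):
    """Rotation about the sensor x-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

# Discretized orientation range R_s: tilts of {-10°, 0°, 10°} about x
steps = np.deg2rad([-10.0, 0.0, 10.0])

def sees_feature(position, R, half_angle=np.deg2rad(20.0)):
    """Hypothetical frustum proxy: the feature at the origin must lie within
    a cone of the given half-angle around the rotated optical (z) axis."""
    optical_axis = R @ np.array([0.0, 0.0, 1.0])
    to_feature = -position / np.linalg.norm(position)
    return float(optical_axis @ to_feature) >= np.cos(half_angle)

def in_C2(position):
    """Membership in the joint space C_2(R_s): the constraint must hold for
    EVERY sampled orientation, i.e., the intersection of the individual spaces."""
    return all(sees_feature(position, rot_x(a)) for a in steps)

p_inside = in_C2(np.array([0.0, 0.0, -1.0]))    # directly below the feature
p_outside = in_C2(np.array([0.0, -1.0, -1.0]))  # too far off-axis
```

The real framework performs this intersection once on meshes via CSG rather than re-evaluating a predicate per query, but the logical structure (conjunction over the discretized orientations) is the same.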
As can be observed from Fig. 8b and Fig. 8c, the manifolds s1 C_2 and TCP C_2 span different topological spaces (here in SE(1)) depending on the selected positioning frame. Contrary to the C-spaces s1 C_1 and TCP C_1 for a fixed sensor orientation, the selection of the reference positioning frame should be considered before computing C_2, taking into account the explicit requirements and constraints of the vision task.
Discretization without Interpolation: Note that C_2(R_s) spans a topological space that is valid only for the discretized sensor orientations of R_s (Equation 19). This characteristic can particularly be appreciated at the top of the s1 C_2(r_s^min, r_s^max) manifold in Figure 8a. Thus, it should be kept in mind that the constrained space C_2(R_s) does not allow an explicit interpolation between the orientations of R_s.
Approximation of C_2: However, as can be observed from Figure 8, the topological space spanned considering a step size of 10°, C_2(R_s(r_s^d = 10°)), is almost identical to the space obtained if we relax the step size to 20°, C_2(R_s(r_s^d = 20°)). Hence, it can be assumed for this case that the C-spaces are almost identical and the following condition holds:

C_2(R_s(r_s^d = 10°)) ≈ C_2(R_s(r_s^d = 20°)). (20)

4) Verification: For verification purposes, we consider an academic example with the following sensor orientation ranges: γ_s^x = {−5, 0, 5}°, β_s^y = {−5, 0, 5}°, and α_s^z = {−10, 0, 10}°. The resulting C-space, C_2(R_s), was computed by intersecting the constrained space C_1(f_0, I_s, f r_s) for each possible sensor orientation combination, i.e., 3^3 = 27 sensor orientations (cf. Subsection VI-A). The computation time corresponds to t(C_2(R_s)) = 15 s. Figure 9 visualizes the 6D manifold of the C-space obtained through Boolean intersection operations. Additionally, the ideal constrained space considering a null rotation is also displayed to provide a qualitative comparison of the reduction of the C-space when considering a rotation space in SE(3).
To verify the validity of the computed manifold, four extreme viewpoints and their corresponding frustum spaces are displayed in Figure 9, considering randomly chosen orientations. Note that while the first two viewpoints consider an explicit orientation within the given orientation range, {f r_s,1, f r_s,2} ∈ R_s, the sensor orientations of the third and fourth viewpoints are not elements of the range, {f r_s,3, f r_s,4} ∉ R_s, but lie within the interpolation range. The frustum spaces prove that all viewpoints can capture f satisfactorily. Although the sensor poses p_s,3 and p_s,4 prove to be valid in this case, this cannot be guaranteed for arbitrary orientations. Nevertheless, this confirms that the approximation condition given by Equation 20 holds to some extent.
To provide a more quantifiable evaluation of this approximation, the constrained space C_2 was additionally computed considering a finer discretization step of r_s^d = 2.5°; the ratio between the resulting manifold volumes amounts to 0.9995. These experiments show that the differences between the manifold volume ratios for the selected steps are insignificant and that the approximation with a step of r_s^d = 2.5° holds for this case.
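A manifold volume ratio of this kind can be sketched numerically. Toy box volumes stand in for the real C_2 manifolds here; the numbers are hypothetical, not the measured 0.9995 above:

```python
import numpy as np

def box_volume(lo, hi):
    """Volume of an axis-aligned box given min/max corners."""
    return float(np.prod(np.maximum(np.asarray(hi) - np.asarray(lo), 0.0)))

# Hypothetical joint C-space extents obtained with a coarse and a fine
# orientation discretization step (metres; illustrative numbers only)
coarse = ([0.30, 0.10, 0.20], [0.80, 0.50, 0.80])
fine = ([0.31, 0.10, 0.20], [0.80, 0.50, 0.79])

# A ratio close to 1 indicates the finer discretization barely shrinks
# the space, i.e., the coarser step is an acceptable approximation
volume_ratio = box_volume(*fine) / box_volume(*coarse)
```

For real watertight meshes, the same ratio can be obtained from the mesh volumes reported by a geometry library such as trimesh.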

5) General Notes:
It is important to note that the validity of the approximation introduced by Equation 20 must be assessed individually for each application, set of imaging parameters, and set of further constraints. Some preliminary experiments showed that when considering further viewpoint constraints that depend on the sensor orientation, e.g., the feature geometry (see Subsection VII-A), the differences between the spaces obtained with different discretization steps may be more considerable. A more comprehensive analysis falls outside the scope of this study and remains to be investigated. We urge the reader to perform empirical experiments to choose an adequate discretization step and a good trade-off between accuracy and efficiency.
6) Summary: Contrary to the previously introduced C-space C 1 , which is limited to a fixed sensor orientation, this subsection outlined the formulation and characterization of the C-space C 2 in SE(3) that satisfies the sensor imaging parameters for different sensor orientations. We demonstrated that the manifold C 2 is straightforwardly characterized by intersecting multiple C-spaces with different sensor orientations.
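The intersection of C-spaces spanned at different sensor orientations can be sketched with standard convex-geometry tools. The following minimal example is an illustration only, assuming each manifold is convex and using two axis-aligned boxes as stand-ins for the per-orientation C-spaces; real C 2 manifolds would replace them:

```python
import numpy as np
from scipy.spatial import ConvexHull, HalfspaceIntersection

def intersect_convex_manifolds(vertex_sets, interior_point):
    """Intersect convex C-space manifolds given by their vertex sets.

    Each hull contributes facet halfspaces A x + b <= 0; stacking them and
    running a halfspace intersection yields the joint manifold."""
    halfspaces = np.vstack([ConvexHull(v).equations for v in vertex_sets])
    hs = HalfspaceIntersection(halfspaces, np.asarray(interior_point, float))
    return ConvexHull(hs.intersections)  # hull of the joint C-space vertices

# Two unit boxes (stand-ins for C-spaces at two orientations), offset by
# 0.5 along x; their intersection is a slab of volume 0.5 * 1 * 1 = 0.5.
box = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
joint = intersect_convex_manifolds([box, box + [0.5, 0.0, 0.0]], [0.75, 0.5, 0.5])
```

Since every polytope contributes its facet halfspaces, the joint manifold falls out of a single halfspace intersection, mirroring the iterative CSG intersection of the per-orientation C-spaces.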

VII. C-SPACES OF REMAINING VIEWPOINT CONSTRAINTS
Finding a simple constraint formulation might sometimes be considerably more challenging than solving the overall problem [53]. Hence, posing the VGP as a Constraint Satisfaction Problem requires a comprehensive, individual, and compatible formulation of each viewpoint constraint. While some of the related works have proposed binary coverage functions to assess the validity of each constraint for each viewpoint, we opt to exploit the concept of C-spaces to solve this problem geometrically using linear algebra, trigonometry, and geometric analysis.
While Section VI introduced the core constraints based on the frustum space, sensor orientation, and feature position, this section outlines an individual formulation and characterization of the remaining viewpoint constraints (see Table I). Moreover, Subsection VII-G presents one possible strategy to integrate all C-spaces, demonstrating the advantages of a consistent and modular characterization.
The formulations presented in this section are motivated by the general requirements (cf. Table IV) that aim to deliver a high generalization of the models to facilitate their use with different RVSs and vision tasks. Hence, whenever possible, the characterization of constraints using simple scalar arithmetic is prioritized over more complex techniques, and simplifications are introduced for the benefit of pragmatism, efficiency, and generalization of the approaches considered.

A. Feature Geometry
In many applications, the feature geometry is a fundamental viewpoint constraint that may considerably limit the space in SE(3) for positioning the sensor. This subsection shows that the required C-space affected by the feature geometry can be efficiently and explicitly characterized using trigonometric relationships that depend on the feature geometry, the sensor's FOV angles, and the sensor orientation.
1) Formulation: In line with the assumptions addressed in Subsection III-H, it can be assumed that all feature surface points G f (L f ) must be acquired simultaneously. The C-space that fulfills this requirement can be easily formulated by extending the base constraint of C 1 as given by Equation 15, considering that all surface points must lie within the frustum space.
2) Generic Characterization: Taking into account the generic formulation of the C-space C 3 from Equation 21, in the simplest case it can be assumed that C 3 could be obtained by scaling C 1 . Let the required scaling vector be denoted by ∆(r f ix s , L f , θ x s , ψ y s ) and depend on the feature geometry, the sensor rotation, and the FOV angles of the sensor. The generic characterization of the C 3 manifold can then be expressed accordingly (Equation 22). Recalling that the C 1 manifold is not symmetrical in all planes, and hence rotation variant, it must be assumed that the C 3 manifold cannot be correctly scaled using a single scaling vector. A more generic and flexible approach is thus to scale each k-th vertex of V C 3 individually with a generalized vector ∆ k (Equation 23). The explicit characterization of this scaling vector requires an individual and comprehensive trigonometric analysis of each k-th vertex of C 3 , which depends on the chosen sensor orientation. Moreover, since the scaling vector ∆ k also depends on the feature's geometrical properties, from now on we assume the generalization of the feature geometry introduced in Subsection III-D3 and characterize any feature by a square of length l f . This simplification contributes to a higher generalization of our models for different topologies and facilitates the comprehension of the trigonometric relationships introduced in the following subsections.
3) Characterization of the C-space with null rotation: The most straightforward scenario for quantifying the influence of the feature geometry on the constrained space is to first consider a null rotation, f r 0 s , of the sensor relative to the feature, i.e., the feature's plane is parallel to the xy-plane of the TCP and the rotation around the optical axis equals zero, f r 0 s (γ x s = β y s = α z s = 0). First, span the core constrained space, C 1 , considering the feature position and the null rotation of the sensor. Then, starting from one vertex of C 1 , let the I-space be translated in one direction until I s entirely encloses the whole feature. This step is exemplarily shown in the x-z plane in Figure 10 at the third vertex of C 1 and can be interpreted as an analogy to the Extreme Viewpoints Interpretation (cf. Subsection VI-A4) used to span C 1 . It is then easily understood that, to characterize C 3 , all vertices of C 1 must be shifted in the x and y directions by a factor of 0.5 · l f (Equation 24).
4) Rotation around one axis: Any other sensor orientation different from the null orientation requires an individual analysis of the exact trigonometric relationships for each vertex. To break down the complexity of this problem, this subsection first provides the geometrical relationships needed to characterize the constrained space C 3 considering an individual rotation around each axis. The characterization of the constrained spaces follows the same approach described in the previous subsection, which requires first the characterization of the base constraint, C 1 , and then the derivation of the scaling vectors.
• Rotation around the z-axis (α z s ≠ 0): Assuming a sensor rotation around the optical axis, f r z s (α z s ≠ 0, φ s (β y s , γ x s ) = 0) (see Figure 11), the C-space is scaled just along the vertical and horizontal axes using the scaling factors of Equation 25. The derivation of these trigonometric relationships can be better understood by looking at Figure 29.
Fig. 10: Characterization of the C-space C 3 considering a null rotation f r 0 s over the feature f : scale all vertices of C 1 in the x and y axes considering the feature geometric length of 0.5 · l f .
• Rotation around the x-axis or y-axis (γ x s ≠ 0 ⊻ β y s ≠ 0): A rotation of the sensor around the x-axis, r s (γ x s ≠ 0, α z s = β y s = 0), or the y-axis, r s (β y s ≠ 0, α z s = γ x s = 0), requires deriving individual trigonometric relationships for each vertex of C 3 . Besides the feature length, further parameters such as the FOV angles (θ x s , ψ y s ) and the direction of the rotation must be considered.
The scaling factors for the eight vertices of the C-space considering a rotation around the x-axis or y-axis can be found in Table VIII, regarding the general auxiliary lengths for f r s (α z s = γ x s = 0, β y s ≠ 0) and for f r s (α z s = β y s = 0, γ x s ≠ 0). The derivation of the trigonometric relationships can be better understood using an exemplary case. Thus, first assume a rotation of the sensor around the y-axis of β y s > 0 ∧ γ x s = 0, as illustrated in Figure 12. The trigonometric relationships can then be derived for each vertex following the Extreme Viewpoints Interpretation, as exemplarily depicted in Figure 30.
Fig. 11: Characterization of the vertices of the C-space, C 3 , considering a rotation around the optical axis f r z s (α z s ≠ 0, φ s (β y s , γ x s ) = 0).
Fig. 12: Characterization of the vertices of the C-space, C 3 , considering a rotation around the y-axis f r y s (γ x s = α z s = 0, β y s < 0).
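The null-rotation case of Equation 24 lends itself to a compact sketch: every vertex of C 1 is shifted toward the interior of the space by 0.5 · l f in x and y. The snippet below is a simplified illustration using a hypothetical box-shaped C 1 ; the per-vertex trigonometric factors of Table VIII for the rotated cases would replace the uniform shift:

```python
import numpy as np

def shrink_c1_null_rotation(c1_vertices, l_f):
    """Vertices of C3 from C1 under a null sensor rotation: shift each
    vertex toward the interior by 0.5 * l_f in x and y (Equation 24).
    The depth (z) direction is unaffected for a planar 2D feature."""
    v = np.asarray(c1_vertices, dtype=float)
    centroid = v.mean(axis=0)
    shifted = v.copy()
    for axis in (0, 1):  # x and y only
        shifted[:, axis] -= np.sign(v[:, axis] - centroid[axis]) * 0.5 * l_f
    return shifted

# Hypothetical C1: a box with x/y extents [-1, 1] and depth z in [1, 2].
c1 = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (1, 2)], float)
c3 = shrink_c1_null_rotation(c1, l_f=0.4)  # x/y extents shrink to [-0.8, 0.8]
```

Note that a real C 1 manifold is frustum-shaped rather than box-shaped; the box merely keeps the per-vertex shift easy to verify by hand.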

5) Generalization to 3D Features:
Although our approach primarily contemplates 2D features, the constrained space C 3 can be seamlessly extended to acquire 3D features considering a feature height h f ∈ L f . This paper considers only the characterization of the scaling vectors for concave (e.g., pocket, slot) and convex (e.g., cube, half-sphere) features with a null rotation of the sensor, f r 0 s . For instance, the back vertices (k = 1, 2, 5, 6) of C 3 for capturing a concave feature, as shown in Figure 13, must be scaled using a depth factor of ∆ z k = h f . The front vertices are scaled using the same factors as for a 2D feature, as given by Equation 24. For convex features, let all vertices be scaled with the factors given by Equation 26, except for the depth delta factor of the back vertices, which follows ∆ z k = 0. Note that the characterization of the C-space, C 3 , for 3D features just guarantees that the entire feature surface lies within the frustum space. We neglect any further visibility constraints that may influence the viewpoint's validity, such as the maximal view angles for the interior of a concave feature. Moreover, it should be noted that the scaling factors given within this subsection hold just for a null rotation. The scaling factors for other sensor orientations can be derived by extending the relationships previously introduced for 2D features in Subsection VII-A4.
6) Verification: The verification of the geometrical relationships introduced within this subsection was performed based on a simulated scenario.
Fig. 13: Characterization of the vertices of the C-space, C 3 , considering a 3D feature and null rotation f r 0 s .
Figure 14 visualizes the scene comprising the C-spaces for acquiring f 1 and f * 1 . All C 3,1−3 manifolds were computed by first scaling the manifold of the frustum space with the scaling factors addressed within the past subsections, and then reflecting and transforming the manifold with the corresponding sensor orientation (for C 3,1 see Table VIII). To verify the geometrical relationships introduced within this subsection, a virtual camera using the trimesh library [58] and the imaging parameters of s 1 was created. Then, the depth images and their corresponding point clouds at eight extreme viewpoints, i.e., the manifold vertices, were rendered to verify that the features could be acquired from each viewpoint. The images and point clouds of all extreme viewpoints confirm that the features lie at the border of the frustum space and can be entirely captured. Figures 15a-15c demonstrate this empirically and show the depth images and point clouds at the selected extreme viewpoints ( f1 p s,1 , f1 p s,2 , f * 1 p s,3 ) from Figure 14. Our approach provides an analytical and straightforward solution for efficiently characterizing the C-space limited by the frustum space and feature geometry. Since the delta factors can be applied directly to the vertices of the frustum space, the computational cost is comparable to that of computing C 1 .
7) Summary and Discussion: This subsection extended the formulation of the core C-space C 1 (see Section VI) to a C-space C 3 taking into account the feature's geometric dimensions. Using an exhaustive trigonometric analysis, our study introduced the exact relationships to characterize the vertices of the required multi-dimensional manifold considering a generalized model of the feature geometry, the sensor's orientation, and its FOV angles. Our findings show that the C-space constrained by the feature geometry can be computed with high efficiency using an analytical approach.
The trigonometric relationships introduced in this section are sufficient to characterize the C-space manifold taking into account the rotation of the sensor around one axis. Characterizing the explicit relationships for a simultaneous rotation of the sensor around all axes is beyond the scope of this paper. However, the trigonometric relationships and general approach presented in this subsection can serve as the basis for their derivation.

B. Constrained Spaces using Scaling Vectors
If a viewpoint constraint can be formulated using scaling vectors, as suggested for the feature geometry in Subsection VII-A, then the same approach can be equally applied to characterize the constrained space of different viewpoint constraints.This subsection introduces a generic formulation for integrating such viewpoint constraints and proposes characterizing kinematic errors and the sensor accuracy following this approach.
1) Generic Formulation: If the influence of an i-th viewpoint constraint c i ∈ C can be characterized by a scaling vector ∆(c i ) to span its corresponding constrained space, C i , then its vertices V C i can be scaled using a generalized formulation of Equation 22 (Equation 27).
Integrating Multiple Constraints: The characterization of a joint constrained space, which integrates several viewpoint constraints, can be computed using different approaches. On the one hand, the constrained spaces can first be computed and intersected iteratively using CSG operations, as originally proposed in Equation 12. However, if the space spanned by such viewpoint constraints can be formulated according to Equation 27, the constrained space C can be more efficiently calculated by simply adding all scaling vectors (Equation 28). While the computational cost of CSG operations is at least proportional to the number of vertices of the two surface models involved, the complexity of the sum of Equation 28 is just proportional to the number of viewpoint constraints.
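The efficiency argument behind Equation 28 can be illustrated directly: when each constraint contributes a per-vertex scaling vector, the joint C-space follows from one vector sum instead of repeated CSG intersections. The sketch below uses hypothetical base vertices and uniform inward deltas standing in for a kinematic error and a sensor-accuracy margin:

```python
import numpy as np

def joint_cspace_vertices(v_c1, deltas):
    """Joint C-space vertices in the spirit of Equation 28: add the scaling
    vectors of all compatible constraints to the base vertices of C1.

    v_c1:   (n, 3) base vertices
    deltas: iterable of (n, 3) per-constraint scaling vectors
    Cost is O(n_constraints * n_vertices); no CSG operations are needed."""
    v = np.asarray(v_c1, dtype=float)
    return v + sum(np.asarray(d, dtype=float) for d in deltas)

# Hypothetical 8-vertex base space (units: mm) and two uniform inward
# margins: a 1 mm kinematic error and a 2 mm sensor-accuracy margin.
v_c1 = np.array([[x, y, z] for x in (-50, 50) for y in (-50, 50)
                 for z in (100, 200)], float)
inward = -np.sign(v_c1 - v_c1.mean(axis=0))  # unit direction toward interior
v_joint = joint_cspace_vertices(v_c1, [1.0 * inward, 2.0 * inward])
```

Each vertex ends up 3 mm closer to the interior along every axis, i.e., the joint space is the base space uniformly shrunk by the summed margins.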
2) Compatible Constraints: Within this subsection, we propose further possible viewpoint constraints that can be characterized according to the scaling formulation introduced by Equation 28.
• Kinematic errors: Considering the fourth viewpoint constraint and the assumptions addressed in Subsection III-H, the maximal kinematic error ϵ is given by the sum of the alignment error ϵ e , the modeling error of the sensor imaging parameters ϵ s , and the absolute position accuracy of the robot ϵ r . Assuming that the total kinematic error has the same magnitude in all directions, all vertices can be equally scaled, and the vertices of the C-space C 4 (ϵ) are computed using the scaling vector ∆(ϵ).
• Sensor accuracy: If the accuracy of the sensor a s can be quantified within the sensor frustum, then, similarly to the kinematic error, the manifold of the C-space C 5 (a s ) can be characterized using a scaling vector ∆(a s ). Figure 31 visualizes an exemplary and more complex scenario comprising an individual scaling of each vertex. This example shows the high flexibility and adaptability of this approach for synthesizing C-spaces for different viewpoint constraints according to the particular necessities of the application under consideration.
3) Summary: The use of scaling vectors is an efficient and flexible approach to characterize the C-space spanned by a viewpoint constraint. Within this section, we considered a few viewpoint constraints that can be modeled according to this formulation. However, it should be noted that this approach requires an explicit characterization of the individual scaling vectors, and the overall characterization may be limited by the number of vertices of the base C-space C 1 (I s ). For instance, our model of the I-space considers just eight vertices; thus, any viewpoint constraint that requires a more complex geometrical representation could be limited by this.
Fig. 15: Rendered scenes at the extreme viewpoints f1 p s,1 , f1 p s,2 , and f * 1 p s,3 for verifying that the whole feature geometry lies entirely within the corresponding frustum spaces. Each figure displays the resulting frustum space (manifold with green edges), the corresponding rendered point cloud (green points), and the depth image (2D image in color map) at each extreme viewpoint.
Moreover, note that if the regarded viewpoint constraint can be explicitly characterized as a manifold in SE(3), this C-space can be directly intersected with the rest of the constraints as originally suggested by Equation 12; such an example is given for characterizing the robot workspace (see Subsection VII-F1). However, since such an approach is generally computationally more expensive, this study recommends prioritizing the characterization of viewpoint constraints using scaling vectors whenever possible.

C. Occlusion Space
We consider the occlusion-free view of a feature (related terms: shadow or bistatic effect) a non-negligible requirement that must be individually assessed for each viewpoint. In the context of our framework, this subsection outlines the formulation of a negative topological space, the occlusion space C occl 6 , to ensure the free visibility of the feature from each viewpoint within the C-space. Although other authors [15,18,26] have already suggested the characterization of such spaces, the present study proposes a new formulation aligned to our framework. Our approach strives for an efficient characterization of C occl 6 using simplifications of the feature's geometry and the occlusion bodies.
1) Formulation: If a feature f is not visible from at least one sensor pose within the C-space, it can be assumed that at least one rigid body of the RVS is impeding its visibility. Thus, an occlusion space for f , denoted as C occl 6 ⊂ SE(3), exists, and a valid sensor pose cannot be an element of it: p s ∉ C occl 6 . However, it is well known that the characterization of such occlusion spaces can generally be computationally expensive. Therefore, it seems inefficient to formulate C occl 6 in the special Euclidean group for all possible sensor orientations. For this reason, and by exploiting the available C-space spanned by other viewpoint constraints, C occl 6 can be formulated based on the previously generated C-space, a given sensor orientation r f ix s , and the surface models of the rigid bodies κ ∈ K. Contrary to all other viewpoint constraint formulations (see Equation 11), it must be assumed that the occlusion space is not a subset of the C-space, C occl 6 ⊈ C . Hence, let Equation 12 be reformulated as the set difference of the C-spaces of other viewpoint constraints and the occlusion space (Equation 30). If the resulting constrained space is a non-empty set, C ≠ ∅, there exists at least one valid sensor pose for the selected sensor orientation with occlusion-free visibility of the feature f .
2) Characterization: The present work proposes a strategy to compute C occl 6 , which is broken down into the following general steps. In the first step, the smallest possible number of view rays is computed for detecting potential occlusions. In the second step, by means of ray-casting techniques 5 , the view rays are tested for occlusion against all rigid bodies of the RVS. Then, the occlusion space is characterized using a simple surface reconstruction method based on the colliding points at the rigid bodies and some further auxiliary points. In the last step, the occlusion space is integrated with the C-space spanned by other viewpoint constraints as given by Equation 30.
Algorithm 3 describes more comprehensively all steps to characterize the occlusion space C occl 6 , and Figure 16 provides an overview of the workflow and a visualization of the expected results of the most significant steps considering an exemplary occluding object κ 1 . A more comprehensive description of the characterization of the view rays using the C-space spanned by other viewpoint constraints can be found in the Appendix.
3) Verification: For verification purposes, we consider an academic example, which comprises an icosahedron occluding the sight of feature f 1 . The dimensions and location of the occluding object κ 1 are described in Table VII. Figure 32 displays the related scene and the manifolds of the computed occlusion space and the corresponding occlusion-free space. In the first step, the C-space for feature f 1 considering its geometry, i.e., C 3 , and the sensor orientation f1 r s (α z s = γ x s = 0, β y s = 15 ° ) was characterized. In the second step, the manifold of the occlusion space, C occl 6 , was synthesized following the steps described in Algorithm 3. A discretization step of d ς = 0.5 ° was selected for computing the view rays ς g f,c .
Figure 17 shows the rendered point cloud and range image at one extreme viewpoint within the resulting occlusion-free space. The rendered point cloud and image confirm that, although the occluding body lies within the frustum space of the viewpoint, the feature and its entire geometry can still be completely captured. As expected, the computation of the collision points using ray casting was the most computationally expensive step, with a total time of t(q occl,κ f,c ) = 0.7 s; the total time required for characterizing the occlusion-free space corresponded to t(C occl,κ 6 ) = 1.32 s.
5 For the interested reader, we refer to Roth [62] and Glassner [63] for a comprehensive overview of ray-casting techniques.
Algorithm 3: Characterization of the occlusion space C occl 6
1) Compute a set of view rays ς g f,c (m, n) ∈ Σ for each surface point g f,c using a set of direction vectors σ m,n . The direction vectors span an m × n grid of equidistant rays with a discretization step size d ς ; the aperture angles of the view rays correspond to the maximal aperture of a previously characterized C-space C .
2) Test all view rays, ∀ ς g f,c (m, n) ∈ Σ, for occlusion against each rigid body κ ∈ K using ray casting, and let the collision points at the rigid bodies be denoted as q occl,κ f ∈ Σ occl,κ .
3) Shoot an occlusion ray, ς occl,κ g f,c , from each surface point g f,c to all occluding points of the set ∀ q occl,κ f ∈ Σ occl,κ .
4) Select an additional point, * q occl,κ f , on each occlusion ray lying beyond the constrained space.
5) Compute the convex hull of all colliding points and additional points; the convex hull corresponds to the manifold of C occl,κ 6 .
6) Compute the occlusion space, C occl,κ 6 , for all rigid bodies, ∀ κ ∈ K, repeating Steps 2 to 5.
7) The occlusion space for all rigid bodies corresponds to the CSG Boolean union of all individual occluding spaces.
8) The occlusion space is integrated with the C-space spanned by other viewpoint constraints using a CSG Boolean difference operation.
4) Summary and Discussion: Within this subsection, a strategy that combines ray-casting and CSG Boolean techniques to compute an occlusion space C occl 6 was introduced. The present study showed that the C occl 6 manifold can be thoroughly integrated with the C-space spanned by other viewpoint constraints, complying with the framework proposed within this publication. Moreover, to enhance the efficiency of the proposed strategy, we considered a simplification of the feature geometry, a discretization of the viewpoint space, and the use of ray casting.
Fig. 16: Overview of Algorithm 3. Step 1: The view rays (ς g f,c ) are computed for each c surface point using the limits of the C-space C 3 .
Step 2: The occluding points (q occl,κ f ) of a rigid body are found using ray casting.
Step 3: The occlusion rays (ς occl,κ g f,c ) are computed between each c surface point and all occlusion points (q occl,κ f ).
Step 4: Select an additional point ( * q occl,κ f ) at each occlusion ray beyond the constrained space.
Step 5: The occlusion space is characterized using the convex hull spanned by all colliding points and the additional points lying at the occlusion rays.
Due to the mentioned simplifications, it should be kept in mind that, contrary to the other C-spaces, the C occl 6 manifold does not represent the explicit, real occlusion space and should be treated as an approximation of it. For example, a significant source of error regarding the accurate identification of all occluding points is the chosen discretization step size for the computation of the view rays. This effect is well known within ray-casting applications and can also be observed in Figure 16a, where the right corner point of the colliding body is missed. Within the context of the present study, comprising robot systems, we assume that such simplifications can be safely made if the absolute position accuracy of the robot is considered for the selection of the step size d ς . For example, assuming a conservative absolute accuracy of 1 mm and a minimum working distance of 200 mm, the arc length formula yields 1 mm / 200 mm = 0.005 rad, so it is reasonable to choose a step size of d ς ≈ 0.3 ° . Alternatively, more robust solutions can be achieved by scaling the occlusion space with a safety factor. Moreover, special attention must be given when computing the occlusion space as suggested in Step 5 of Algorithm 3 for rigid bodies with hollow cavities, e.g., a torus; in such cases, the resulting occlusion space will be more conservative. A more precise characterization of the occlusion space falls outside the scope of this paper and could be achieved using more sophisticated surface reconstruction algorithms [64,65].
Finally, it is important to mention that the characterization of the occlusion space may lead to a non-watertight manifold, which may complicate the further processing of the joint C-space. Thus, we recommend computing the occlusion space as the last viewpoint constraint.
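The ray-occlusion test at the heart of Algorithm 3 (Step 2) can be sketched without any geometry library. The snippet below implements a standard ray-triangle intersection (Möller-Trumbore) as a stand-in for the ray casting against the rigid bodies, with a hypothetical occluding triangle placed between a feature point and the C-space:

```python
import numpy as np

def ray_hits_triangle(origin, direction, tri, eps=1e-9):
    """Moller-Trumbore ray/triangle test; returns the hit point or None."""
    v0, v1, v2 = (np.asarray(p, float) for p in tri)
    d = np.asarray(direction, float)
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = e1 @ p
    if abs(det) < eps:            # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = np.asarray(origin, float) - v0
    u = (s @ p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = (d @ q) * inv
    if v < 0 or u + v > 1:
        return None
    t = (e2 @ q) * inv            # distance along the ray to the hit
    return None if t < eps else np.asarray(origin, float) + t * d

# A feature surface point looking along +z; the occluding triangle sits at z = 1.
tri = [(-1, -1, 1.0), (1, -1, 1.0), (0, 2, 1.0)]
hit = ray_hits_triangle((0, 0, 0), (0, 0, 1), tri)   # occluded view ray
miss = ray_hits_triangle((5, 0, 0), (0, 0, 1), tri)  # unobstructed view ray
```

In the full algorithm, the returned hit points would play the role of the occluding points q occl,κ f from which the convex hull of the occlusion space is spanned; a production system would use an accelerated ray caster such as the one in trimesh instead.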

D. Multisensor
Considering the intrinsic nature of a range sensor, in its minimal configuration, two imaging devices (two cameras, or one camera and one active projector) are necessary to acquire a range image. Therefore, a range sensor can be regarded as a multisensor system. Up to this point, it had been assumed that the C-space of a range sensor could be characterized using just one frustum space I s and that the resulting C-space integrates the imaging parameters and any other viewpoint constraints of all imaging devices of the range sensor. On the one hand, some of the previously introduced formulations and most of the related work demonstrate that this simplification is in many cases sufficient for computing valid viewpoints. On the other hand, this assumption could also be regarded as restrictive and invalid for some viewpoint constraints. For example, the characterization of the occlusion-free space as described in Subsection VII-C will not guarantee a free sight to both imaging devices of a range sensor.
For this reason, our study assumes that each imaging device can have individual and independent viewpoint constraints. As a result, we consider that an individual C-space can be spanned for each imaging device. Furthermore, this section outlines a generic strategy to merge the individual C-spaces of multiple imaging devices into a common C-space that satisfies all viewpoint constraints simultaneously.
To the best of our knowledge, none of the related work regarded viewpoint constraints for the individual imaging devices of a range sensor or for multisensor systems. In Section VIII, the scalability and generality of our approach considering a multisensor system is demonstrated with two different range sensors.
1) Formulation: Our formulation is based on the idea that each imaging device can be modeled independently and that all devices must simultaneously fulfill all viewpoint constraints. First, considering the most straightforward configuration of a sensor with a set of two imaging devices {s 1 , s 2 } ∈ S, we can assume to have two different sensors with two different frustum spaces, I s1 = I 0 ( s1 p s , I s1 ) and I s2 = I 0 ( s2 p s , I s2 ) (cf. Subsection III-E2). Thus, aligned to the formulation from Subsection VI-A, a core C-space can be expressed for each sensor; in a more generic definition that considers all viewpoint constraints of s 1 or s 2 , these C-spaces are denoted as C s1 and C s2 . Considering that the rigid transformation s1 s2 T s between the two imaging devices of a sensor is known, it can be assumed that a valid sensor pose for the first imaging device exists if and only if the second imaging device simultaneously lies within its corresponding C-space. The formulation of this condition (Equation 31) assumes that the reference positioning frame of the sensor is the first imaging device s 1 .
If the condition from Equation 31 is valid, there must exist a C-space for s 1 that integrates the viewpoint constraints of both imaging devices, denoted as C S1 . The joint space can then be integrated with the C-space for s 1 by means of a Boolean intersection. A more generic formulation of the C-space of s 1 being constrained by all imaging devices s t ∈ S follows analogously. Likewise, C S2 denotes the space for positioning imaging device s 2 including the constraints of s 1 .
Algorithm 4: Characterization of the C-space C S1 to integrate the viewpoint constraints of a second imaging device s 2 .
1) Compute the C-space of the first device, C s1 , considering a fixed orientation s1 r f ix s and any further viewpoint constraints.
2) Compute the C-space for the second imaging device, C s2 , taking into account any viewpoint constraints and the previously defined orientation of the first imaging device, using the rigid orientation between both devices.
3) Compute the sensor pose that the first device assumes when computing C s2 , using the rigid transformation between both devices.
4) Duplicate the manifold of C s2 and translate it to the position vector of s1 p s ; the translated manifold corresponds to C * s2 .
5) Intersect the translated manifold with the C-space of the first imaging device using a Boolean intersection operation.
2) Characterization: The C-space C S1 , which comprises the constraints of all imaging devices s t ∈ S, can be straightforwardly characterized following the five simple steps given in Algorithm 4. Figure 18 visualizes the interim manifolds at each step to ultimately characterize the manifold C S1 . Finally, Figure 18f shows an exemplary extreme viewpoint where both imaging device frames lie within their respective C-spaces, s1 p s ∈ C S1 and s2 p s ∈ C S2 , and the feature geometry lies within both frustum spaces.
Fig. 18: Overview of the computation steps of Algorithm 4 to characterize the C-space C S1 that integrates the viewpoint constraints of the two imaging devices (s 1 , s 2 ) of a range sensor. (e) The intersection with C s2 yields the space C S1 for sensor s 1 , which integrates all viewpoint constraints of s 2 . (f) Repeat Steps 1-5 for the second imaging device, or duplicate and translate the manifold C S1 to compute C S2 .
The space C S2 for the second imaging device can be computed analogously following the same steps. However, the topology of the resulting C S2 will be identical to C S1 . Hence, instead of repeating the steps described in Algorithm 4 for the second imaging device, a more efficient alternative is to translate the manifold of C S1 to the position of s 2 at s2 p s ( s2 t s = B f ) using the rigid translation between both devices. Note that if both imaging devices have the same orientation, the resulting C-spaces are identical, hence C S1 = C S2 .
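Once the C-space manifolds are available, the simultaneity condition of Equation 31 reduces to two point-membership tests: a pose is valid for the device pair iff the position of s 1 lies in C s1 and the rigidly transformed position of s 2 lies in C s2 . The sketch below assumes convex C-space hulls, a pure-translation baseline between the devices, and hypothetical unit-box manifolds:

```python
import numpy as np
from scipy.spatial import ConvexHull

def inside_hull(point, hull, tol=1e-9):
    """True if the point satisfies every facet halfspace A x + b <= 0."""
    A, b = hull.equations[:, :-1], hull.equations[:, -1]
    return bool(np.all(A @ np.asarray(point, float) + b <= tol))

def pair_pose_valid(p_s1, baseline, hull_s1, hull_s2):
    """Eq. 31 as membership tests: both devices must lie in their C-space."""
    p_s2 = np.asarray(p_s1, float) + np.asarray(baseline, float)
    return inside_hull(p_s1, hull_s1) and inside_hull(p_s2, hull_s2)

# Hypothetical convex C-spaces: unit boxes, the second offset by 0.5 along x,
# and a 0.2 m baseline between the two imaging devices.
box = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
hull_s1 = ConvexHull(box)
hull_s2 = ConvexHull(box + [0.5, 0.0, 0.0])
```

With these stand-ins, `pair_pose_valid((0.5, 0.5, 0.5), (0.2, 0, 0), hull_s1, hull_s2)` holds, while a pose near the left face of C s1 fails because the translated device falls outside C s2 .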
3) Verification: The joint constrained spaces C S1 and C S2 for s 1 and s 2 were computed according to the steps provided in Algorithm 4 for acquiring feature f 1 . The imaging parameters of both sensors and the rigid transformation between them are given in Table II.
First, the C-spaces of both sensors were spanned considering their imaging parameters and a null orientation of the first sensor, i.e., f r s,1 (α z s = β y s = γ x s = 0 ° ). The individual constrained spaces were computed for each sensor considering the constrained space affected by the feature geometry, i.e., C s1 3 and C s2 3 . Since the frustum space of the second sensor always lies within the first one, we additionally considered a fictitious accuracy constraint for the depth of the second sensor, C s2 5 (a s2 (z min = 500 mm, z max = 700 mm)), to limit its working distance. Figure 19 visualizes the described scene and the resulting manifolds of the constrained spaces. Figure 20 displays the frustum spaces, rendered depth images of both sensors, and the resulting point cloud at an extreme viewpoint. The rendered images provide a visual verification of our approach, demonstrating that f 1 is visible from both sensors. Note that the second device represents an active structured-light projector (see Table II). The total computation time of the constrained space was estimated at 200 ms; it depended mainly on the intersection from Step 5, which accounted for 100 ms. These computation results just apply to this case; such analyses are difficult to generalize, since the complexity of Boolean operations depends decisively on the number of vertices of the involved manifolds.
4) Summary: This subsection introduced the formulation and characterization of a C-space that considers the intrinsic configuration of range sensors comprising at least two imaging devices. Our approach enables the combination of individual viewpoint constraints for each device and the characterization of a C-space that simultaneously fulfills all viewpoint constraints of all imaging devices.
The strategy proposed in Algorithm 4 proved valid and efficient for characterizing the manifolds of such a C-space. Nevertheless, we do not dismiss alternative approaches for its characterization. For instance, if the frustum spaces are intersected first, the resulting frustum space can be used as the base for spanning the rest of the constraints. However, we consider the steps proposed within this subsection more traceable, modular, and extendable to further constraints and multisensor systems, or even transferable to similar problems. For example, a variation of the algorithm could be applied for maximizing or guaranteeing the registration space between two different viewpoints, which represents a fundamental challenge within many vision applications [66].

E. Robot Workspace
This section outlines the formulation of the robot workspace as a further viewpoint constraint to be seamlessly and consistently integrated with the other C-spaces.
1) Formulation: A viewpoint can be considered valid if the sensor pose is reachable by the robot, hence if it lies within the robot workspace, p s ∈ W r . The constraint can then be straightforwardly formulated by identifying the constrained space with the workspace manifold, i.e., C 8 := W r .

2) Characterization and Verification: In our work, we assume that the robot workspace is known and can be characterized by a manifold in the special Euclidean space. Under this assumption, the constrained space C 8 can be seamlessly intersected with the rest of the viewpoint constraints. Figure 21 shows an exemplary scene for acquiring feature f 1 and the resulting constrained manifold C 3 ∩ C 8 , which considers a robot with a half-sphere workspace with a working distance of 1000 mm-1800 mm and the C-space manifold C 3 spanned by the feature geometry.
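As a minimal illustration of the reachability condition p s ∈ W r , the following sketch approximates the workspace from the Figure 21 example as the upper half of a spherical shell (working distance 1000 mm-1800 mm) and tests candidate sensor positions against it; the base position and the test poses are assumptions for the example.

```python
# Sketch of the workspace constraint C_8: W_r is approximated as the
# upper half of a spherical shell around the robot base, and a sensor
# position is valid only if it lies inside that shell (units: mm).
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]

def in_workspace(p_s: Vec3, base: Vec3 = (0.0, 0.0, 0.0),
                 d_min: float = 1000.0, d_max: float = 1800.0) -> bool:
    """Check p_s ∈ W_r for a half-sphere shell workspace."""
    dx, dy, dz = (p_s[i] - base[i] for i in range(3))
    if dz < 0.0:            # only the upper half-sphere is reachable
        return False
    d = math.sqrt(dx * dx + dy * dy + dz * dz)
    return d_min <= d <= d_max

print(in_workspace((0.0, 0.0, 1500.0)))  # within the shell
print(in_workspace((0.0, 0.0, 500.0)))   # closer than the minimum reach
```

In the framework itself, W r is handled as a manifold and intersected with the other C-spaces via CSG; the predicate above only conveys the membership condition.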
3) Discussion: A more comprehensive formulation and characterization of the robot workspace that considers singularities requires more detailed modeling of the robot kinematics. Furthermore, our study has not considered the explicit characterization of the collision-free space of a robot, which has been the focus of exhaustive research over the last three decades. We assume that an explicit collision check must be performed in a last step for a selected sensor pose within the C-space. Nevertheless, we consider that our approach contributes to a substantial simplification of the problem by delimiting the search space, so that collision-free robot joint configurations can be computed more efficiently.

F. Multi-Feature Spaces
Up to this point, our work has outlined the formulation and characterization of C-spaces to acquire just one feature. Within this subsection, we briefly outline the characterization of a C-space, F C , which allows capturing a set of features F while simultaneously fulfilling all viewpoint constraints of all features f m ∈ F with m = 1, . . ., n.
1) Characterization: The characterization of F C can be seamlessly achieved according to the two steps described in Algorithm 5. In the first step, the C-spaces of all n features, fm C , are characterized considering a fixed sensor orientation r f ix s and the individual constraints C(f m ). Then, the constrained space F C is synthesized by intersecting all individual constrained spaces. Figure 22 shows the characterization of such a space for the acquisition of two features.
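The two steps of Algorithm 5 can be sketched under a strong simplification in which every per-feature C-space fm C is reduced to an axis-aligned bounding box, so that the CSG Boolean intersection becomes an interval intersection per axis; the box values are made up for illustration.

```python
# Sketch of Algorithm 5 with per-feature C-spaces reduced to
# axis-aligned boxes (min corner, max corner). The joint space F_C is
# then the intersection of all boxes; None means no common viewpoint.
from typing import List, Optional, Tuple

Box = Tuple[Tuple[float, float, float], Tuple[float, float, float]]

def intersect_boxes(spaces: List[Box]) -> Optional[Box]:
    """Intersect all per-feature C-spaces (Step 2 of Algorithm 5)."""
    lo = [max(b[0][i] for b in spaces) for i in range(3)]
    hi = [min(b[1][i] for b in spaces) for i in range(3)]
    if any(lo[i] > hi[i] for i in range(3)):
        return None
    return (tuple(lo), tuple(hi))

# Step 1 (assumed result): boxes standing in for f1_C and f2_C.
f1_C = ((-100.0, -100.0, 500.0), (100.0, 100.0, 900.0))
f2_C = ((0.0, -50.0, 600.0), (200.0, 150.0, 1000.0))

F_C = intersect_boxes([f1_C, f2_C])
print(F_C)  # ((0.0, -50.0, 600.0), (100.0, 100.0, 900.0))
```

Real C-space manifolds are general polyhedra, so the framework relies on CSG Boolean intersection instead; the box version only conveys the combination logic.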
2) Verification: To verify the proposed characterization of a constrained space for acquiring multiple features, we outlined an exemplary use case comprising two features {f 1 , f 2 } ∈ F (see Table VII). We computed the space, F C , according to the steps of Algorithm 5, considering an orientation of f1 r s (α z s = β y s = 0°, γ x s = −10°) relative to the feature f 1 . Figure 33 shows the described scene and visualizes the resulting joint space. The rendered scene and range images of Figure 23 confirm the validity of F C , demonstrating that both features can be simultaneously acquired at two extreme viewpoints within this space.
3) Summary and Discussion: Within this subsection, we demonstrated that C-spaces of different features can be seamlessly combined to span a topological space that guarantees the acquisition of these features while simultaneously satisfying the individual viewpoint constraints of each feature.
The current study assumes that the sensor orientation can be arbitrarily chosen and that the features can be acquired jointly by the sensor. In most applications, such assumptions cannot always be met, and the following fundamental questions arise: which features can be acquired simultaneously, and which sensor orientation is adequate? These questions fall outside the scope of this paper and motivate our ongoing research, which addresses the efficient combination of C-spaces to tackle the superordinate VPP.

G. Constraints Integration Strategy
The integration of viewpoint constraints can be considered commutative, i.e., the order in which the constraints are computed and integrated does not affect the characterization of the final constrained space. However, due to the diverse computation techniques that our framework considers, a well-thought-out strategy may contribute to increasing the computational efficiency of the overall process. In this publication, we outline one possible and simple strategy, described in Algorithm 6, to integrate all viewpoint constraints into a single C-space.
The optimal integration of constraints falls outside the scope of this publication. Moreover, we consider that an optimal and efficient strategy can only be tailored by considering the individual application and its specific constraints.
Algorithm 6 Strategy for the integration of viewpoint constraints.
1) Consider a fixed sensor orientation s1 r f ix s for the reference imaging device s 1 .
2) Compute the C-space manifolds fm C st 1−6 ( st r s ( s1 r f ix s )) of imaging device s t for each feature ∀f m ∈ F, considering the sensor orientation of the first device s1 r f ix s and the viewpoint constraints 1-6.
3) Compute the C-space of all features for sensor s t by intersecting the per-feature spaces: F C st = f1 C st ∩ · · · ∩ fn C st .
4) Repeat Steps 1-3 for all imaging devices ∀s t ∈ S.
5) Compute the C-space for all u imaging devices, e.g., for s 1 : F C S1 = F C s1 ∩ · · · ∩ F C su .
6) Intersect the robot workspace to obtain the final C-space, e.g., for s 1 : F C S1 ∩ C 8 .
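Under the same box simplification used earlier, the overall strategy of Algorithm 6 reads as nested intersections: per device over all features, then over all devices, and finally with the robot workspace C 8. Device and feature names and all coordinates are illustrative assumptions; in the actual framework each "intersect" is a CSG Boolean operation on manifolds.

```python
# Sketch of Algorithm 6 with every constrained space reduced to an
# axis-aligned box (min corner, max corner), so intersections become
# per-axis interval intersections.
from typing import Dict, List, Optional, Tuple

Box = Tuple[Tuple[float, float, float], Tuple[float, float, float]]

def intersect(spaces: List[Box]) -> Optional[Box]:
    lo = tuple(max(b[0][i] for b in spaces) for i in range(3))
    hi = tuple(min(b[1][i] for b in spaces) for i in range(3))
    return None if any(l > h for l, h in zip(lo, hi)) else (lo, hi)

# Steps 2-3: per-device C-spaces over all features (constraints 1-6
# folded into one box per device and feature for brevity).
c_spaces: Dict[str, Dict[str, Box]] = {
    "s1": {"f1": ((-100, -100, 500), (100, 100, 900)),
           "f2": ((-50, -80, 550), (150, 120, 950))},
    "s2": {"f1": ((-80, -90, 450), (120, 110, 850)),
           "f2": ((-60, -70, 500), (140, 130, 900))},
}
per_device = {s: intersect(list(feats.values()))
              for s, feats in c_spaces.items()}

# Step 5: joint C-space of all imaging devices.
joint = intersect([b for b in per_device.values() if b is not None])

# Step 6: intersect with the robot workspace C_8 (also a box here).
workspace = ((-500, -500, 0), (500, 500, 800))
final_c_space = intersect([joint, workspace]) if joint else None
print(final_c_space)
```

Because the intersections are commutative, the grouping chosen here is only one of several equivalent orderings; the strategy above merely mirrors the step order of Algorithm 6.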

VIII. EVALUATION
Within this section, a comprehensive evaluation of the constraint formulations and their integration is undertaken. First, Subsection VIII-A verifies the formulations of all viewpoint constraints regarded in our work based on an academic example. In Subsection VIII-B, the framework's generality and applicability are evaluated within an industrial RVS comprising two different sensors.

A. Academic Simulation-Based Analysis
This subsection presents a simple but thorough academic use case that considers all introduced viewpoint constraints to perform an exhaustive evaluation of the presented formulations. As stated in the introduction, with this scenario we aim to provide a first draft of a much-needed benchmark that other researchers can use as a basis for further development, reproducibility, and comparison. The surface models, the resulting manifolds of the computed C-spaces, and the frustum spaces can be found in the supplementary material of our publication.
1) Use-Case Description: The exemplary case regards an RVS with two sensors and an object of interest containing three features of different sizes and geometries. Table III gives a detailed overview of the considered constraints. Apart from the imaging parameters of both range sensors, all other parameters can be assumed to be fictitious though realistic. The kinematic and imaging models correspond to the real RVS depicted in Figure 26.
2) Results: Following the strategy described in Algorithm 6, the joint C-spaces of the four imaging devices, i.e., F C S1 , F C S2 , F C S3 , and F C S4 , were computed for acquiring all features considering all viewpoint constraints from Table III. Figure 24 shows the complexity of the described case comprising three features and some of the resulting C-spaces. The blue manifold F C S1 represents the final constrained space of the first imaging device s 1 . It can be appreciated that F C S1 is characterized by the intersection of all other C-spaces. Moreover, Figure 24 shows that the F C S1 manifold is mainly constrained by the C-space corresponding to the second range sensor, i.e., F C s1,s3,s4 7 .
To verify the validity of the computed C-spaces, the depth images and point clouds of all imaging devices at eight extreme viewpoints were rendered. Figure 25 shows the corresponding rendered scene and resulting depth images for each imaging device at one extreme viewpoint s1 p s ∈ F C S1 . The depth images demonstrate that all imaging devices can successfully acquire all features simultaneously and without occlusion.
The total computation time for characterizing all C-spaces corresponded to t( F C S1 , F C S2 , F C S3 , F C S4 ) ≈ 50 s. However, this time comprises other computation steps (e.g., frame transformations and inverse kinematics operations using ROS services), which distort the effective computation time of the C-spaces. A proper analysis of the computational efficiency of the whole strategy remains to be investigated.
3) Summary: Despite the complexity of the use case, the framework (models, methods, and integration strategy) presented within this paper demonstrated its effectiveness in characterizing a continuous topological space in the special Euclidean space, in which all defined viewpoint constraints could be fulfilled. The simulated depth images and point clouds confirmed the validity of all selected viewpoints within the characterized C-space.

B. Real Experimental Analysis
To assess the usability and validity of our framework within real applications, the framework presented in this study was utilized to automatically generate valid viewpoints for capturing different features of a car door using a real RVS, i.e., the AIBox from ZEISS. The AIBox is an industrial measurement cell used to automate different vision-based quality inspection tasks such as dimensional metrology and digitization, among others.
1) System Description: The AIBox is an integrated industrial RVS equipped with a structured light sensor (ZEISS COMET PRO AE), a six-axis industrial robot (Fanuc M-20ia), and a rotary table for mounting an inspection object. Moreover, to evaluate the use of our approach with a multisensor system, we additionally attached a stereo sensor (rc_visard 65, Roboception) to the structured light sensor. The imaging parameters of both sensors are given in Table II. Figure 26 provides an overview of the reconfigured AIBox with the stereo sensor. We assume that the inspection object is roughly aligned; e.g., in Magaña et al. [67] we presented a CNN fully trained on synthetic data to automate this task using the RVS sensor.
2) Vision Task Definition: The validation of our framework was performed on the basis of two vision tasks considering diverse viewpoint constraints. For the first task, we considered just the ZEISS sensor to acquire the features {f 1 , f 2 } ∈ F 1 , which lie on the outside of the door and can potentially be occluded by the door fixture. For the second task, we considered both sensors and the acquisition of the features {f 3 , f 4 , f 5 } ∈ F 2 on the inside of the door. The incidence angle for the first case corresponded to a sensor orientation of o r s2 (α z s = γ x s = 0°, β y s = −15°) and for the second of o r s1 (α z s = β y s = γ x s = 0°). To compensate for any kinematic modeling uncertainties, we considered an overall kinematic error of ϵ x,y,z s1 = (70.0, 70.0, 50.0) mm for s 1 and of ϵ x,y,z s3,4 = (30.0, 30.0, 30.0) mm for s 3 and s 4 .

3) Results: For both vision tasks, we computed the necessary C-spaces following the strategy presented in Algorithm 6. The C-spaces of the first inspection scenario for the camera F1 C S1 and projector F1 C S2 of the COMET PRO AE and their corresponding occlusion spaces are displayed in the left image of Figure 27. To assess the validity of the characterized C-spaces, we chose diverse extreme viewpoints at the vertices of the F1 C S1 manifold and performed real measurements. On the right side of Figure 27, the real monochrome images of the camera and the resulting point clouds at two validating viewpoints are displayed. The 2D images and point clouds prove that both features can be successfully acquired from both viewpoints, which confirms the free sight of the sensor and the illumination of both features without shadows.
Moreover, on the left of Figure 28, the constrained spaces of the first imaging device of each sensor, i.e., F2 C S1 and F2 C S3 , are visualized for the second inspection scenario. Analogously to the first scenario, two extreme viewpoints at the vertices of the manifolds were selected to assess the validity of the computed C-spaces. As expected, the real 2D images of all imaging devices and the resulting point clouds of both sensors at two exemplary extreme viewpoints (shown on the right of Figure 28) demonstrate that all features can be successfully acquired by the four imaging devices of both sensors.

4) Summary: Using an industrial RVS and regarding real viewpoint constraints, we were able to validate the formulation, characterization, and application of C-spaces for inspection tasks in an industrial context. These experiments show the suitability of our framework for an industrial application on a real RVS with multiple range sensors.
Furthermore, our strategy for merging individual C-spaces to capture more than one feature proved to be effective for the regarded vision tasks. However, a more complex task, such as the inspection of all door features, requires a more sophisticated strategy that considers which features can be acquired together. This question recalls the overall VPP, which falls outside the scope of this publication and which we intend to address in future work.

IX. SUMMARY AND OUTLOOK

A. Summary
The computation of valid viewpoints considering different system constraints, named the VGP in this publication, is considered a complex and unsolved challenge that lacks a generic and holistic framework for its proper formulation and resolution. In this paper, we outlined the VGP as a geometric problem that can be solved explicitly in the special Euclidean space SE(3) using suitable and explicit models of all related domains of an RVS and of the viewpoint constraints. Within this context, much of our effort was devoted to the comprehensive and systematic formulation of the VGP and the exhaustive characterization of domains and viewpoint constraints aligned with the formulation of geometric problems.
The core result of this study is the characterization of C-spaces, which can be understood as topological manifolds that span a space with infinitely many viewpoint solutions to acquire one feature or a group of features considering various viewpoint constraints and modeling uncertainties. Our approach focuses on providing infinitely many valid solutions rather than optimal ones. If the entire a priori knowledge of the RVS can be formalized and integrated into the C-space, then we can assume that any viewpoint within it is a local optimum. Our work shows that a handful of viewpoint constraints can be efficiently and simply modeled geometrically and integrated into a common framework to span such constrained spaces. Finally, based on a comprehensive academic example and a real application, we demonstrated the usability of such a framework.

B. Limitations and Chances
We are aware that the framework proposed in the present study may have some limitations that could prevent its straightforward application to other RVSs or use cases. First, it must be regarded that our framework falls under the category of model-based approaches. Therefore, a priori information for modeling the components of the considered RVS must first be provided. We consider that an exhaustive and explicit modeling of the necessary domains is necessary for delivering solutions that generalize to other applications and systems. For the benefit of generalization, complexity reduction, and computational efficiency, we made various simplifications, which could affect the accuracy of some models and might yield more conservative, yet more robust, solutions.
We firmly believe that the VGP can be efficiently solved geometrically. We demonstrated that many constraints can be explicitly and efficiently characterized by combining several techniques, including linear algebra, trigonometry, and geometrical analysis. In the scope of our experiments, we confirmed that the computation of the C-space manifolds based on these approaches ran efficiently in linear time. However, we also noted that algorithms comprising CSG Boolean operations are more computationally expensive, especially in calculations involving multiple Boolean operations on the same manifold. Although this limitation can be mitigated by filtering and smoothing algorithms that decimate manifolds, this characteristic could still be considered insufficient for some users and applications. Although the shortcomings of CSG Boolean techniques regarding their computational efficiency have been mentioned in prior works, we believe that the computational performance and parallelization capabilities of present CPUs and GPUs call for a reevaluation of their overall performance. Additionally, our work suggests that, combined with efficient image processing libraries, approaches requiring heavy use of CSG operations can be employed efficiently within many applications. Nevertheless, a comprehensive computational efficiency analysis to find a break-even point between our approach and others remains to be investigated. Moreover, we see room for improving the efficiency of some of the algorithms presented. For instance, the computation of the occlusion space and the integration of constraints could be improved using alternative strategies and more efficient algorithms implemented in low-level programming languages, and the performance of many algorithms could benefit enormously from computational optimization techniques such as parallelization and GPU computation. For replication purposes, we encourage the reader to make a thorough evaluation of the performance of the state-of-the-art libraries available at the present time, according to their application needs and system requirements.

C. Outlook
We consider the use of C-spaces appropriate for, but not limited to, vision tasks that rely on features. For example, we showed how our concept could be extended to applications that generally would not consider features and demonstrated its application to an object detection problem with a certain level of spatial uncertainty. Our ongoing work concentrates on assessing further applications and systems that may benefit from our approach, e.g., feature-based robot calibration or the adaptation to laser sensors. Further studies should be undertaken in this direction to verify the usability and explore the limitations of our framework within other applications and RVSs.
Recalling that we neglected any sensor parameters that may directly constrain the C-space, e.g., exposure time and gain, we consider further lines of research that integrate such a parameter space into the V-space. For instance, our ongoing study investigates combining a data-based approach for optimizing exposure times with the use of C-spaces for finding optimized sensor poses.
The most promising future research should be devoted to the overall VPP, which could be reformulated based on the present study and its findings. Further research that exploits this reformulation and comprises a holistic strategy for its resolution is already in progress.
We believe that our work will serve as a solid base and guideline for further studies to adapt and extend our framework according to the individual requirements of concrete applications and RVSs. The models and approaches used should be abstracted and generalized to the best possible level so that they can be used for different components of an RVS and applied to solve a broad range of vision tasks.

Computational Efficiency
The methods and techniques used should strive for a low level of computational complexity. Whenever possible, analytical and linear models should be preferred over complex techniques, such as stochastic and heuristic algorithms. Nevertheless, in offline scenarios, the trade-off between computing a good enough solution and doing so within an acceptable amount of time should be individually assessed.

Determinism
Due to traceability and safety issues within industrial applications, deterministic approaches should be prioritized.

Modularity and Scalability
The approaches and models should, in general, follow a modular structure and promote scalability.

Limited A-priori Knowledge
The parameters needed to implement the models and approaches should be easily accessible to end users. Neither in-depth optics nor robotics knowledge should be required.

Notes
The indices r and b apply only to pose vectors, frames, and transformations.

Example
The index notation can be better understood by considering the following examples:
• d: Let the geometry of a feature be described by a surface point g f ∈ R 3 .
• d: If the feature comprises more surface points, then let the point with index 2 be denoted by g f,2 ∈ R 3 .
• r: Assuming that the position of a surface point g f is described in the coordinate system of the feature, B f , then it follows: f g f . In case the base coordinate frame has the same notation as the domain itself, i.e., r = d, then just the index for the domain is given: g f = f g f .
• b: In case the frame of the surface point is given in the coordinate reference system of the object B o , the following notation applies: o g f = o f g f .

Fig. 1: Simplified graphical representation of the Viewpoint Generation Problem (VGP): which sensor poses p s are valid to acquire a feature f considering a set of diverse viewpoint constraints C? To answer this question, this study proposes the characterization of Feature-Based Constrained Spaces (C-spaces). The C-space denoted as C can be regarded as the geometrical representation of all viewpoint constraints in the special Euclidean space SE(3). Any sensor pose within it, ∀ p s ∈ C, can be considered valid to acquire a feature satisfying all viewpoint constraints C. The C-space is constituted by individual C-spaces C i (c i ), i.e., geometrical representations of each viewpoint constraint c i ∈ C.

Fig. 2: Overview of the RVS domains and kinematic model.

Fig. 5: Abstract and simplified 2D representation of the ideal C-space C * without viewpoint constraints. If viewpoint constraints are considered, the intersection of the corresponding C-spaces, e.g., C 1 , C 2 , C 3 , forms the C-space C.

Algorithm 1 Extreme Viewpoint Characterization of the Constrained Space C 1
Depending on the positioning frame of the sensor, B T TCP s or B s1 s , the space can be computed for the TCP ( TCP C 1 ) or the sensor lens ( s1 C 1 ). The vertices V C 1 of the manifold C 1 can be straightforwardly computed following these steps:
1) Consider a constant sensor orientation ref r f ix s .
4) Repeat Steps 2 and 3 for all l vertices of the frustum space.
5) Connect all vertices from V ref C 1 analogously to the vertices of the frustum space V I s to obtain the ref C 1 manifold.

Algorithm 2 Homeomorphism Characterization of the Constrained Space C 1
1) Consider a constant sensor orientation ref r f ix s to acquire a feature f .
2) Position the sensor reference frame at the feature's surface point origin ref p f s ( ref t s = B f ).

Fig. 7: Characterization of different C-spaces C 1 (f 0 , I s , f0 r s ) (blue manifolds) in SE(3) considering different sensor orientations using the homeomorphism formulation. The I-spaces (green manifolds) corresponding to different evaluated extreme viewpoints demonstrate that the feature f 0 can be captured even from a sensor pose lying at the vertices of the C-space; hence, any sensor pose within the C-space, p s ∈ C 1 , can also be considered valid.

Fig. 14: Characterization of diverse C-spaces in SE(3) considering the feature geometry to capture a 2D square feature f 1 and a 3D pocket feature f * 1 . The exemplary scene displays two C-spaces for acquiring feature f 1 with two different sensor orientations, C 31 ( f1 r s,1 ) and C 32 ( f1 r s,2 ), one C-space C 33 ( f * 1 r s,3 ) for capturing f * 1 , and the frames of one extreme viewpoint at each constrained space.

Fig. 16: Overview of the computation steps of Algorithm 3 for the characterization of the occlusion space C occl,κ 6 induced by an occluding rigid body κ.

Fig. 19: Characterization of the C-space for the first sensor in SE(3), C S1 (blue manifold), being delimited by the C-space of the second sensor, C S2 (orange manifold without fill), to acquire a square feature f 1 . The C S2 (orange manifold) analogously characterizes the C-space for sensor s 2 considering the constraints of s 1 .

Fig. 20: Verification of occlusion-free visibility at an extreme viewpoint s1 p s ∈ C S1 and s2 p s ∈ C S2 : rendered scene (left image), depth images of s 1 (right image in the upper corner) and s 2 (right image in the lower corner).

Fig. 21: Characterization of the robot workspace as a further C-space C 8 and integration with other C-spaces, e.g., here C 3 , using a CSG Intersection Operation.

Algorithm 5 Integration of C-spaces for multiple features
1) Compute all n C-spaces for each feature ∀f m ∈ F with m = 1, . . ., n: fm C := C ( r f ix s , C(f m )).
2) Compute the joint C-space by intersecting all n C-spaces: F C = f1 C ∩ · · · ∩ fn C .

Fig. 22: The characterization of the C-space spanned by two features is computed by intersecting their constrained spaces using the same sensor orientation.


Fig. 24: Characterization of the C-space spanned by a set of viewpoint constraints (see Table III) for a multisensor scenario to capture a set of features F .

Fig. 25: Left: Verification scene visualizing the frames and I-spaces of all imaging devices at the extreme sensor pose s1 p s ∈ F C S1 ( s2 p s ∈ F C S2 , s3 p s ∈ F C S3 , and s4 p s ∈ F C S4 ) that fulfills all viewpoint constraints. Right: Depth images of all imaging devices at the corresponding sensor pose.

Fig. 26: Overview of the core components of the reconfigured inspection RVS AIBox.

Fig. 33: Characterization of the C-space, F C , in SE(3) to acquire a set of features {f 1 , f 2 } ∈ F, characterized by the intersection of the individual C-spaces f1 C and f2 C .

TABLE II: Imaging parameters of the sensors s 1 and s 2 .

TABLE III: Overview and description of the viewpoint constraints considered for the simulation-based analysis. The workspace of the second imaging device is restricted in the z-axis to the working distance z s2 > 450 mm.

TABLE IV: Description of general requirements.

TABLE V: Overview of index notations for variables.
x := variable, parameter, vector, frame, or transformation
d := RVS domain (i.e., sensor, robot, feature, object, or environment) or element of a list or set
r := base frame of the coordinate system B r or space of feature f
b := origin frame of the coordinate system B b

TABLE VI: List of the most common symbols.

TABLE VII: Overview of features and occlusion objects used for the verification steps and the simulation-based analysis.

TABLE VIII: Scaling factors for the vertices of the constrained space V.