Viewpoint Generation Using Feature-Based Constrained Spaces for Robot Vision Systems
Abstract
1. Introduction
1.1. Viewpoint Generation Problem Solved Using $\mathcal{C}$-Spaces
1.2. Related Work
1.2.1. Model-Based
1.2.2. Non-Model-Based
1.2.3. Comparison and Need for Action
1.3. Outline
1.4. Contributions
 Mathematical, model-based, and modular framework to formulate the VGP based on $\mathcal{C}$-spaces and generic domain models.
 Formulation of nine viewpoint constraints using linear algebra, trigonometry, geometric analysis, and Constructive Solid Geometry (CSG) Boolean operations, in particular:
 Efficient and simple characterization of the $\mathcal{C}$-space based on sensor frustum, feature position, and feature geometry.
 Generic characterization of $\mathcal{C}$-spaces to consider the bistatic nature of range sensors, extendable to multisensor systems.
 Exhaustive supporting material (surface models, manifolds of computed $\mathcal{C}$-spaces, rendering results) to encourage benchmarking and further development (see Supplementary Materials).
 Determinism, efficiency, and simplicity: $\mathcal{C}$-spaces can be efficiently characterized using geometrical analysis, linear algebra, and CSG Boolean techniques.
 Generalization, transferability, and modularity: $\mathcal{C}$-spaces can be seamlessly used and adapted for different vision tasks and RVSs, including different imaging sensors (e.g., stereo, active light sensors) or even multiple range sensor systems.
 Robustness against model uncertainties: Known model uncertainties (e.g., kinematic model, sensor, or robot inaccuracies) can be explicitly modeled and integrated while characterizing $\mathcal{C}$-spaces. If unknown model uncertainties affect a chosen viewpoint, alternative solutions guaranteeing constraint satisfiability can be found seamlessly within the $\mathcal{C}$-spaces.
2. Domain Models of a Robot Vision System
2.1. General Notes
 General Requirements This paper follows a systematic and exhaustive formulation of the VGP, the domains of an RVS, and the viewpoint constraints to characterize $\mathcal{C}$-spaces in a generic, simple, and scalable way. To achieve this, and similar to previous studies [11,33,36], throughout our framework the following general requirements (GR) are considered: generalization, computational efficiency, determinism, modularity and scalability, and limited a priori knowledge. The given order does not consider any prioritization of the requirements. A more detailed description of the requirements can be found in Table A1.
 Terminology Based on our literature review, we have found that a common terminology has not been established yet. The employed terms and concepts depend on the related applications and hardware. To better relate our terminology to the related work, and in an attempt towards standardization, synonyms or related concepts are provided whenever possible. Please note that in some cases, the generality of some terms is prioritized over their precision. This may lead to some terms not corresponding entirely to our definition; therefore, we urge the reader to study these differences before treating them as exact synonyms.
 Notation Our publication considers many variables to describe the RVS domains comprehensively. To ease the identification and readability of variables, parameters, vectors, frames, and transformations, we use the index notation given in Table A2. Moreover, all topological spaces are given in calligraphic fonts, e.g., $\mathcal{V},\mathcal{P},\mathcal{I},\mathcal{C}$, while vectors, matrices, and rigid transformations are bold. Table A3 provides an overview of the most frequently used symbols.
2.2. General Models
 Kinematic model Each domain comprises a Kinematics subsection to describe its kinematic relationships. In particular, all necessary rigid transformations (given in the right-handed system) are introduced to calculate the sensor pose. The pose $\mathit{p}$ of any element is given by its translation $\mathit{t}\in {\mathbb{R}}^{3}$ and a rotation component that can be given as a rotation matrix $\mathit{R}\in {\mathbb{R}}^{3\times 3}$, ZYX Euler angles $\mathit{r}={({\alpha}^{z},{\beta}^{y},{\gamma}^{x})}^{T}$, or a quaternion $\mathit{q}\in \mathbb{H}$. For readability and simplicity, we mostly use the Euler angle representation throughout this paper. Considering the special orthogonal group $SO\left(3\right)\subset {\mathbb{R}}^{3\times 3}$, the pose $\mathit{p}\in SE\left(3\right)$ is given in the special Euclidean group $SE\left(3\right)={\mathbb{R}}^{3}\times SO\left(3\right)$ [44]. The sensor pose ${}^{f}{\mathit{p}}_{s}\in SE\left(3\right)$ in the feature's coordinate system ${B}_{f}$ is given by the kinematic chain:$${}^{f}{\mathit{p}}_{s} = {}_{s}^{f}\mathit{T} = {}_{o}^{f}\mathit{T}\,\cdot\,{}_{w}^{o}\mathit{T}\,\cdot\,{}_{r}^{w}\mathit{T}\,\cdot\,{}_{s}^{r}\mathit{T}.$$
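As an illustration, the kinematic chain above amounts to a product of homogeneous transforms. The following sketch uses NumPy with placeholder frame values (the transforms below are hypothetical, not the paper's calibration data):

```python
import numpy as np

def make_transform(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(alpha: float) -> np.ndarray:
    """Rotation about the z-axis by alpha radians."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Illustrative chain: feature <- object, object <- world, world <- robot, robot <- sensor.
T_f_o = make_transform(np.eye(3), np.array([-0.1, 0.0, 0.0]))
T_o_w = make_transform(rot_z(np.pi / 2), np.array([1.0, 0.0, 0.0]))
T_w_r = make_transform(np.eye(3), np.array([0.0, 0.5, 0.0]))
T_r_s = make_transform(np.eye(3), np.array([0.0, 0.0, 0.8]))

# Sensor pose in the feature frame: successive products along the chain.
T_f_s = T_f_o @ T_o_w @ T_w_r @ T_r_s
sensor_position_in_feature = T_f_s[:3, 3]
```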
 Surface model A set of 3D surface models $\kappa \in K$ characterizes the volumetric occupancy of all rigid bodies in the environment. The surface models are not always explicitly mentioned within the domains. Nevertheless, we assume that the surface model of any rigid body is required if this collides with the robot or sensor or impedes the sensor’s sight to a feature.
2.3. Object
 Kinematics The origin coordinate system of o is located at frame ${B}_{o}$. The transformation to the world coordinate system ${B}_{w}$ is given by ${}_{o}^{w}\mathit{T}$.
 Surface Model Since our approach does not focus on the object but rather on its features, the object may have an arbitrary topology.
2.4. Feature
 Kinematics We assume that the translation ${}^{o}{\mathit{t}}_{f}$ and orientation ${}^{o}{\mathit{r}}_{f}$ of the feature's origin are given in the object's coordinate system ${B}_{o}$. The feature's orientation may be given in its minimal expression, i.e., just the feature's surface normal vector ${}^{o}{\mathit{n}}_{f}$. In this case, the full orientation is calculated by letting the feature's normal be the basis z-vector ${}^{o}{\mathit{e}}_{f}^{z}={}^{o}{\mathit{n}}_{f}$ and choosing the remaining basis vectors ${}^{o}{\mathit{e}}_{f}^{x}$ and ${}^{o}{\mathit{e}}_{f}^{y}$ to be mutually orthonormal. The feature's frame is given as follows:$${B}_{f}={}^{o}{\mathit{T}}_{f}({}^{o}{\mathit{t}}_{f},{}^{o}{\mathit{r}}_{f}).$$
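The construction of a full orientation from the normal vector alone can be sketched as follows; the seed-vector choice for the x basis vector is an illustrative assumption, not the paper's specific convention:

```python
import numpy as np

def frame_from_normal(n: np.ndarray) -> np.ndarray:
    """Return a 3x3 rotation whose z basis vector equals the (unit) normal n,
    with the x and y basis vectors chosen mutually orthonormal."""
    e_z = n / np.linalg.norm(n)
    # Pick any vector not parallel to e_z to seed the x basis vector (assumption).
    seed = np.array([1.0, 0.0, 0.0]) if abs(e_z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e_x = np.cross(seed, e_z)
    e_x /= np.linalg.norm(e_x)
    e_y = np.cross(e_z, e_x)          # completes a right-handed frame
    return np.column_stack((e_x, e_y, e_z))

R = frame_from_normal(np.array([0.0, 0.0, 2.0]))
```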
 Geometry While a feature can be sufficiently described by its position and normal vector, a broader formulation is required within many applications. For example, dimensional metrology tasks deal with a more comprehensive catalog of geometries, e.g., edges, pockets, holes, slots, and spheres.
 Generalization and Simplification Moreover, we consider a discretized geometry model of a feature comprising a finite set of surface points ${\mathit{g}}_{\mathit{f}}\in {G}_{f}$ with ${\mathit{g}}_{\mathit{f}}\in {\mathbb{R}}^{3}$. Since our work primarily focuses on 2D features, it is assumed that all surface points lie on a common plane, which is orthogonal to the feature's normal vector ${}^{o}{\mathit{n}}_{f}$; the normal, in turn, is collinear with the z-axis of the feature's frame ${B}_{f}$. Towards providing a more generic feature model, the topology of all features is approximated using a square feature with a unique side length ${l}_{f}\in {L}_{F}$ and five surface points ${{\mathit{g}}_{\mathit{f}}}_{,c},\phantom{\rule{0.222222em}{0ex}}c=\{0,1,2,3,4\}$, at the center and at the four corners of the square. Figure 4 visualizes this simplification to generalize diverse feature geometries.
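A minimal sketch of this discretized square-feature model (center point plus four corners, all at $z=0$ in the feature frame ${B}_{f}$):

```python
import numpy as np

def square_feature_points(l_f: float) -> np.ndarray:
    """Five surface points of the square feature approximation: the center g_f,0
    plus four corners g_f,1..g_f,4, all lying in the feature's x-y plane (z = 0)."""
    h = l_f / 2.0
    return np.array([
        [0.0, 0.0, 0.0],   # g_f,0: feature origin (center)
        [ h,  h, 0.0],     # g_f,1 .. g_f,4: corners
        [-h,  h, 0.0],
        [-h, -h, 0.0],
        [ h, -h, 0.0],
    ])

pts = square_feature_points(l_f=0.04)  # illustrative 40 mm side length
```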
2.5. Sensor
 Kinematics The sensor’s kinematic model considers the following relevant frames: ${B}_{s}^{TCP}$, ${B}_{s}^{{s}_{1}}$, and ${B}_{s}^{{s}_{2}}$. Taking into account the established notation for end effectors within the robotics field, we consider that the frame ${B}_{s}^{TCP}$ lies at the sensor’s tool center point (TCP). We assume that the frame of the TCP is located at the geometric center of the frustum space and that the rigid transformation ${{}_{TCP}{}^{ref}\mathit{T}}_{s}^{}$ to a reference frame such as the sensor’s mounting point is known.Additionally, we consider that frame ${B}_{s}^{{s}_{1}}$ lies at the reference frame of the first imaging device that corresponds to the imaging parameters ${I}_{s}$. We assume that the rigid transformation ${{}_{{s}_{1}}{}^{ref}\mathit{T}}_{s}^{}$ between the sensor lens and a known reference frame is also known. ${{}_{{s}_{2}}{}^{ref}\mathit{T}}_{s}^{}$ provides the transformation of the second imaging device at the frame ${B}_{s}^{{s}_{2}}$. The second imaging device ${s}_{2}$ might be a second camera considering a stereo sensor or the light source origin in an active sensor system.
 Frustum space The frustum space $\mathcal{I}$-space (related terms: visibility frustum, measurement volume, field-of-view space, sensor workspace) is described by a set of different sensor imaging parameters ${I}_{s}$, such as the depth of field ${d}_{s}$ and the horizontal and vertical field of view (FOV) angles ${\theta}_{s}^{x}$ and ${\psi}_{s}^{y}$. Alternatively, some sensor manufacturers may also provide the dimensions and locations of the near ${h}_{s}^{near}$, middle ${h}_{s}^{middle}$, and far ${h}_{s}^{far}$ viewing planes of the sensor. The sensor parameters ${I}_{s}$ allow only the topology of the $\mathcal{I}$-space to be described. To fully characterize the topological space in the special Euclidean group, the sensor pose ${\mathit{p}}_{s}$ must be considered:$$\begin{array}{cc}\hfill {\mathcal{I}}_{s}:={\mathcal{I}}_{s}({\mathit{p}}_{s},{I}_{s})=& \{{\mathit{p}}_{s}\in SE\left(3\right),\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& ({d}_{s},{h}_{s}^{near},{h}_{s}^{far},{\theta}_{s}^{x},{\psi}_{s}^{y})\in {I}_{s}\}\hfill \end{array}.$$The $\mathcal{I}$-space can be straightforwardly calculated based on the kinematic relationships of the sensor and the imaging parameters. The resulting 3D manifold ${\mathcal{I}}_{s}$ is described by its vertices ${\mathit{V}}_{k}^{{\mathcal{I}}_{s}}:={\mathit{V}}_{k}\left({\mathcal{I}}_{s}\right)={({V}_{k}^{x},{V}_{k}^{y},{V}_{k}^{z})}^{T}$ with $k=1,\dots ,l$ and corresponding edges and faces. We assume that the origin of the frustum space is located at the TCP frame, i.e., ${B}_{s}^{{\mathcal{I}}_{s}}={B}_{s}^{TCP}$. The resulting shape of the $\mathcal{I}$-space usually has the form of a square frustum. Figure 5 visualizes the frustum shape and the geometrical relationships of the $\mathcal{I}$-space.
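Assuming a symmetric square frustum centered on the optical axis (+z) with full FOV angles and near/far plane distances, the eight $\mathcal{I}$-space vertices can be sketched as follows (the parameter values are illustrative, not a real sensor's datasheet):

```python
import numpy as np

def frustum_vertices(h_near: float, h_far: float,
                     theta_x: float, psi_y: float) -> np.ndarray:
    """Eight vertices of a square-frustum I-space in the sensor frame, assuming
    the optical axis is +z and theta_x / psi_y are the full FOV angles."""
    verts = []
    for z in (h_near, h_far):
        half_w = z * np.tan(theta_x / 2.0)   # half width at viewing plane z
        half_h = z * np.tan(psi_y / 2.0)     # half height at viewing plane z
        for sx in (-1.0, 1.0):
            for sy in (-1.0, 1.0):
                verts.append([sx * half_w, sy * half_h, z])
    return np.array(verts)

V = frustum_vertices(h_near=0.3, h_far=0.6,
                     theta_x=np.deg2rad(40), psi_y=np.deg2rad(30))
```

The depth of field follows as $d_s = h_s^{far} - h_s^{near}$.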
 Range Image A range image (related terms: 3D measurement, 3D image, depth image, depth maps, point cloud) refers to the generated output of the sensor after triggering a measurement action. A range image is described as a collection of 3D points denoted by ${\mathit{g}}_{s}^{}\in {\mathbb{R}}^{3}$, where each point corresponds to a surface point of the measured object.
 Measurement accuracy The measurement accuracy depends on various sensor parameters and external factors and may vary within the frustum space [21]. If these influences are quantifiable, an accuracy model can be considered within the computation of the $\mathcal{C}$-space. For example, ref. [34] proposed a method based on a look-up table to specify quality disparities within the frustum.
 Sensor Orientation When choosing the sensor pose for measuring an object's surface point or a feature, additional constraints must be fulfilled regarding its orientation. One fundamental requirement that must be satisfied to guarantee the acquisition of a surface point is the consideration of the incidence angle ${}^{f}{\phi}_{s}$ (related terms: inclination, acceptance, view, or tilt angle). This angle is expressed as the angle between the feature's normal ${\mathit{n}}_{f}$ and the sensor's optical axis (z-axis) ${\mathit{e}}_{s}^{z}$ and can be calculated as follows:$${}^{f}{\phi}_{s}^{max}>\left|{}^{f}{\phi}_{s}\right|,\qquad {}^{f}{\phi}_{s}=\arccos\left(\frac{{\mathit{n}}_{f}\,\cdot\,{\mathit{e}}_{s}^{z}}{\left|{\mathit{n}}_{f}\right|\,\left|{\mathit{e}}_{s}^{z}\right|}\right).$$The maximal incidence angle ${}^{f}{\phi}_{s}^{max}$ is normally provided by the sensor's manufacturer. If the maximal angle is not given in the sensor specifications, some works have suggested empirical values for different systems. For example, ref. [46] proposes a maximum angle of $60{}^{\circ}$, ref. [47] suggests $45{}^{\circ}$, while ref. [48] proposes a tilt angle of $30{}^{\circ}$ to $50{}^{\circ}$. The incidence angle can also be expressed on the basis of the Euler angles (pan, tilt) around the x- and y-axes: ${}^{f}{\phi}_{s}({}^{f}{\beta}_{s}^{y},{}^{f}{\gamma}_{s}^{x})$. Furthermore, the rotation of the sensor around the optical axis is given by the Euler angle ${\alpha}_{s}^{z}$ (related terms: swing, twist). Normally, this angle does not directly influence the acquisition quality of the range image and can be chosen arbitrarily.
Nevertheless, depending on the lighting conditions or the position of the light source while considering active systems, this angle might be more relevant and influence the acquisition parameters of the sensor, e.g., the exposure time. Additionally, if the shape of the frustum is asymmetrical, the optimization of ${\alpha}_{s}^{z}$ should be considered.
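The incidence-angle constraint of Equation (5) can be sketched as a simple check; the default $\phi^{max}$ below is an empirical placeholder in line with the values cited above, not a specific sensor's datasheet limit:

```python
import numpy as np

def incidence_angle(n_f: np.ndarray, e_z: np.ndarray) -> float:
    """Incidence angle (radians) between the feature normal n_f and the
    sensor's optical axis e_z, per the arccos of the normalized dot product."""
    cos_phi = np.dot(n_f, e_z) / (np.linalg.norm(n_f) * np.linalg.norm(e_z))
    return np.arccos(np.clip(cos_phi, -1.0, 1.0))  # clip guards rounding errors

def satisfies_orientation(n_f, e_z, phi_max_deg: float = 45.0) -> bool:
    """Check |phi| < phi_max; phi_max_deg is an assumed empirical value."""
    return incidence_angle(n_f, e_z) < np.deg2rad(phi_max_deg)
```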
2.6. Robot
 Kinematics The robot base coordinate frame is placed at ${B}_{r}$. We assume that the rigid transformations between the robot basis and the robot flange, ${}_{{f}_{r}}^{r}\mathit{T}$, and between the robot flange and the sensor, ${}_{s}^{{f}_{r}}\mathit{T}$, are known. We also assume that the Denavit-Hartenberg (DH) parameters are known and that the rigid transformation ${}_{{f}_{r}}^{r}\mathit{T}\left(DH\right)$ can be calculated using a forward kinematic model. The sensor pose in the robot's coordinate system is given by$${}^{r}{\mathit{p}}_{s}={}_{s}^{r}\mathit{T}={}_{{f}_{r}}^{r}\mathit{T}\,\cdot\,{}_{s}^{{f}_{r}}\mathit{T}.$$The robot workspace is considered to be a subset of the special Euclidean group, thus ${\mathcal{W}}_{r}\subseteq SE\left(3\right)$. This topological space comprises all reachable robot poses to position the sensor, ${}^{r}{\mathit{p}}_{s}\in {\mathcal{W}}_{r}$.
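The transformation ${}_{{f}_{r}}^{r}\mathit{T}(DH)$ can be sketched by chaining standard DH link transforms; the two-link parameters below are hypothetical and not the Fanuc robot's actual DH table:

```python
import numpy as np

def dh_transform(a: float, alpha: float, d: float, theta: float) -> np.ndarray:
    """Standard Denavit-Hartenberg link transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def flange_pose(dh_params, joints) -> np.ndarray:
    """Base-to-flange transform as the product of per-joint DH transforms."""
    T = np.eye(4)
    for (a, alpha, d), theta in zip(dh_params, joints):
        T = T @ dh_transform(a, alpha, d, theta)
    return T

# Hypothetical 2-link planar arm: (a, alpha, d) per link, joint angles in radians.
dh = [(0.5, 0.0, 0.0), (0.3, 0.0, 0.0)]
T = flange_pose(dh, joints=[0.0, np.pi / 2])
```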
 Robot Absolute Position Accuracy It is assumed that the robot has a maximal absolute pose accuracy error of ${\epsilon}_{r}$ in its workspace and that the robot repeatability is much smaller than the absolute accuracy; hence, it is not considered further.
2.7. Environment
2.8. General Assumptions and Limitations
 Sensor compatibility with feature geometry: Our approach assumes that a feature and its entire geometry can be captured with a single range image.
 Range Image Quality: The sensor can acquire a range image of sufficient quality. Effects that may compromise the range image quality and have not been regarded previously are neglected; these include measurement repeatability, lighting conditions, reflection effects, and random sensor noise.
 Sensor Acquisition Parameters: Our work does not consider the optimization of acquisition sensor parameters such as exposure time, gain, and image resolution, among others.
 Robot Model: Since we assume that a range image is acquired only statically, a robot dynamics model is not contemplated. Hence, constraints regarding velocity, acceleration, jerk, or torque limits are not considered within the scope of our work.
3. Problem Formulation
3.1. Viewpoint and $\mathcal{V}$Space
3.2. Viewpoint Constraints
3.3. Modularization of the Viewpoint Planning Problem
3.4. The Viewpoint Generation Problem
3.5. VGP as a Geometrical Problem in the Context of Configuration Spaces
“Once the configuration space is clearly understood, many motion planning problems that appear different in terms of geometry and kinematics can be solved by the same planning algorithms. This level of abstraction is therefore very important.”[54]
3.6. VGP with Ideal $\mathcal{C}$Spaces
3.7. VGP with $\mathcal{C}$Spaces
3.7.1. Motivation
3.7.2. Formulation
 If the i-th constraint, ${c}_{i}$, can be spatially modeled, there exists a topological space denoted as ${\mathcal{C}}_{i}$, which can ideally be formulated as a subset of the special Euclidean group:$${\mathcal{C}}_{i}\subseteq SE\left(3\right).$$In a broader definition, we consider that the topological space for each constraint is spanned by a subset of the Euclidean space denoted as ${\mathcal{T}}_{s}\subseteq {\mathbb{R}}^{3}$ and a special orthogonal group subset given by ${\mathcal{R}}_{s}\subseteq SO\left(3\right)$. Hence, the topological space of a viewpoint constraint is given as follows:$$\begin{array}{cc}\hfill {\mathcal{C}}_{i}=& {\mathcal{T}}_{s}\times {\mathcal{R}}_{s}\hfill \\ \hfill =& \{{\mathit{p}}_{s}\in {\mathcal{C}}_{i},{\mathit{p}}_{s}\in SE\left(3\right)\}\hfill \\ \hfill =& \{{\mathit{p}}_{s}({\mathit{t}}_{s},{\mathit{r}}_{s})\in {\mathcal{C}}_{i}\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& \mid {\mathit{t}}_{s}\in {\mathcal{T}}_{s},{\mathit{r}}_{s}\in {\mathcal{R}}_{s}\}.\hfill \end{array}$$
 If there exists at least one sensor pose in the i-th $\mathcal{C}$-space, $\exists {\mathit{p}}_{s}\in {\mathcal{C}}_{i}$, then this sensor pose fulfills the viewpoint constraint ${c}_{i}$ to acquire feature f; hence, a valid viewpoint exists: $(f,{\mathit{p}}_{s},{c}_{i})\in \mathcal{V}$.
 If there exists a topological space ${\mathcal{C}}_{i}$ for each constraint, $\forall {c}_{i}\in \tilde{C}$, then the intersection of all individual constrained spaces constitutes the joint $\mathcal{C}$-space ${\mathcal{C}}\subseteq SE\left(3\right)$:$$\mathcal{C}\left(\tilde{C}\right)=\bigcap _{{c}_{i}\in \tilde{C}}{\mathcal{C}}_{i}\left({c}_{i}\right).$$
 If the joint constrained space is a nonempty set, i.e., $\mathcal{C}\ne \varnothing $, then there exists at least one sensor pose $\exists {\mathit{p}}_{s}\in \mathcal{C}$, and consequently a viewpoint $(f,{\mathit{p}}_{s},\tilde{C})\in \mathcal{V}$ that fulfills all viewpoint constraints.
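The logic of the joint $\mathcal{C}$-space can be sketched with membership predicates; the box-shaped frustum region and spherical occluder below are illustrative stand-ins for the real constrained spaces, and only the translational part of $SE(3)$ is checked:

```python
import numpy as np

# Each constraint space is represented here by a membership predicate on the
# sensor position t_s (a simplification of the full SE(3) formulation).
def in_frustum_space(t) -> bool:
    """c1: stay within an illustrative box-shaped region."""
    return bool(np.all(np.abs(t[:2]) <= 0.2)) and 0.3 <= t[2] <= 0.6

def outside_occlusion(t) -> bool:
    """c6: keep clear of an illustrative occluding sphere."""
    return np.linalg.norm(t - np.array([0.0, 0.0, 0.45])) > 0.05

constraints = [in_frustum_space, outside_occlusion]

def in_joint_c_space(t) -> bool:
    """Intersection of all individual constrained spaces: every predicate holds."""
    return all(c(t) for c in constraints)
```

A pose inside every individual space is thus a valid viewpoint candidate; an empty intersection means no single viewpoint satisfies all constraints.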
4. Methods: Formulation, Characterization, and Verification of $\mathcal{C}$Spaces
4.1. Frustum Space, Feature Position, and Fixed Sensor Orientation
4.1.1. Formulation
4.1.2. Characterization
 Extreme Viewpoints Interpretation The simplest way to understand and visualize the topological space ${\mathcal{C}}_{1}$ is to consider all possible extreme viewpoints to acquire a feature f. These viewpoints can easily be found by positioning the sensor so that each vertex (corner) of the $\mathcal{I}$-space, ${\mathit{V}}_{k}^{{\mathcal{I}}_{s}}$, lies at the feature's origin ${B}_{f}$, which corresponds to the position of the surface point ${{\mathit{g}}_{\mathit{f}}}_{,0}$. The position of such an extreme viewpoint corresponds to the k-th vertex ${\mathit{V}}_{k}^{{\mathcal{C}}_{1}}\in {V}^{{\mathcal{C}}_{1}}$ of the manifold ${\mathcal{C}}_{1}$. Depending on the positioning frame of the sensor, ${B}_{s}^{TCP}$ or ${B}_{s}^{{s}_{1}}$, the space can be computed for the TCP (${{}_{TCP}\mathcal{C}}_{1}$) or the sensor lens (${{}_{{s}_{1}}\mathcal{C}}_{1}$). The vertices can be straightforwardly computed following the steps given in Algorithm A1. Figure 7 (left) illustrates the geometric relations for computing ${\mathcal{C}}_{1}$ and a simplified representation of the resulting manifolds in ${\mathbb{R}}^{2}$ for the sensor TCP ${B}_{s}^{TCP}$ and lens ${B}_{s}^{{s}_{1}}$.
 Homeomorphism Formulation Note that the manifold ${{}_{ref}\mathcal{C}}_{1}$ illustrated in Figure 7a has the same topology as the $\mathcal{I}$-space. Thus, it can be assumed that there exists a homeomorphism between both spaces such that $h:{\mathcal{I}}_{s}\to {\mathcal{C}}_{1}$. Letting the function h correspond to a point reflection over the geometric center of the frustum space, the vertices of the manifold ${V}^{{{}_{ref}\mathcal{C}}_{1}}$ can be straightforwardly estimated following the steps described in Algorithm 1. The resulting manifold for the TCP frame is shown in Figure 7b.
Algorithm 1 Homeomorphism Characterization of the Constrained Space ${\mathcal{C}}_{1}^{}$ 

 the frames of all vertices of the frustum space ${\mathit{V}}_{k}\left({\mathcal{I}}_{s}\right)$, $k=1,\dots ,l$, are known,
 the frustum space is a watertight manifold,
 and the space between connected vertices of the frustum space is linear; hence, adjacent vertices are connected only by straight edges.
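Under these assumptions, the point-reflection step of Algorithm 1 can be sketched as follows (the vertex coordinates are illustrative):

```python
import numpy as np

def c1_vertices_by_reflection(frustum_vertices: np.ndarray) -> np.ndarray:
    """Point-reflect every frustum vertex over the frustum's geometric center,
    mirroring the homeomorphism h: I_s -> C_1."""
    center = frustum_vertices.mean(axis=0)
    return 2.0 * center - frustum_vertices

# Illustrative square-frustum vertices (near plane z = 0.3, far plane z = 0.6).
V_I = np.array([
    [-0.1, -0.1, 0.3], [ 0.1, -0.1, 0.3], [ 0.1, 0.1, 0.3], [-0.1, 0.1, 0.3],
    [-0.2, -0.2, 0.6], [ 0.2, -0.2, 0.6], [ 0.2, 0.2, 0.6], [-0.2, 0.2, 0.6],
])
V_C1 = c1_vertices_by_reflection(V_I)
```

The reflected manifold preserves the frustum's topology, as the homeomorphism argument requires: only vertex positions change, not their connectivity.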
4.1.3. Verification
4.1.4. Summary
4.2. Range of Orientations
4.2.1. Formulation
4.2.2. Characterization
 Discretization without Interpolation Note that ${\mathcal{C}}_{2}\left({R}_{s}\right)$ spans a topological space that is only valid for the sensor orientations in ${R}_{s}$ and that the sensor orientation ${\mathit{r}}_{s}$ cannot be arbitrarily chosen within the range ${\mathit{r}}_{s}^{min}<{\mathit{r}}_{s}<{\mathit{r}}_{s}^{max}$. This characteristic can be more easily understood by comparing the volume form, $Vol$, of the $\mathcal{C}$-spaces ${\mathcal{C}}_{2}({\mathit{r}}_{s}^{min},{\mathit{r}}_{s}^{max})$ and ${\mathcal{C}}_{2}({\mathit{r}}_{s}^{min},{\mathit{r}}_{s}^{ideal},{\mathit{r}}_{s}^{max})$, which shows that the $\mathcal{C}$-space ${\mathcal{C}}_{2}({\mathit{r}}_{s}^{min},{\mathit{r}}_{s}^{max})$ is less restrictive:$$Vol\left({\mathcal{C}}_{2}({\mathit{r}}_{s}^{min},{\mathit{r}}_{s}^{max})\right)>Vol\left({\mathcal{C}}_{2}({\mathit{r}}_{s}^{min},{\mathit{r}}_{s}^{ideal},{\mathit{r}}_{s}^{max})\right).$$This characteristic can particularly be appreciated at the top of the ${{}_{{s}_{1}}\mathcal{C}}_{2}({\mathit{r}}_{s}^{min},{\mathit{r}}_{s}^{max})$ manifold in Figure 9a. Thus, it should be kept in mind that the constrained space ${\mathcal{C}}_{2}\left({R}_{s}\right)$ does not allow an explicit interpolation within the orientations of ${R}_{s}$.
 Approximation of ${\mathcal{C}}_{2}$ However, as can be observed in Figure 9, the topological space spanned using a step size of $10{}^{\circ}$, ${\mathcal{C}}_{2}\left({R}_{s}({r}_{s}^{d}=10{}^{\circ})\right)$, is almost identical to the space obtained if the step size is relaxed to $20{}^{\circ}$, ${\mathcal{C}}_{2}\left({R}_{s}({r}_{s}^{d}=20{}^{\circ})\right)$. Hence, it can be assumed for this case that the $\mathcal{C}$-spaces are almost identical and the following condition holds:$${\mathcal{C}}_{2}\left({R}_{s}({r}_{s}^{d}=10{}^{\circ})\right)\approx {\mathcal{C}}_{2}\left({R}_{s}({r}_{s}^{d}=20{}^{\circ})\right).$$
4.2.3. Verification
4.2.4. Summary
4.3. Feature Geometry
4.3.1. Formulation
4.3.2. Generic Characterization
4.3.3. Characterization of the $\mathcal{C}$Space with Null Rotation
4.3.4. Rotation around One Axis
 Rotation around z-axis ${\alpha}_{s}^{z}\ne 0$ Assuming a sensor rotation around the optical axis, ${}^{f}{\mathit{r}}_{s}^{z}({\alpha}_{s}^{z}\ne 0,{\phi}_{s}({\beta}_{s}^{y},{\gamma}_{s}^{x})=0)$ (see Figure 11), the $\mathcal{C}$-space is scaled just along the vertical and horizontal axes, using the following scaling factors:$$\begin{array}{ccc}{\Delta}_{k}^{x}({\mathit{r}}_{s}^{z},{l}_{f})& =& \frac{{l}_{f}}{2}\,\cdot\,\left(\cos\left(|{\alpha}_{s}^{z}|\right)+\sin\left(|{\alpha}_{s}^{z}|\right)\right)\\ {\Delta}_{k}^{y}({\mathit{r}}_{s}^{z},{l}_{f})& =& \frac{{l}_{f}}{2}\,\cdot\,\left(\cos\left(|{\alpha}_{s}^{z}|\right)+\sin\left(|{\alpha}_{s}^{z}|\right)\right)\\ {\Delta}_{k}^{z}({\mathit{r}}_{s}^{z},{l}_{f})& =& 0.\end{array}$$
 Rotation around x-axis or y-axis (${\gamma}_{s}^{x}\ne 0\,\veebar\,{\beta}_{s}^{y}\ne 0$): A rotation of the sensor around the x-axis, ${\mathit{r}}_{s}({\gamma}_{s}^{x}\ne 0,{\alpha}_{s}^{z}={\beta}_{s}^{y}=0)$, or y-axis, ${\mathit{r}}_{s}({\beta}_{s}^{y}\ne 0,{\alpha}_{s}^{z}={\gamma}_{s}^{x}=0)$, requires deriving individual trigonometric relationships for each vertex of ${\mathcal{C}}_{3}$. Besides the feature length, other parameters such as the FOV angles (${\theta}_{s}^{x},{\psi}_{s}^{y}$) and the direction of the rotation must be considered. The scaling factors for the eight vertices of the $\mathcal{C}$-space while considering a rotation around the x-axis or y-axis can be found in Table A5, regarding the following general auxiliary lengths for ${}^{f}{\mathit{r}}_{s}({\alpha}_{s}^{z}={\gamma}_{s}^{x}=0,{\beta}_{s}^{y}\ne 0)$ (left) and for ${}^{f}{\mathit{r}}_{s}({\alpha}_{s}^{z}={\beta}_{s}^{y}=0,{\gamma}_{s}^{x}\ne 0)$ (right):
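For the simpler z-rotation case above, the scaling factors reduce to a short function; a minimal sketch (angles in radians, $l_f$ the square feature's side length):

```python
import numpy as np

def scaling_z_rotation(alpha_z: float, l_f: float):
    """Scaling factors (Delta_x, Delta_y, Delta_z) of the C-space vertices for a
    sensor rotation alpha_z about the optical axis and square feature side l_f."""
    s = (l_f / 2.0) * (np.cos(abs(alpha_z)) + np.sin(abs(alpha_z)))
    return s, s, 0.0  # no scaling along the optical axis

dx, dy, dz = scaling_z_rotation(np.deg2rad(45), l_f=0.04)
```

As expected, the scaling is largest at $45^{\circ}$ (a factor of $\sqrt{2}\,l_f/2$) and reduces to $l_f/2$ at $0^{\circ}$, where the square's corners align with the frustum edges.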
4.3.5. Generalization to 3D Features
4.3.6. Verification
4.3.7. Summary
4.4. Constrained Spaces Using Scaling Vectors
4.4.1. Formulation
 Integrating Multiple Constraints The characterization of a joint constrained space, which integrates several viewpoint constraints, can be computed using different approaches. The constrained spaces can first be computed and intersected iteratively using CSG operations, as originally proposed in Equation (12). However, if the space spanned by such viewpoint constraints can be formulated according to Equation (27), the characterization of the constrained space ${\mathcal{C}}$ can be more efficiently calculated by simply adding all scaling vectors:$${\mathit{V}}_{k}^{\mathcal{C}}\left(\tilde{C}\right)={\mathit{V}}_{k}^{{\mathcal{C}}_{1}}+\sum _{{c}_{i}\in \tilde{C},\,i\ne 1}\Delta \left({c}_{i}\right).$$While the computational cost of CSG operations is at least proportional to the number of vertices of the two surface models involved, note that the complexity of the sum of Equation (28) is just proportional to the number of viewpoint constraints.
 Compatible Constraints Within this subsection, we propose further possible viewpoint constraints that can be characterized according to the scaling formulation introduced by Equation (28).
 Kinematic errors: Considering the fourth viewpoint constraint and the assumptions addressed in Section 2.8, the maximal kinematic error $\epsilon$ is given by the sum of the alignment error ${\epsilon}_{e}$, the modeling error of the sensor imaging parameters ${\epsilon}_{s}$, and the absolute position accuracy of the robot ${\epsilon}_{r}$:$$\left|\epsilon\right|=\left|{\epsilon}_{e}\right|+\left|{\epsilon}_{s}\right|+\left|{\epsilon}_{r}\right|.$$Assuming that the total kinematic error has the same magnitude in all directions, all vertices can be equally scaled. The vertices of the $\mathcal{C}$-space ${\mathcal{C}}_{4}\left(\epsilon\right)$ are computed using the scaling vector $\Delta\left(\epsilon\right)$:$${\mathit{V}}_{k}^{{\mathcal{C}}_{4}}\left(\Delta\left(\epsilon\right)\right)={\mathit{V}}_{k}^{{\mathcal{C}}_{1}}+\Delta\left(\epsilon\right).$$
 Sensor Accuracy: If the accuracy of the sensor ${a}_{s}$ can be quantified within the sensor frustum, then similarly to the kinematic error, the manifold of the $\mathcal{C}$-space ${\mathcal{C}}_{5}\left({a}_{s}\right)$ can be characterized using a scaling vector $\Delta \left({a}_{s}\right)$:$${\mathit{V}}_{k}^{{\mathcal{C}}_{5}}\left(\Delta\left({a}_{s}\right)\right)={\mathit{V}}_{k}^{{\mathcal{C}}_{1}}+\Delta\left({a}_{s}\right).$$
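The scaling-vector summation of Equation (28) can be sketched as follows; the vertex coordinates and $\Delta$ values are illustrative, and a single shared scaling vector per constraint is assumed (in the full formulation $\Delta$ may differ per vertex):

```python
import numpy as np

def joint_c_space_vertices(V_c1: np.ndarray, scaling_vectors) -> np.ndarray:
    """Vertices of the joint C-space: each C_1 vertex plus the sum of all
    scaling vectors Delta(c_i), instead of iterative CSG intersections."""
    total = np.sum(scaling_vectors, axis=0)  # cost ~ number of constraints
    return V_c1 + total

V_c1 = np.array([[0.1, 0.1, 0.6], [-0.1, -0.1, 0.6]])  # two illustrative vertices
delta_feature = np.array([-0.02, -0.02, 0.0])          # Delta(c3): feature geometry
delta_error   = np.array([-0.005, -0.005, -0.005])     # Delta(c4): kinematic error
V_joint = joint_c_space_vertices(V_c1, [delta_feature, delta_error])
```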
4.4.2. Summary
4.5. Occlusion Space
4.5.1. Formulation
4.5.2. Characterization
4.5.3. Verification
4.5.4. Summary
4.6. Multisensor
4.6.1. Formulation
4.6.2. Characterization
4.6.3. Verification
4.6.4. Summary
4.7. Robot Workspace
4.7.1. Formulation
4.7.2. Characterization and Verification
4.7.3. Summary
4.8. MultiFeature Spaces
4.8.1. Characterization
4.8.2. Verification
4.8.3. Summary
4.9. Constraints Integration Strategy
5. Results
5.1. Technical Setup
5.1.1. Domain Models
 Sensors: We used two different range sensors for the individual verification of the $\mathcal{C}$-spaces and the simulation-based and experimental analyses. The imaging parameters (cf. Section 2.5) and kinematic relations of both sensors are given in Table A6. The parameters of the lighting source of the ZEISS Comet PRO AE sensor are conservatively estimated values, which guarantee that the frustum of the sensor lies completely within the field of view of the fringe projector. A more comprehensive description of the hardware is provided in Section 5.
 Object, features, and occlusion bodies: For verification purposes, we designed an academic object comprising three features and two occlusion objects with the characteristics given in Table A7 in the Appendix A.
 Robot: We used a Fanuc M20ia six-axis industrial robot and its respective kinematic model to compute the final viewpoints to position the sensor.
5.1.2. Software
5.2. Academic SimulationBased Analysis
5.2.1. Use Case Description
5.2.2. Results
5.2.3. Summary
5.3. Real Experimental Analysis
5.3.1. System Description
5.3.2. Vision Task Definition
5.3.3. Results
5.3.4. Summary
6. Conclusions
6.1. Summary
6.2. Limitations and Chances
6.3. Outlook
Supplementary Materials
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
$\mathcal{C}$-space  Feature-Based Constrained Space 
CSG  Constructive Solid Geometry 
$\mathcal{I}$-space  Frustum space 
RVS  Robot Vision System 
VGP  Viewpoint Generation Problem 
VPP  Viewpoint Planning Problem 
Appendix A. Tables
General Requirement  Description 

1. Generalization  The models and approaches used should be abstracted and generalized at the best possible level so that they can be used for different components of an RVS and can be applied to solve a broad range of vision tasks. 
2. Computational Efficiency  The methods and techniques used should strive towards a low level of computational complexity. Whenever possible, analytical and linear models should be preferred over complex techniques, such as stochastic and heuristic algorithms. Nevertheless, when considering offline scenarios, the trade-off between computing a good-enough solution and an acceptable computation time should be individually assessed. 
3. Determinism  Due to traceability and safety issues within industrial applications, deterministic approaches should be prioritized. 
4. Modularity and Scalability  The approaches and models should consider in general a modular structure and promote their scalability. 
5. Limited a priori Knowledge  The parameters required to implement the models and approaches should be easily accessible for the end-users. Neither in-depth optics nor robotics knowledge should be required. 
Notation  Index Description 

${{}_{b}{}^{r}x}_{d}^{n}$ 

Notes  The indices r and b just apply for pose vectors, frames, and transformations. 
Example  The index notation can be better understood by considering the following examples:

Symbol  Description 

General  
c  Viewpoint constraint 
$\tilde{C}$  Set of viewpoint constraints 
f  Feature 
F  Set of features 
${s}_{t}$  t-th imaging device of sensor s 
v  Viewpoint 
${\mathit{p}}_{s}^{}$  Sensor pose in $SE\left(3\right)$ 
Spatial Dimensions  
B  Frame 
${\mathit{t}}_{}^{}$  Translation vector in ${\mathbb{R}}^{3}$ 
${\mathit{r}}_{}^{}$  Orientation matrix in ${\mathbb{R}}^{3x3}$ 
$\mathit{V}$  Manifold vertex in ${\mathbb{R}}^{3}$ 
Topological Spaces  
${\mathcal{C}}_{}^{}$  $\mathcal{C}\text{}\mathrm{space}$ for a set of viewpoint constraints $\tilde{C}$ 
${\mathcal{C}}_{i}^{}$  i $\mathcal{C}\text{}\mathrm{space}$ of the viewpoint constraint ${c}_{i}$ 
${\mathcal{I}}_{s}^{}$  Frustum space 
Viewpoint Constraint  Description 

1. Frustum Space  The most restrictive and fundamental constraint is given by the imaging capabilities of the sensor. This constraint is fulfilled if at least the feature’s origin lies within the frustum space (cf. Section 2.5). 
2. Sensor Orientation  Due to sensor-specific limitations, it must be ensured that the incidence angle between the optical axis and the feature normal does not exceed a maximal permitted value; see Equation (5). 
3. Feature Geometry  This constraint can be considered an extension of the first viewpoint constraint and is fulfilled if all surface points of a feature can be acquired from a single viewpoint, i.e., all of them lie within the image space. 
4. Kinematic Error  In real applications, model uncertainties affecting the nominal sensor pose compromise a viewpoint’s validity. Hence, any factor affecting the overall kinematic chain of the RVS, e.g., kinematic alignment or the robot’s pose accuracy, must be considered (see Section 2.6). 
5. Sensor Accuracy  Acknowledging that sensor accuracy may vary within the sensor image space (see Section 2.5), a valid viewpoint must ensure that a feature is acquired with sufficient quality. 
6. Feature Occlusion  A viewpoint can be considered valid only if a free line of sight exists from the sensor to the feature. More specifically, it must be assured that no rigid bodies block the view between the sensor and the feature. 
7. Bistatic Sensor and Multi-Sensor  Recalling the bistatic nature of range sensors, all viewpoint constraints must hold for all lenses and active sources. Furthermore, this constraint is extended to multi-sensor RVSs comprising more than one range sensor. 
8. Robot Workspace  The workspace of the whole RVS is limited primarily by the robot’s workspace. Thus, a valid viewpoint exists only if the sensor pose lies within the robot workspace. 
9. Multi-Feature  In a multi-feature scenario, where more than one feature can be acquired from the same sensor pose, all viewpoint constraints of each feature must be satisfied by the same viewpoint. 
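Constraints 1 and 2 can be verified with elementary geometry. The following minimal sketch (illustrative only: the function names are hypothetical, and the symmetric pyramidal frustum is a simplification of the sensor model of Section 2.5) tests whether a feature origin lies inside a frustum spanned by the field-of-view angles and near/far working distances, and whether the incidence angle stays below a permitted maximum:

```python
import math

def in_frustum(p, theta_x_deg, psi_y_deg, z_near, z_far):
    """Check whether point p = (x, y, z), given in the sensor frame with the
    optical axis along +z, lies inside a pyramidal frustum defined by the
    horizontal/vertical field-of-view angles and the near/far planes."""
    x, y, z = p
    if not (z_near <= z <= z_far):
        return False
    # The lateral extent grows linearly with depth (half-angle tangents).
    half_w = z * math.tan(math.radians(theta_x_deg) / 2.0)
    half_h = z * math.tan(math.radians(psi_y_deg) / 2.0)
    return abs(x) <= half_w and abs(y) <= half_h

def incidence_ok(view_dir, feature_normal, max_angle_deg):
    """Check the incidence angle between the viewing direction and the
    feature normal against a maximal permitted value (constraint 2)."""
    dot = -sum(a * b for a, b in zip(view_dir, feature_normal))
    norm = math.hypot(*view_dir) * math.hypot(*feature_normal)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle <= max_angle_deg
```

For example, with the field-of-view angles of the first sensor in Table A6 ($\theta_s^x = 51.5^{\circ}$, $\psi_s^y = 35.5^{\circ}$) and working distances of 400–800 mm, a feature origin at (0, 0, 600) mm passes the frustum test, while one at (0, 0, 300) mm does not.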
k Vertex of ${\mathit{V}}_{\mathit{k}}^{{\mathcal{C}}_{3}^{}}$  Rotation around yaxis  Rotation around xaxis  

${{}^{\mathit{f}}\mathit{r}}_{\mathit{s}}^{}({\mathsf{\alpha}}_{\mathit{s}}^{\mathit{z}}={\mathsf{\gamma}}_{\mathit{s}}^{\mathit{x}}=0,{\mathsf{\beta}}_{\mathit{s}}^{\mathit{y}}\ne 0)$  ${{}^{\mathit{f}}\mathit{r}}_{\mathit{s}}^{}({\mathsf{\alpha}}_{\mathit{s}}^{\mathit{z}}={\mathsf{\beta}}_{\mathit{s}}^{\mathit{y}}=0,{\mathsf{\gamma}}_{\mathit{s}}^{\mathit{x}}\ne 0)$  
${\Delta}_{\mathit{k}}^{\mathit{x}}$  ${\Delta}_{\mathit{k}}^{\mathit{y}}$  ${\Delta}_{\mathit{k}}^{\mathit{z}}$  ${\Delta}_{\mathit{k}}^{\mathit{x}}$  ${\Delta}_{\mathit{k}}^{\mathit{y}}$  ${\Delta}_{\mathit{k}}^{\mathit{z}}$  
${\mathsf{\beta}}_{\mathit{s}}^{\mathit{y}}<\mathbf{0}$  ${\mathsf{\beta}}_{\mathit{s}}^{\mathit{y}}>\mathbf{0}$  ${\mathsf{\gamma}}_{\mathit{s}}^{\mathit{x}}<\mathbf{0}$  ${\mathsf{\gamma}}_{\mathit{s}}^{\mathit{x}}>\mathbf{0}$  
1  ${\lambda}^{x}$  ${\rho}^{x}$  $\frac{{l}_{f}}{2}$  ${\rho}^{z,y}$  $\frac{{l}_{f}}{2}$  ${\lambda}^{y}$  ${\rho}^{y}$  ${\rho}^{z,x}$ 
2  ${\rho}^{x}$  ${\lambda}^{x}$  $\frac{{l}_{f}}{2}$  $\frac{{l}_{f}}{2}$  ${\lambda}^{y}$  ${\rho}^{y}$  
3  ${\sigma}^{x}$  ${\rho}^{x}$  $\frac{{l}_{f}}{2}+{\mathsf{\varsigma}}^{x,y}$  $\frac{{l}_{f}}{2}+{\mathsf{\varsigma}}^{y,x}$  ${\rho}^{y}$  ${\sigma}^{y}$  
4  ${\rho}^{x}$  ${\sigma}^{x}$  $\frac{{l}_{f}}{2}+{\mathsf{\varsigma}}^{x,y}$  $\frac{{l}_{f}}{2}+{\mathsf{\varsigma}}^{y,x}$  ${\rho}^{y}$  ${\sigma}^{y}$  
5  ${\lambda}^{x}$  ${\rho}^{x}$  $\frac{{l}_{f}}{2}$  $\frac{{l}_{f}}{2}$  ${\rho}^{y}$  ${\lambda}^{y}$  
6  ${\rho}^{x}$  ${\lambda}^{x}$  $\frac{{l}_{f}}{2}$  $\frac{{l}_{f}}{2}$  ${\rho}^{y}$  ${\lambda}^{y}$  
7  ${\sigma}^{x}$  ${\rho}^{x}$  $\frac{{l}_{f}}{2}+{\mathsf{\varsigma}}^{x,y}$  $\frac{{l}_{f}}{2}+{\mathsf{\varsigma}}^{y,x}$  ${\sigma}^{y}$  ${\rho}^{y}$  
8  ${\rho}^{x}$  ${\sigma}^{x}$  $\frac{{l}_{f}}{2}+{\mathsf{\varsigma}}^{x,y}$  $\frac{{l}_{f}}{2}+{\mathsf{\varsigma}}^{y,x}$  ${\sigma}^{y}$  ${\rho}^{y}$ 
Range Sensor  1  2  
Manufacturer  Carl Zeiss Optotechnik GmbH, Neubeuern, Germany  Roboception, Munich, Germany  
Model  COMET Pro AE  rc_visard 65  
3D Acquisition Method  Digital Fringe Projection  Stereo Vision  
Imaging Device ${s}_{t}$  Monochrome camera: ${s}_{1}$  Blue-light LED fringe projector: ${s}_{2}$  Two monochrome cameras: ${s}_{3}$, ${s}_{4}$ 
Field of view  ${\theta}_{s}^{x}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}51.5\phantom{\rule{0.166667em}{0ex}}{}^{\circ}$, ${\psi}_{s}^{y}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}35.5\phantom{\rule{0.166667em}{0ex}}{}^{\circ}$  ${\theta}_{s}^{x}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}70.8\phantom{\rule{0.166667em}{0ex}}{}^{\circ}$, ${\psi}_{s}^{y}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}43.6\phantom{\rule{0.166667em}{0ex}}{}^{\circ}$  ${\theta}_{s}^{x}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}62.0\phantom{\rule{0.166667em}{0ex}}{}^{\circ}$, ${\psi}_{s}^{y}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}48.0\phantom{\rule{0.166667em}{0ex}}{}^{\circ}$ 
Measuring field at near, middle, and far working distances, relative to the imaging device’s lens  $\begin{array}{cc}\hfill @400\phantom{\rule{0.166667em}{0ex}}\mathrm{mm}:& (396\times 266)\phantom{\rule{0.166667em}{0ex}}{\mathrm{mm}}^{2}\hfill \\ \hfill @600\phantom{\rule{0.166667em}{0ex}}\mathrm{mm}:& (588\times 392)\phantom{\rule{0.166667em}{0ex}}{\mathrm{mm}}^{2}\hfill \\ \hfill @800\phantom{\rule{0.166667em}{0ex}}\mathrm{mm}:& (780\times 520)\phantom{\rule{0.166667em}{0ex}}{\mathrm{mm}}^{2}\hfill \end{array}$  $\begin{array}{c}@200\phantom{\rule{0.166667em}{0ex}}\mathrm{mm}:(284\times 160)\phantom{\rule{0.166667em}{0ex}}{\mathrm{mm}}^{2}\hfill \\ @600\phantom{\rule{0.166667em}{0ex}}\mathrm{mm}:(853\times 480)\phantom{\rule{0.166667em}{0ex}}{\mathrm{mm}}^{2}\hfill \\ @1000\phantom{\rule{0.166667em}{0ex}}\mathrm{mm}:(1422\times 800)\phantom{\rule{0.166667em}{0ex}}{\mathrm{mm}}^{2}\hfill \end{array}$  $\begin{array}{c}@200\phantom{\rule{0.166667em}{0ex}}\mathrm{mm}:(118\times 178)\phantom{\rule{0.166667em}{0ex}}{\mathrm{mm}}^{2}\hfill \\ @600\phantom{\rule{0.166667em}{0ex}}\mathrm{mm}:(706\times 534)\phantom{\rule{0.166667em}{0ex}}{\mathrm{mm}}^{2}\hfill \\ @1000\phantom{\rule{0.166667em}{0ex}}\mathrm{mm}:(1178\times 890)\phantom{\rule{0.166667em}{0ex}}{\mathrm{mm}}^{2}\hfill \end{array}$ 
Transformation between sensor lens and TCP ${{}_{{s}_{t}}{}^{TCP}\mathit{T}}_{}^{}$  ${{}_{{s}_{1}}{}^{TCP}\mathit{t}}_{}^{}:(0,0,602)\phantom{\rule{0.166667em}{0ex}}\mathrm{mm}$ ${{}_{{s}_{1}}{}^{TCP}\mathit{r}}_{}^{}:(0,0,0)\phantom{\rule{0.166667em}{0ex}}{}^{\circ}$  ${{}_{{s}_{2}}{}^{TCP}\mathit{t}}_{}^{}:(0,0,600)\phantom{\rule{0.166667em}{0ex}}\mathrm{mm}$ ${{}_{{s}_{2}}{}^{TCP}\mathit{r}}_{}^{}:(0,0,0)\phantom{\rule{0.166667em}{0ex}}{}^{\circ}$  ${{}_{{s}_{3,4}}{}^{TCP}\mathit{t}}_{}^{}:(0,0,600)\phantom{\rule{0.166667em}{0ex}}\mathrm{mm}$${{}_{{s}_{3,4}}{}^{TCP}\mathit{r}}_{}^{}:(0,0,0)\phantom{\rule{0.166667em}{0ex}}{}^{\circ}$ 
Transformation between imaging devices of each sensor ${{}_{{s}_{2}}{}^{{s}_{1}}\mathit{T}}_{}^{}$,${{}_{{s}_{4}}{}^{{s}_{3}}\mathit{T}}_{}^{}$  ${{}_{{s}_{2}}{}^{{s}_{1}}\mathit{t}}_{}^{}:(217.0,0,8.0)\phantom{\rule{0.166667em}{0ex}}\mathrm{mm}$ ${{}_{{s}_{2}}{}^{{s}_{1}}\mathit{r}}_{}^{}:(0,20.0,0)\phantom{\rule{0.166667em}{0ex}}{}^{\circ}$  ${{}_{{s}_{4}}{}^{{s}_{3}}\mathit{t}}_{}^{}:(65.0,0,0)\phantom{\rule{0.166667em}{0ex}}\mathrm{mm}$ ${{}_{{s}_{4}}{}^{{s}_{3}}\mathit{r}}_{}^{}:(0,0,0)\phantom{\rule{0.166667em}{0ex}}{}^{\circ}$  
Transformation between both sensors ${{}_{{s}_{3}}{}^{{s}_{1}}\mathit{T}}_{}^{}$  ${{}_{{s}_{3}}{}^{{s}_{1}}\mathit{t}}_{}^{}:(348.0,81.0,42.0)\phantom{\rule{0.166667em}{0ex}}\mathrm{mm}$, ${{}_{{s}_{3}}{}^{{s}_{1}}\mathit{r}}_{}^{}:(0.52,0.56,0.34)\phantom{\rule{0.166667em}{0ex}}{}^{\circ}$ 
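The calibration transformations above are used by chaining rigid transforms. A minimal sketch (helper names are illustrative, not the paper's implementation) that applies a transform built from the tabulated ${}_{{s}_{2}}{}^{{s}_{1}}\mathit{T}$ values, i.e., a translation of (217.0, 0, 8.0) mm and a 20° rotation about the y-axis:

```python
import math

def rot_y(deg):
    # Rotation matrix about the y-axis.
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def transform(R, t, p):
    # Apply the rigid transform (R, t) to point p: p' = R p + t.
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i]
                 for i in range(3))

# Express the origin of the projector frame (s2) in the camera frame (s1).
p_s1 = transform(rot_y(20.0), (217.0, 0.0, 8.0), (0.0, 0.0, 0.0))
```

Composing such transforms along the chain TCP → sensor lens → imaging device yields the pose of each imaging device for a given robot pose.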
Feature  ${f}_{0}$  ${f}_{1}$, ${f}_{1}^{*}$  ${f}_{2}$  ${f}_{3}$  ${\kappa}_{1}$  ${\kappa}_{2}$ 
Topology  Point  Slot  Circle  Half-sphere  Icosahedron  Octahedron 
Generalized Topology  —  Square  Square  Cube  —  — 
Dimensions in mm  ${l}_{{f}_{0}}=0$  ${l}_{{f}_{1}}=50$, ${h}_{{f}_{1}^{*}}=30$  ${l}_{{f}_{2}}=20$  ${l}_{{f}_{3}}=40$, ${h}_{{f}_{3}}=40$  edge length: $\approx 14.0$  edge length: $\approx 20.0$ 
Translation vector in object’s frame ${t}_{o}^{}={({x}_{o},{y}_{o},{z}_{o})}^{T}$ in mm  $\left(\begin{array}{c}0.0\\ 0.0\\ 0.0\end{array}\right)$  $\left(\begin{array}{c}0.0\\ 0.0\\ 0.0\end{array}\right)$  $\left(\begin{array}{c}75.0\\ 150.0\\ 20.0\end{array}\right)$  $\left(\begin{array}{c}120.0\\ 30.0\\ 0.0\end{array}\right)$  $\left(\begin{array}{c}67.5\\ 0.0\\ 240.0\end{array}\right)$  $\left(\begin{array}{c}117.5\\ 100.0\\ 445.0\end{array}\right)$ 
Rotation in Euler Angles in object’s frame ${r}_{o}^{}({\gamma}_{s}^{x},{\beta}_{s}^{y},{\alpha}_{s}^{z})$ in ${}^{\circ}$  $(0,0,0)$  $(0,0,0)$  $(0,20,0)$  $(0,0,0)$  $(0,0,0)$  $(0,0,0)$ 
Viewpoint Constraint  Description  Approach 

1  Two sensors (${s}^{1}$, ${s}^{2}$) with two imaging devices each: $\{{s}_{1}^{1},{s}_{2}^{1},{s}_{3}^{2},{s}_{4}^{2}\}\in \tilde{S}$. The imaging parameters of all devices are specified in Table A6.  Linear algebra and geometry 
2  Relative orientation to the object’s frame: ${{}^{o}\mathit{r}}_{s}^{}({\alpha}_{s}^{z}={\gamma}_{s}^{x}=0,{\beta}_{s}^{y}=6.64{}^{\circ})$.  Linear algebra and geometry 
3  A planar rectangular object with three different features $\{{f}_{1},{f}_{2},{f}_{3}\}\in F$ (see Table A7).  Linear algebra, geometry, and trigonometry 
4–5  The workspace of the second imaging device is restricted along the z-axis to working distances ${z}_{{s}_{2}}>450\phantom{\rule{0.166667em}{0ex}}\mathrm{mm}$.  Linear algebra and geometry 
6  Two objects with the form of an icosahedron (${\kappa}_{1}$) and an octahedron (${\kappa}_{2}$) occlude the visibility of the features.  Linear algebra, ray casting, and CSG Boolean Operations 
7  All constraints must be satisfied by all four imaging devices simultaneously.  Linear algebra and CSG Boolean Operations 
8  Both sensors are attached to a six-axis industrial robot. The robot workspace is a half-sphere with a working distance of $1000\phantom{\rule{0.166667em}{0ex}}\mathrm{mm}$–$1800\phantom{\rule{0.166667em}{0ex}}\mathrm{mm}$.  CSG Boolean Operation 
9  All features from the set G must be captured simultaneously.  CSG Boolean Operation 
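Several of the approaches above (constraints 7–9) reduce to Boolean intersections of the per-constraint manifolds; the implementation relies on mesh Boolean operations (e.g., via trimesh). As a simplistic, hypothetical stand-in that conveys the integration strategy without a mesh library, consider intersecting axis-aligned boxes:

```python
from functools import reduce

def intersect_aabb(a, b):
    """Intersect two axis-aligned boxes, each given as a
    (min_corner, max_corner) pair of 3-tuples.
    Returns None when the boxes are disjoint (no valid viewpoint region)."""
    lo = tuple(max(al, bl) for al, bl in zip(a[0], b[0]))
    hi = tuple(min(ah, bh) for ah, bh in zip(a[1], b[1]))
    return None if any(l > h for l, h in zip(lo, hi)) else (lo, hi)

def integrate_constraints(spaces):
    # Fold all per-constraint spaces into one combined C-space;
    # an empty intersection propagates as None.
    return reduce(lambda acc, s: intersect_aabb(acc, s) if acc else None,
                  spaces)
```

An empty result signals that no single viewpoint satisfies all constraints, in which case the feature set has to be partitioned and planned over multiple viewpoints.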
Appendix B. Algorithms
Algorithm A1 Extreme Viewpoint Characterization of the Constrained Space ${\mathcal{C}}_{1}^{}$. 

Algorithm A2 Characterization of the occlusion space ${\mathcal{C}}_{6}^{occl}$. 

Algorithm A3 Characterization of $\mathcal{C}$-space ${\mathcal{C}}^{{\tilde{S}}_{1}}$ to integrate viewpoint constraints of a second imaging device ${s}_{2}$. 

Algorithm A4 Integration of $\mathcal{C}$-spaces for multiple features. 

Algorithm A5 Strategy for the integration of viewpoint constraints. 

Algorithm A6 Computation of View Rays for Occlusion Space. 
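Algorithms A2 and A6 build the occlusion space by casting view rays against the occluder meshes. A minimal, self-contained sketch of the underlying ray–triangle test (Möller–Trumbore; helper names are illustrative and this is not the paper's implementation, which uses Embree-backed ray casting on triangle meshes):

```python
def _sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def _dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def ray_triangle(orig, direction, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore ray–triangle intersection.
    Returns the ray parameter t of the hit, or None if the ray misses."""
    e1, e2 = _sub(v1, v0), _sub(v2, v0)
    p = _cross(direction, e2)
    det = _dot(e1, p)
    if abs(det) < eps:            # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = _sub(orig, v0)
    u = _dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = _cross(s, e1)
    v = _dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = _dot(e2, q) * inv
    return t if t > eps else None

def line_of_sight_free(sensor, feature, triangles):
    # Free line of sight (constraint 6): no occluder triangle may be hit
    # strictly between the sensor (t = 0) and the feature (t = 1).
    d = _sub(feature, sensor)
    hits = (ray_triangle(sensor, d, *tri) for tri in triangles)
    return all(t is None or t >= 1.0 for t in hits)
```

Sampling such rays over the frustum and subtracting the occluded region from the candidate space yields the occlusion space ${\mathcal{C}}_{6}^{occl}$ via a CSG difference.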

Appendix C. Figures
References
 Kragic, D.; Daniilidis, K. 3D Vision for Navigation and Grasping. In Springer Handbook of Robotics; Siciliano, B., Khatib, O., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 811–824.
 Peuzin-Jubert, M.; Polette, A.; Nozais, D.; Mari, J.-L.; Pernot, J.-P. Survey on the View Planning Problem for Reverse Engineering and Automated Control Applications. Comput.-Aided Des. 2021, 141, 103094.
 Gospodnetić, P.; Mosbach, D.; Rauhut, M.; Hagen, H. Viewpoint placement for inspection planning. Mach. Vis. Appl. 2022, 33, 2.
 Chen, S.; Li, Y.; Kwok, N.M. Active vision in robotic systems: A survey of recent developments. Int. J. Robot. Res. 2011, 30, 1343–1377.
 Tarabanis, K.A.; Allen, P.K.; Tsai, R.Y. A survey of sensor planning in computer vision. IEEE Trans. Robot. Autom. 1995, 11, 86–104.
 Tan, C.S.; Mohd-Mokhtar, R.; Arshad, M.R. A Comprehensive Review of Coverage Path Planning in Robotics Using Classical and Heuristic Algorithms. IEEE Access 2021, 9, 119310–119342.
 Tarabanis, K.A.; Tsai, R.Y.; Allen, P.K. The MVP sensor planning system for robotic vision tasks. IEEE Trans. Robot. Autom. 1995, 11, 72–85.
 Scott, W.R.; Roth, G.; Rivest, J.-F. View planning for automated three-dimensional object reconstruction and inspection. ACM Comput. Surv. (CSUR) 2003, 35, 64–96.
 Mavrinac, A.; Chen, X. Modeling Coverage in Camera Networks: A Survey. Int. J. Comput. Vis. 2013, 101, 205–226.
 Kritter, J.; Brévilliers, M.; Lepagnot, J.; Idoumghar, L. On the optimal placement of cameras for surveillance and the underlying set cover problem. Appl. Soft Comput. 2019, 74, 133–153.
 Scott, W.R. Performance-Oriented View Planning for Automated Object Reconstruction. Ph.D. Thesis, University of Ottawa, Ottawa, ON, Canada, 2002.
 Cowan, C.K.; Kovesi, P.D. Automatic sensor placement from vision task requirements. IEEE Trans. Pattern Anal. Mach. Intell. 1988, 10, 407–416.
 Cowan, C.K.; Bergman, A. Determining the camera and light source location for a visual task. In Proceedings of the 1989 International Conference on Robotics and Automation, Scottsdale, AZ, USA, 14–19 May 1989; IEEE Computer Society Press: Washington, DC, USA, 1989; pp. 509–514.
 Tarabanis, K.; Tsai, R.Y. Computing occlusion-free viewpoints. In Proceedings of the 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Champaign, IL, USA, 15–18 June 1992; IEEE Computer Society Press: Washington, DC, USA, 1992; pp. 802–807.
 Tarabanis, K.; Tsai, R.Y.; Kaul, A. Computing occlusion-free viewpoints. IEEE Trans. Pattern Anal. Mach. Intell. 1996, 18, 279–292.
 Abrams, S.; Allen, P.K.; Tarabanis, K. Computing Camera Viewpoints in an Active Robot Work Cell. Int. J. Robot. Res. 1999, 18, 267–285.
 Reed, M. Solid Model Acquisition from Range Imagery. Ph.D. Thesis, Columbia University, New York, NY, USA, 1998.
 Reed, M.K.; Allen, P.K. Constraint-based sensor planning for scene modeling. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1460–1467.
 Tarbox, G.H.; Gottschlich, S.N. IVIS: An integrated volumetric inspection system. Comput. Vis. Image Underst. 1994, 61, 430–444.
 Tarbox, G.H.; Gottschlich, S.N. Planning for complete sensor coverage in inspection. Comput. Vis. Image Underst. 1995, 61, 84–111.
 Scott, W.R. Model-based view planning. Mach. Vis. Appl. 2009, 20, 47–69.
 Gronle, M.; Osten, W. View and sensor planning for multi-sensor surface inspection. Surf. Topogr. Metrol. Prop. 2016, 4, 024009.
 Jing, W.; Polden, J.; Goh, C.F.; Rajaraman, M.; Lin, W.; Shimada, K. Sampling-based coverage motion planning for industrial inspection application with redundant robotic system. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 5211–5218.
 Mosbach, D.; Gospodnetić, P.; Rauhut, M.; Hamann, B.; Hagen, H. Feature-Driven Viewpoint Placement for Model-Based Surface Inspection. Mach. Vis. Appl. 2021, 32, 8.
 Trucco, E.; Umasuthan, M.; Wallace, A.M.; Roberto, V. Model-based planning of optimal sensor placements for inspection. IEEE Trans. Robot. Autom. 1997, 13, 182–194.
 Pito, R. A solution to the next best view problem for automated surface acquisition. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 1016–1030.
 Stößel, D.; Hanheide, M.; Sagerer, G.; Krüger, L.; Ellenrieder, M. Feature and viewpoint selection for industrial car assembly. In Proceedings of the Joint Pattern Recognition Symposium, Tübingen, Germany, 30 August–1 September 2004; Springer: Berlin/Heidelberg, Germany; pp. 528–535.
 Ellenrieder, M.M.; Krüger, L.; Stößel, D.; Hanheide, M. A versatile model-based visibility measure for geometric primitives. In Proceedings of the Scandinavian Conference on Image Analysis, Joensuu, Finland, 19–22 June 2005; pp. 669–678.
 Raffaeli, R.; Mengoni, M.; Germani, M.; Mandorli, F. Offline view planning for the inspection of mechanical parts. Int. J. Interact. Des. Manuf. (IJIDeM) 2013, 7, 1–12.
 Koutecký, T.; Paloušek, D.; Brandejs, J. Sensor planning system for fringe projection scanning of sheet metal parts. Measurement 2016, 94, 60–70.
 Lee, K.H.; Park, H.P. Automated inspection planning of free-form shape parts by laser scanning. Robot. Comput.-Integr. Manuf. 2000, 16, 201–210.
 Derigent, W.; Chapotot, E.; Ris, G.; Remy, S.; Bernard, A. 3D Digitizing Strategy Planning Approach Based on a CAD Model. J. Comput. Inf. Sci. Eng. 2006, 7, 10–19.
 Tekouo Moutchiho, W.B. A New Programming Approach for Robot-Based Flexible Inspection Systems. Ph.D. Thesis, Technical University of Munich, Munich, Germany, 2012.
 Park, J.; Bhat, P.C.; Kak, A.C. A Lookup Table Based Approach for Solving the Camera Selection Problem in Large Camera Networks. In Workshop on Distributed Smart Cameras in Conjunction with ACM SenSys; Association for Computing Machinery: New York, NY, USA, 2006; pp. 72–76.
 González-Banos, H. A randomized art-gallery algorithm for sensor placement. In Proceedings of the Seventeenth Annual Symposium on Computational Geometry (SCG ’01); Souvaine, D.L., Ed.; Association for Computing Machinery: New York, NY, USA, 2001; pp. 232–240.
 Chen, S.Y.; Li, Y.F. Automatic sensor placement for model-based robot vision. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2004, 34, 393–408.
 Erdem, U.M.; Sclaroff, S. Automated camera layout to satisfy task-specific and floor plan-specific coverage requirements. Comput. Vis. Image Underst. 2006, 103, 156–169.
 Mavrinac, A.; Chen, X.; Alarcon-Herrera, J.L. Semiautomatic Model-Based View Planning for Active Triangulation 3D Inspection Systems. IEEE/ASME Trans. Mechatron. 2015, 20, 799–811.
 Glorieux, E.; Franciosa, P.; Ceglarek, D. Coverage path planning with targetted viewpoint sampling for robotic free-form surface inspection. Robot. Comput.-Integr. Manuf. 2020, 61, 101843.
 Chen, S.Y.; Li, Y.F. Vision sensor planning for 3D model acquisition. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2005, 35, 894–904.
 Vasquez-Gomez, J.I.; Sucar, L.E.; Murrieta-Cid, R.; Lopez-Damian, E. Volumetric Next-best-view Planning for 3D Object Reconstruction with Positioning Error. Int. J. Adv. Robot. Syst. 2014, 11, 159.
 Kriegel, S.; Rink, C.; Bodenmüller, T.; Suppa, M. Efficient next-best-scan planning for autonomous 3D surface reconstruction of unknown objects. J. Real-Time Image Process. 2015, 10, 611–631.
 Lauri, M.; Pajarinen, J.; Peters, J.; Frintrop, S. Multi-Sensor Next-Best-View Planning as Matroid-Constrained Submodular Maximization. IEEE Robot. Autom. Lett. 2020, 5, 5323–5330.
 Waldron, K.J.; Schmiedeler, J. Kinematics. In Springer Handbook of Robotics; Siciliano, B., Khatib, O., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 11–36.
 Beyerer, J.; Puente León, F.; Frese, C. Machine Vision; Springer: Berlin/Heidelberg, Germany, 2016.
 Biagio, M.S.; Beltrán-González, C.; Giunta, S.; Del Bue, A.; Murino, V. Automatic inspection of aeronautic components. Mach. Vis. Appl. 2020, 28, 591–605.
 Bertagnolli, F. Robotergestützte Automatische Digitalisierung von Werkstückgeometrien Mittels Optischer Streifenprojektion; Messtechnik und Sensorik; Shaker: Aachen, Germany, 2006.
 Raffaeli, R.; Mengoni, M.; Germani, M. Context Dependent Automatic View Planning: The Inspection of Mechanical Components. Comput.-Aided Des. Appl. 2013, 10, 111–127.
 Beasley, J.; Chu, P. A genetic algorithm for the set covering problem. Eur. J. Oper. Res. 1996, 94, 392–404.
 Mittal, A.; Davis, L.S. A General Method for Sensor Planning in Multi-Sensor Systems: Extension to Random Occlusion. Int. J. Comput. Vis. 2007, 76, 31–52.
 Kaba, M.D.; Uzunbas, M.G.; Lim, S.N. A Reinforcement Learning Approach to the View Planning Problem. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5094–5102.
 Lozano-Pérez, T. Spatial Planning: A Configuration Space Approach. In Autonomous Robot Vehicles; Cox, I.J., Wilfong, G.T., Eds.; Springer: New York, NY, USA, 1990; pp. 259–271.
 Latombe, J.-C. Robot Motion Planning; The Springer International Series in Engineering and Computer Science, Robotics; Springer: Boston, MA, USA, 1991; Volume 124.
 LaValle, S.M. Planning Algorithms; Cambridge University Press: Cambridge, UK, 2006.
 Ghallab, M.; Nau, D.S.; Traverso, P. Automated Planning; Morgan Kaufmann; Elsevier Science: San Francisco, CA, USA, 2004.
 Frühwirth, T.; Abdennadher, S. Essentials of Constraint Programming; Cognitive Technologies; Springer: Berlin, Germany; London, UK, 2011.
 Trimesh. Trimesh GitHub Repository. Available online: https://github.com/mikedh/trimesh (accessed on 16 July 2023).
 Roth, S.D. Ray casting for modeling solids. Comput. Graph. Image Process. 1982, 18, 109–144.
 Glassner, A.S. An Introduction to Ray Tracing; Academic: London, UK, 1989.
 Edelsbrunner, H.; Kirkpatrick, D.; Seidel, R. On the shape of a set of points in the plane. IEEE Trans. Inf. Theory 1983, 29, 551–559.
 Kazhdan, M.; Bolitho, M.; Hoppe, H. Poisson Surface Reconstruction. In Proceedings of the Fourth Eurographics Symposium on Geometry Processing, Cagliari, Italy, 26–28 June 2006; Eurographics Association: Goslar, Germany, 2006; pp. 61–70.
 Bauer, P.; Heckler, L.; Worack, M.; Magaña, A.; Reinhart, G. Registration strategy of point clouds based on region-specific projections and virtual structures for robot-based inspection systems. Measurement 2021, 185, 109963.
 Quigley, M.; Gerkey, B.; Conley, K.; Faust, J.; Foote, T.; Leibs, J.; Berger, E.; Wheeler, R.; Ng, A. ROS: An open-source Robot Operating System. In Proceedings of the ICRA Workshop on Open Source Software; IEEE: Piscataway, NJ, USA, 2009; Volume 3, p. 5.
 Magaña, A.; Bauer, P.; Reinhart, G. Concept of a learning knowledge-based system for programming industrial robots. Procedia CIRP 2019, 79, 626–631.
 Magaña, A.; Gebel, S.; Bauer, P.; Reinhart, G. Knowledge-Based Service-Oriented System for the Automated Programming of Robot-Based Inspection Systems. In Proceedings of the 2020 25th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Vienna, Austria, 8–11 September 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1511–1518.
 Zhou, Q.; Grinspun, E.; Zorin, D.; Jacobson, A. Mesh arrangements for solid geometry. ACM Trans. Graph. 2016, 35, 1–15.
 Wald, I.; Woop, S.; Benthin, C.; Johnson, G.S.; Ernst, M. Embree. ACM Trans. Graph. 2014, 33, 1–8.
 Unity Technologies. Unity. Available online: https://unity.com (accessed on 16 July 2023).
 Bischoff, M. ROS#. Available online: https://github.com/MartinBischoff/rossharp (accessed on 16 July 2023).
 Magaña, A.; Wu, H.; Bauer, P.; Reinhart, G. PoseNetwork: Pipelin