Open Access

*Algorithms*
**2008**,
*1*(2),
43-51;
https://doi.org/10.3390/a1020043

Article

A PTAS For The k-Consensus Structures Problem Under Squared Euclidean Distance

^{1} David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, Canada N2L 3G1

^{2} Department of Computer Science and Communication Engineering, Kyushu University, Fukuoka 819-0395, Japan

^{3} Department of Mathematics, National University of Singapore, Singapore 117543

^{*} Author to whom correspondence should be addressed.

Received: 5 September 2008; in revised form: 1 October 2008 / Accepted: 9 October 2008 / Published: 9 October 2008

## Abstract

In this paper we consider a basic clustering problem with applications in bioinformatics. A structural fragment is a sequence of ℓ points in 3D space, where ℓ is a fixed natural number. Two structural fragments $f_1$ and $f_2$ are equivalent if and only if $f_1 = f_2 \cdot R + \tau$ for some rotation R and translation τ. We take the distance between two structural fragments to be the sum of the squared Euclidean distances between all corresponding points of the fragments. Given a set of n structural fragments, we consider the problem of finding k (or fewer) structural fragments $g_1, g_2, \dots, g_k$ minimizing the sum of the distances from each of $f_1, f_2, \dots, f_n$ to its nearest structural fragment among $g_1, \dots, g_k$. We show a polynomial-time approximation scheme (PTAS) for the problem through a simple sampling strategy.

Keywords: clustering 3D point sequences; squared Euclidean distance; algorithm; polynomial-time approximation scheme

## 1. Introduction

In this paper we consider the problem of clustering similar sequences of 3D points. Two such sequences are considered the same if they are equivalent under rotation and translation. The scenario we consider is as follows. Suppose an original sequence of points gave rise to a few variations of itself, through slight changes in some or all of its points. Given these variations, we are to reconstruct the original sequence. A likely candidate for the original sequence is one that is "nearest", in terms of some distance measure, to the variations.

A more complicated scenario involves k original sequences of the same length. Formally, we formulate the problem as follows. Given n sequences of points $f_1, f_2, \dots, f_n$, we are to find a set of k sequences $g_1, \dots, g_k$ such that the sum of distances

$$\begin{array}{c}\hfill \sum _{1\le i\le n}\underset{1\le j\le k}{min}\phantom{\rule{4pt}{0ex}}dist({f}_{i},{g}_{j})\end{array}$$

is minimized. In this paper we consider the case where $dist$ is the minimum sum of squared Euclidean distances between corresponding points of the two sequences $f_i$ and $g_j$, under all possible rigid transformations of the sequences. A cost function in the form of the squared Euclidean distance is used in many techniques for clustering 3D points [1]. Since our clustering problem is quite different from those previously studied, it calls for a new technique. (The "square" in the distance measure fulfills a condition needed by the method in this paper. The method does not work, for example, for the root mean squared Euclidean distance. On the other hand, it easily adapts to other distance measures that fulfill the required condition.)

Such a problem has potential use in clustering protein structures. A protein structure is typically given as a sequence of points in 3D space, and for various reasons, there are typically minor variations in their measured structures. The problem can be considered a model of the situation where we have a set of measurements of a few protein structures, and are to reconstruct the original structures.

In this paper, we show that there is a polynomial-time approximation scheme (PTAS) for the problem, through a sampling strategy. More precisely, we show that an optimal solution obtained by sampling smaller subsets of the input suffices to give us an approximate solution, and the approximation ratio improves as we increase the size of the subsets we sample.

## 2. Preliminaries

Throughout this paper we let ℓ be a fixed non-zero natural number. A structural fragment is a sequence of ℓ 3D points. The mean square distance ($MS$) between two structural fragments $f = (f[1], \dots, f[\ell])$ and $g = (g[1], \dots, g[\ell])$ is defined to be

$$\begin{array}{c}\hfill MS(f,g)=\underset{R\in \mathcal{R},\tau \in \mathcal{T}}{min}\sum _{i=1}^{\ell}{\parallel f\left[i\right]-(R\cdot g\left[i\right]+\tau )\parallel}^{2}\end{array}$$

where $\mathcal{R}$ is the set of all rotation matrices, $\mathcal{T}$ the set of all translation vectors, and $\parallel x-y\parallel$ is the Euclidean distance between $x,y\in {\mathbb{R}}^{3}$.

The root of the $MS$ measure, $RMS(f,g)=\sqrt{MS(f,g)}$, is a measure that has been extensively studied. Note that the $R\in \mathcal{R}$, $\tau \in \mathcal{T}$ that minimize ${\sum}_{i=1}^{\ell}{\parallel f\left[i\right]-(R\cdot g\left[i\right]+\tau )\parallel}^{2}$ to give $MS(f,g)$ also give $RMS(f,g)$, and vice versa. Since, given any f and g, there are closed-form equations [2,3] for finding the R and τ that give $RMS(f,g)$, $MS(f,g)$ can be computed efficiently for any f and g.
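For illustration, the closed-form computation can be carried out by the standard SVD-based superposition that underlies [2,3]. The following is a minimal Python sketch (ours; the function name `ms` is our choice, not from the paper):

```python
import numpy as np

def ms(f, g):
    """MS(f, g): minimal sum of squared distances between corresponding
    points of fragments f and g (each an l x 3 array), over all rotations
    and translations, via SVD-based optimal superposition."""
    # The optimal translation superimposes the centroids.
    f = f - f.mean(axis=0)
    g = g - g.mean(axis=0)
    # The optimal rotation R maximizes tr(R H), where H = sum_i g_i f_i^T.
    H = g.T @ f
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    # Residual after optimally rotating g onto f.
    return float(np.sum((f - g @ R.T) ** 2))
```

On fragments that differ only by a rigid motion, `ms` returns (numerically) zero.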

Furthermore, it is known that to minimize ${\sum}_{i=1}^{\ell}{\parallel f\left[i\right]-(R\cdot g\left[i\right]+\tau )\parallel}^{2}$, the centroids of f and g must coincide [2]. Due to this, without loss of generality we assume that all structural fragments have their centroids at the origin. Such transformations can be done in $O\left(n\ell \right)$ time. After these transformations, in computing $MS(f,g)$, only the parameter $R\in \mathcal{R}$ needs to be considered, that is,

$$\begin{array}{c}\hfill MS(f,g)=\underset{R\in \mathcal{R}}{min}\sum _{i=1}^{\ell}{\parallel f\left[i\right]-R\cdot g\left[i\right]\parallel}^{2}\end{array}$$

Suppose that, given a set of n structural fragments $f_1, f_2, \dots, f_n$, we are to find k structural fragments $g_1, \dots, g_k$ such that each structural fragment $f_i$ is "near", in terms of $MS$, to at least one of the structural fragments in $g_1, \dots, g_k$. We formulate this problem as follows:

k-Consensus Structural Fragments Problem Under $MS$

Input: n structural fragments $f_1, \dots, f_n$, and a non-zero natural number $k < n$.

Output: k structural fragments $g_1, \dots, g_k$, minimizing the cost ${\sum}_{i=1}^{n}{min}_{1\le j\le k}MS({f}_{i},{g}_{j})$.

In this paper we will demonstrate that there is a PTAS for the problem.

We use the following notation: the cardinality of a set A is written $\left|A\right|$. For a set A and non-zero natural number n, ${A}^{n}$ denotes the set of all length-n sequences of elements of A. Where the elements of a set A are indexed, say $A=\{{f}_{1},{f}_{2},\dots ,{f}_{n}\}$, ${A}^{m!}$ denotes the set of all length-m sequences ${f}_{{i}_{1}},{f}_{{i}_{2}},\dots ,{f}_{{i}_{m}}$ where $1\le {i}_{1}\le {i}_{2}\le \dots \le {i}_{m}\le n$. For a sequence S, $S\left(i\right)$ denotes the i-th element of S, and $\left|S\right|$ denotes its length.
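For concreteness, the sequences in $A^{m!}$ (non-decreasing index sequences, i.e. multisets of size m) correspond to Python's `itertools.combinations_with_replacement`, while $A^n$ corresponds to `itertools.product`. A small illustrative sketch (ours, not from the paper):

```python
from itertools import combinations_with_replacement, product

A = ["f1", "f2", "f3"]  # an indexed set A = {f1, f2, f3}

# A^2: all length-2 sequences of elements of A -> |A|^2 = 9 sequences.
assert len(list(product(A, repeat=2))) == 9

# A^{2!}: length-2 sequences with non-decreasing indices, i.e. multisets
# of size 2 from a 3-element set: C(3+2-1, 2) = 6 sequences.
seqs = list(combinations_with_replacement(A, 2))
assert len(seqs) == 6
assert ("f1", "f3") in seqs and ("f3", "f1") not in seqs
```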

## 3. PTAS for the k-Consensus Structural Fragments

The following lemma, from [4], is central to the method.

**Lemma 1** ([4]) Let ${a}_{1},{a}_{2},\dots ,{a}_{n}$ be a sequence of real numbers and let $r\in \mathbb{N}$, $1\le r\le n$. Then the following equation holds:

$$\begin{array}{c}\hfill \frac{1}{{n}^{r}}\sum _{1\le {i}_{1},{i}_{2},\dots ,{i}_{r}\le n}\sum _{i=1}^{n}{(\frac{{a}_{{i}_{1}}+{a}_{{i}_{2}}+\cdots +{a}_{{i}_{r}}}{r}-{a}_{i})}^{2}=\frac{r+1}{r}\sum _{i=1}^{n}{(\frac{{a}_{1}+{a}_{2}+\cdots +{a}_{n}}{n}-{a}_{i})}^{2}\end{array}$$
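The identity of Lemma 1 can be checked numerically. A small sketch (ours; the variable names are our choices):

```python
import itertools
import random

random.seed(0)
n, r = 5, 2
a = [random.uniform(-1, 1) for _ in range(n)]

# Left-hand side: average, over all n^r index tuples, of the cost of
# using the r-sample mean in place of the full mean.
lhs = sum(
    sum((sum(a[t] for t in idx) / r - a[i]) ** 2 for i in range(n))
    for idx in itertools.product(range(n), repeat=r)
) / n**r

# Right-hand side: (r+1)/r times the cost of the full mean.
mean = sum(a) / n
rhs = (r + 1) / r * sum((mean - ai) ** 2 for ai in a)

assert abs(lhs - rhs) < 1e-12
```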

Let ${P}_{1}=({x}_{1},{y}_{1},{z}_{1}),{P}_{2}=({x}_{2},{y}_{2},{z}_{2}),\dots ,{P}_{n}=({x}_{n},{y}_{n},{z}_{n})$ be a sequence of 3D points. Applying Lemma 1 to each coordinate gives:

$$\begin{array}{ccc}& & \frac{1}{{n}^{r}}\sum _{1\le {i}_{1},{i}_{2},\dots ,{i}_{r}\le n}\sum _{i=1}^{n}{\parallel \frac{{P}_{{i}_{1}}+{P}_{{i}_{2}}+\cdots +{P}_{{i}_{r}}}{r}-{P}_{i}\parallel}^{2}\hfill \end{array}$$

$$\begin{array}{ccc}& =& \frac{1}{{n}^{r}}\sum _{1\le {i}_{1},\dots ,{i}_{r}\le n}\sum _{i=1}^{n}{(\frac{{x}_{{i}_{1}}+\dots +{x}_{{i}_{r}}}{r}-{x}_{i})}^{2}+{(\frac{{y}_{{i}_{1}}+\dots +{y}_{{i}_{r}}}{r}-{y}_{i})}^{2}+{(\frac{{z}_{{i}_{1}}+\dots +{z}_{{i}_{r}}}{r}-{z}_{i})}^{2}\hfill \end{array}$$

$$\begin{array}{ccc}& =& \frac{r+1}{r}\sum _{i=1}^{n}{(\frac{{x}_{1}+\dots +{x}_{n}}{n}-{x}_{i})}^{2}+{(\frac{{y}_{1}+\dots +{y}_{n}}{n}-{y}_{i})}^{2}+{(\frac{{z}_{1}+\dots +{z}_{n}}{n}-{z}_{i})}^{2}\hfill \end{array}$$

$$\begin{array}{ccc}& =& \frac{r+1}{r}\sum _{i=1}^{n}{\parallel \frac{{P}_{1}+{P}_{2}+\cdots +{P}_{n}}{n}-{P}_{i}\parallel}^{2}\hfill \end{array}$$

One can similarly extend the equation to structural fragments. Let ${f}_{1},\dots ,{f}_{n}$ be n structural fragments; the equation becomes:

$$\begin{array}{c}\hfill \frac{1}{{n}^{r}}\sum _{1\le {i}_{1},\dots ,{i}_{r}\le n}\sum _{i=1}^{n}{\parallel \frac{{f}_{{i}_{1}}+\cdots +{f}_{{i}_{r}}}{r}-{f}_{i}\parallel}^{2}=\frac{r+1}{r}\sum _{i=1}^{n}\parallel \frac{{f}_{1}+\cdots +{f}_{n}}{n}-{f}_{i}{\parallel}^{2}\end{array}$$

Since the left-hand side of the equation is an average over all $n^r$ choices of indices, at least one choice attains a value no greater than the average. That is, there exists a sequence of r structural fragments ${f}_{{i}_{1}},{f}_{{i}_{2}},\dots ,{f}_{{i}_{r}}$ such that

$$\begin{array}{ccc}\hfill \sum _{i=1}^{n}{\parallel \frac{{f}_{{i}_{1}}+\cdots +{f}_{{i}_{r}}}{r}-{f}_{i}\parallel}^{2}& \le & \frac{r+1}{r}\sum _{i=1}^{n}\parallel \frac{{f}_{1}+\cdots +{f}_{n}}{n}-{f}_{i}{\parallel}^{2}\hfill \end{array}$$

Our strategy uses this fact, in essentially the same way as in [4], to approximate the optimal solution of the k-consensus structural fragments problem: we exhaustively sample every combination of k sequences, each of r elements from the space ${\mathcal{R}}^{\prime}\times \{{f}_{1},\dots ,{f}_{n}\}$, where ${f}_{1},\dots ,{f}_{n}$ is the input and ${\mathcal{R}}^{\prime}$ is a fixed selected set of rotations, which we discuss next.

#### 3.1. Discretized Rotation Space

Any rotation can be represented by a normalized vector u and a rotation angle θ, where u is the axis about which an object is rotated by θ. If we apply $(u,\theta )$ to a vector v, we obtain the vector $\widehat{v}$:

$$\begin{array}{c}\hfill \widehat{v}=u(v\cdot u)+(v-u(v\cdot u))cos\theta +(u\times v)sin\theta \end{array}$$

where · denotes the dot product and × the cross product.

By this equation, one can verify that a change of ϵ in u results in a displacement of at most ${\alpha}_{1}\epsilon \left|v\right|$ in $\widehat{v}$ for some computable ${\alpha}_{1}\in \mathbb{R}$, and a change of ϵ in θ results in a displacement of at most ${\alpha}_{2}\epsilon \left|v\right|$ in $\widehat{v}$ for some computable ${\alpha}_{2}\in \mathbb{R}$. Now any rotation about an axis through the origin can be written in the form $({\theta}_{1},{\theta}_{2},{\theta}_{3})$, where ${\theta}_{1},{\theta}_{2},{\theta}_{3}\in [0,2\pi )$ are respectively rotations about the $x,y,z$ axes. Similarly, changes of ϵ in ${\theta}_{1}$, ${\theta}_{2}$ and ${\theta}_{3}$ result in a displacement of at most $\alpha \epsilon \left|v\right|$, for some computable $\alpha \in \mathbb{R}$.

We discretize the values that each ${\theta}_{i}$, $1\le i\le 3$, may take within the range $[0,2\pi )$ into a series of angles at angular spacing ϑ. There are hence $O(1/\vartheta )$ such values for each ${\theta}_{i}$. Let ${\mathcal{R}}^{\prime}$ denote the set of all possible discretized rotations $({\theta}_{1},{\theta}_{2},{\theta}_{3})$. Note that $|{\mathcal{R}}^{\prime}|$ is of order $O(1/{\vartheta}^{3})$.
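As an illustration (ours, not the authors' code), ${\mathcal{R}}^{\prime}$ can be materialized as rotation matrices on a grid of axis angles:

```python
import numpy as np
from itertools import product

def rot_x(t):  # rotation about the x axis by angle t
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(t):  # rotation about the y axis
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(t):  # rotation about the z axis
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def discretized_rotations(spacing):
    """R': rotations (theta1, theta2, theta3) on a grid with the given
    angular spacing -- O(1/spacing) values per axis, O(1/spacing^3) total."""
    angles = np.arange(0.0, 2 * np.pi, spacing)
    return [rot_z(t3) @ rot_y(t2) @ rot_x(t1)
            for t1, t2, t3 in product(angles, repeat=3)]

R_prime = discretized_rotations(np.pi / 2)  # spacing of 90 degrees
assert len(R_prime) == 4 ** 3               # (2*pi / (pi/2))^3 = 64 matrices
```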

Let $\mathbf{d}$ be the diameter of a ball able to encapsulate each of ${f}_{1},{f}_{2},\dots ,{f}_{n}$. Hence any distance between two points among ${f}_{1},\dots ,{f}_{n}$ is at most $\mathbf{d}$. In this paper we assume $\mathbf{d}$ to be constant with respect to the input size. Note that for a protein structure, $\mathbf{d}$ is of order $O\left(\ell \right)$ [5]. For any $b\in \mathbb{R}$, we can choose ϑ so small that for any rotation R and any point $p\in {\mathbb{R}}^{3}$, there exists ${R}^{\prime}\in {\mathcal{R}}^{\prime}$ such that $\parallel R\cdot p-{R}^{\prime}\cdot p\parallel \phantom{\rule{4pt}{0ex}}\le \alpha \vartheta \mathbf{d}\le b$.

#### 3.2. A Polynomial-time Algorithm With Cost $((1+\epsilon ){D}_{opt}+c)$

Our algorithm for the k-consensus structural fragments problem is summarized in Table 1.

The algorithm works as follows: in (2), we explore m distinct subsets ${A}_{1},\dots ,{A}_{m}$ of $\{{f}_{1},\dots ,{f}_{n}\}$, in the hope that each subset comes from a distinct cluster of the optimal clustering. Since we explore all such subsets, this is bound to happen. We then evaluate the score of each subset ${A}_{j}$ by sampling up to r structural fragments (allowing repeats) from it (from (2.1) onwards). Such an evaluation is possible due to Equation 7. The evaluation also requires us to exhaustively try all transformations in ${\mathcal{R}}^{\prime}$, which is done in (2.2). Each sampling of ${A}_{j}$ produces a consensus structural fragment ${u}_{j}$ for ${A}_{j}$ in (2.3), whose score is evaluated in (2.4). Finally, in (3), we output the consensus fragments ${u}_{1},\dots ,{u}_{m}$ that give the best score.
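As an illustrative sketch (ours, not the authors' implementation) of steps (2.1)-(2.5), restricted to the special case $k=1$ and a toy rotation set:

```python
import numpy as np
from itertools import combinations_with_replacement, product

def consensus_1(fragments, R_prime, r):
    """Brute-force 1-consensus over a discretized rotation set R_prime.
    fragments: list of centered l x 3 arrays. Returns (best_cost, best_u)."""
    best_cost, best_u = float("inf"), None
    # (2.1) choose up to r fragments from the input, repeats allowed
    for F in combinations_with_replacement(range(len(fragments)), r):
        # (2.2) try every assignment of discretized rotations to the sample
        for thetas in product(R_prime, repeat=r):
            # (2.3) candidate consensus: mean of the rotated sample
            u = sum(fragments[i] @ R.T for R, i in zip(thetas, F)) / r
            # (2.4) score u: each fragment uses its best rotation in R'
            cost = sum(
                min(float(np.sum((u - f @ R.T) ** 2)) for R in R_prime)
                for f in fragments
            )
            # (2.5) keep the best-scoring candidate
            if cost < best_cost:
                best_cost, best_u = cost, u
    return best_cost, best_u

def Rz(t):  # rotation about the z axis; our toy R' uses only these
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t),  np.cos(t), 0],
                     [0, 0, 1.0]])

R_prime = [Rz(k * np.pi / 2) for k in range(4)]              # |R'| = 4
f1 = np.array([[1., 0., 0.], [0., 2., 0.], [-1., -2., 0.]])  # centered
f2 = f1 @ Rz(np.pi / 2).T                                    # f1, rotated
cost, u = consensus_1([f1, f2], R_prime, r=2)
assert cost < 1e-12  # f1 and f2 coincide under a rotation in R'
```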

We now analyze the runtime complexity of the algorithm. Consider the number of possible ${F}_{1},{F}_{2},\dots ,{F}_{m}$ in (2.1). Let each ${F}_{j}$ be represented by a length-r string over $n+1$ symbols, n of which represent ${f}_{1},\dots ,{f}_{n}$ respectively, while the remaining symbol represents "nothing". Clearly, for any ${A}_{j}$, any ${F}_{j}\in {A}_{j}^{r!}$, or ${F}_{j}\in {A}_{j}^{|{A}_{j}|!}$ (where $|{A}_{j}|\le r$), can be represented by one such string. Furthermore, any ${F}_{1},{F}_{2},\dots ,{F}_{m}$ can be completely represented by k such strings: to handle the case $m<k$, the remaining $k-m$ strings are set entirely to "nothing". From this, we see that there are at most ${(n+1)}^{rk}=O\left({n}^{rk}\right)$ possible combinations of ${F}_{1},{F}_{2},\dots ,{F}_{m}$.

For each of these combinations, there are $|{\mathcal{R}}^{\prime}{|}^{rk}$ possible combinations of ${\Theta}_{1},{\Theta}_{2},\dots ,{\Theta}_{m}$ at (2.2), resulting in $O({(n|{\mathcal{R}}^{\prime}|)}^{rk})$ iterations of (2.3) to (2.5). Since (2.3) can be done in $O(rk\ell )$ time, (2.4) in $O(nk|{\mathcal{R}}^{\prime}|\ell )$ time, and (2.5) in $O(n)$ time, the algorithm completes in $O(k\ell (r+n|{\mathcal{R}}^{\prime}|){(n|{\mathcal{R}}^{\prime}|)}^{rk})$ time.

We now argue that the ${D}_{min}$ computed by the algorithm is at most $(r+1)/r$ times the optimal cost, plus an additive term. Suppose the optimal solution results in the $m\le k$ disjoint clusters ${\mathbf{A}}_{1},{\mathbf{A}}_{2},\dots ,{\mathbf{A}}_{m}\subseteq \{{f}_{1},\dots ,{f}_{n}\}$.

For each ${\mathbf{A}}_{j}$, $1\le j\le m$, let ${\mathit{u}}_{j}$ be a structural fragment which minimizes ${\sum}_{f\in {\mathbf{A}}_{j}}MS({\mathit{u}}_{j},f)$. Furthermore, for each $f\in {\mathbf{A}}_{j}$, let ${R}_{f}$ be a rotation where

$$\begin{array}{c}\hfill {R}_{f}\in arg\underset{R\in \mathcal{R}}{min}{\parallel {\mathit{u}}_{j}-R\cdot f\parallel}^{2}\end{array}$$

and let

$$\begin{array}{c}\hfill {\mathbf{D}}_{j}=\sum _{f\in {\mathbf{A}}_{j}}{\parallel {\mathit{u}}_{j}-{R}_{f}\cdot f\parallel}^{2}\end{array}$$

(Hence the optimal cost is $\mathbf{D}={\sum}_{j=1}^{m}{\mathbf{D}}_{j}$.)

By the property of the $MS$ measure, it can be shown that ${\mathit{u}}_{j}$ is the average of $\{{R}_{f}\xb7f\mid f\in {\mathbf{A}}_{j}\}$. For each ${\mathbf{A}}_{j}$ where $|{\mathbf{A}}_{j}|>r$, by Equation 6,

$$\begin{array}{ccc}\hfill \frac{1}{|{\mathbf{A}}_{j}{|}^{r}}\sum _{{F}_{j}\in {\mathbf{A}}_{j}^{r}}\sum _{f\in {\mathbf{A}}_{j}}{\parallel \frac{{R}_{{F}_{j}\left(1\right)}\cdot {F}_{j}\left(1\right)+\cdots +{R}_{{F}_{j}\left(r\right)}\cdot {F}_{j}\left(r\right)}{r}-{R}_{f}\cdot f\parallel}^{2}& =& \frac{r+1}{r}\phantom{\rule{4pt}{0ex}}{\mathbf{D}}_{j}\hfill \end{array}$$

For each such ${\mathbf{A}}_{j}$, let ${\mathit{F}}_{j}\in {\mathbf{A}}_{j}^{r}$ be such that

$$\begin{array}{ccc}\hfill \sum _{f\in {\mathbf{A}}_{j}}{\parallel \frac{{R}_{{\mathit{F}}_{j}\left(1\right)}\cdot {\mathit{F}}_{j}\left(1\right)+\cdots +{R}_{{\mathit{F}}_{j}\left(r\right)}\cdot {\mathit{F}}_{j}\left(r\right)}{r}-{R}_{f}\cdot f\parallel}^{2}& \le & \frac{r+1}{r}\phantom{\rule{4pt}{0ex}}{\mathbf{D}}_{j}\hfill \end{array}$$

Without loss of generality assume that each ${\mathit{F}}_{j}\in {\mathbf{A}}_{j}^{r!}$. Let

$$\begin{array}{c}\hfill {\mu}_{j}=\left\{\begin{array}{cc}\frac{{R}_{{\mathit{F}}_{j}\left(1\right)}\cdot {\mathit{F}}_{j}\left(1\right)+\cdots +{R}_{{\mathit{F}}_{j}\left(r\right)}\cdot {\mathit{F}}_{j}\left(r\right)}{r}\hfill & \mathrm{if}\phantom{\rule{4pt}{0ex}}|{\mathbf{A}}_{j}|\ge r\hfill \\ \frac{{R}_{{\mathit{F}}_{j}\left(1\right)}\cdot {\mathit{F}}_{j}\left(1\right)+\cdots +{R}_{{\mathit{F}}_{j}\left(|{\mathbf{A}}_{j}|\right)}\cdot {\mathit{F}}_{j}\left(|{\mathbf{A}}_{j}|\right)}{|{\mathbf{A}}_{j}|}\hfill & \mathrm{otherwise}\hfill \end{array}\right.\end{array}$$

Then we may write,

$$\begin{array}{ccc}\hfill \sum _{j=1}^{m}\sum _{f\in {\mathbf{A}}_{j}}{\parallel {\mu}_{j}-{R}_{f}\xb7f\parallel}^{2}& \le & \frac{r+1}{r}\phantom{\rule{4pt}{0ex}}\mathbf{D}\hfill \end{array}$$

For each rotation ${R}_{f}$, let ${R}_{f}^{\prime}$ be a closest rotation to ${R}_{f}$ within ${\mathcal{R}}^{\prime}$. Also, let

$$\begin{array}{c}\hfill {\mu}_{j}^{\prime}=\left\{\begin{array}{cc}\frac{{R}_{{\mathit{F}}_{j}\left(1\right)}^{\prime}\cdot {\mathit{F}}_{j}\left(1\right)+\cdots +{R}_{{\mathit{F}}_{j}\left(r\right)}^{\prime}\cdot {\mathit{F}}_{j}\left(r\right)}{r}\hfill & \mathrm{if}\phantom{\rule{4pt}{0ex}}|{\mathbf{A}}_{j}|\ge r\hfill \\ \frac{{R}_{{\mathit{F}}_{j}\left(1\right)}^{\prime}\cdot {\mathit{F}}_{j}\left(1\right)+\cdots +{R}_{{\mathit{F}}_{j}\left(|{\mathbf{A}}_{j}|\right)}^{\prime}\cdot {\mathit{F}}_{j}\left(|{\mathbf{A}}_{j}|\right)}{|{\mathbf{A}}_{j}|}\hfill & \mathrm{otherwise}\hfill \end{array}\right.\end{array}$$

Since we exhaustively sample all possible ${F}_{j}\in {A}_{j}^{r!}$ for all possible ${A}_{j}$ and for all $R\in {\mathcal{R}}^{\prime}$, it is clear that:

$$\begin{array}{ccc}\hfill {D}_{min}& \le & \sum _{j=1}^{m}\sum _{f\in {\mathbf{A}}_{j}}{\parallel {\mu}_{j}^{\prime}-{R}_{f}^{\prime}\cdot f\parallel}^{2}\hfill \end{array}$$

We will now relate the LHS of Equation 14 with the RHS of Equation 16. The RHS of Equation 16 is

$$\begin{array}{ccc}& & \sum _{j=1}^{m}\sum _{f\in {\mathbf{A}}_{j}}{\parallel {\mu}_{j}^{\prime}-{R}_{f}^{\prime}\cdot f\parallel}^{2}\hfill \end{array}$$

$$\begin{array}{ccc}& =& \sum _{j=1}^{m}\sum _{f\in {\mathbf{A}}_{j}}{\parallel {\mu}_{j}+({\mu}_{j}^{\prime}-{\mu}_{j})+({R}_{f}\cdot f-{R}_{f}^{\prime}\cdot f)-{R}_{f}\cdot f\parallel}^{2}\hfill \end{array}$$

$$\begin{array}{ccc}& \le & \sum _{j=1}^{m}\sum _{f\in {\mathbf{A}}_{j}}{(\parallel {\mu}_{j}-{R}_{f}\cdot f\parallel +(\parallel {\mu}_{j}^{\prime}-{\mu}_{j}\parallel +\parallel {R}_{f}\cdot f-{R}_{f}^{\prime}\cdot f\parallel ))}^{2}\hfill \end{array}$$

$$\begin{array}{ccc}& =& \sum _{j=1}^{m}\sum _{f\in {\mathbf{A}}_{j}}{\parallel {\mu}_{j}-{R}_{f}\cdot f\parallel}^{2}+{(\parallel {\mu}_{j}^{\prime}-{\mu}_{j}\parallel +\parallel {R}_{f}\cdot f-{R}_{f}^{\prime}\cdot f\parallel )}^{2}\hfill \end{array}$$

$$\begin{array}{ccc}& & \phantom{\rule{55.0pt}{0ex}}+2\parallel {\mu}_{j}-{R}_{f}\cdot f\parallel (\parallel {\mu}_{j}^{\prime}-{\mu}_{j}\parallel +\parallel {R}_{f}\cdot f-{R}_{f}^{\prime}\cdot f\parallel )\hfill \end{array}$$

$$\begin{array}{ccc}& \le & \sum _{j=1}^{m}\sum _{f\in {\mathbf{A}}_{j}}{\parallel {\mu}_{j}-{R}_{f}\cdot f\parallel}^{2}+\phantom{\rule{4pt}{0ex}}8n\ell b\hfill \end{array}$$

Hence by Equation 14, ${D}_{min}$ is at most $(r+1)/r=1+1/r$ times the optimal cost, plus an additive term $c=8n\ell b$. Setting $\epsilon =1/r$, we obtain:

**Theorem 2** For any $c,\epsilon \in \mathbb{R}$, a $((1+\epsilon ){D}_{opt}+c)$-approximation solution for the k-consensus structural fragments problem can be computed in

$$O(k\ell (\frac{1}{\epsilon}+n|{\mathcal{R}}^{\prime}|){(n|{\mathcal{R}}^{\prime}|)}^{\frac{k}{\epsilon}})$$

time.

The term c in Theorem 2 is due to the error introduced by discretizing the rotations. If we can estimate a lower bound on ${D}_{opt}$, we can control this error by refining the discretization so that c is an arbitrarily small fraction of ${D}_{opt}$. To this end, in the next section we show a lower bound on ${D}_{opt}$.

#### 3.3. A Polynomial-time 4-approximation Algorithm

We now show a 4-approximation algorithm for the k-consensus structural fragments problem. We first show the case $k=1$, and then generalize the result to all $k\ge 2$.

Let the n input structural fragments be ${f}_{1}$, ${f}_{2}$, …, ${f}_{n}$. Let ${f}_{a}$, $1\le a\le n$, be the structural fragment for which

$$\sum _{1\le j\le n\wedge j\ne a}MS({f}_{a},{f}_{j})$$

is minimized. Note that ${f}_{a}$ can be found in $O\left({n}^{2}\ell \right)$ time, since for any $1\le i,j\le n$, $MS({f}_{i},{f}_{j})$ (more precisely, $RMS({f}_{i},{f}_{j})$) can be computed in $O\left(\ell \right)$ time using the closed-form equations of [3].

We argue that ${f}_{a}$ is a 4-approximation. Let the optimal structural fragment be ${f}_{opt}$, the corresponding distance ${D}_{opt}$, and let ${f}_{b}$ ($1\le b\le n$) be the fragment where $MS({f}_{b},{f}_{opt})$ is minimized.

We first note that the cost of using ${f}_{a}$ as solution, ${\sum}_{i\ne a}MS({f}_{a},{f}_{i})\le {\sum}_{i\ne b}MS({f}_{b},{f}_{i})$. To continue we first establish the following claim.

**Claim 1** For any structural fragments $f,{f}^{\prime},{f}^{\prime \prime}$: $MS(f,{f}^{\prime})\le 2(MS(f,{f}^{\prime \prime})+MS({f}^{\prime \prime},{f}^{\prime}))$.

PROOF. In [6], it is shown that

$$\begin{array}{c}\hfill RMS(f,{f}^{\prime})\le RMS(f,{f}^{\prime \prime})+RMS({f}^{\prime \prime},{f}^{\prime})\end{array}$$

Squaring both sides gives

$$\begin{array}{c}\hfill MS(f,{f}^{\prime})\le MS(f,{f}^{\prime \prime})+MS({f}^{\prime \prime},{f}^{\prime})+2RMS(f,{f}^{\prime \prime})RMS({f}^{\prime \prime},{f}^{\prime})\end{array}$$

Since, by the AM-GM inequality,

$$\begin{array}{c}\hfill 2RMS(f,{f}^{\prime \prime})RMS({f}^{\prime \prime},{f}^{\prime})\le MS(f,{f}^{\prime \prime})+MS({f}^{\prime \prime},{f}^{\prime})\end{array}$$

we have $MS(f,{f}^{\prime})\le 2(MS(f,{f}^{\prime \prime})+MS({f}^{\prime \prime},{f}^{\prime}))$. ▮
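Claim 1 can be sanity-checked numerically; the following sketch (ours, not part of the paper) uses an SVD-based optimal superposition to compute MS:

```python
import numpy as np

def ms(f, g):
    """MS via optimal superposition: translation by centroids, rotation
    by SVD of the covariance matrix (reflection-corrected)."""
    f = f - f.mean(axis=0)
    g = g - g.mean(axis=0)
    U, S, Vt = np.linalg.svd(g.T @ f)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return float(np.sum((f - g @ R.T) ** 2))

rng = np.random.default_rng(1)
fa, fb, fc = rng.standard_normal((3, 6, 3))  # three random 6-point fragments

# Claim 1 with f = fa, f'' = fb, f' = fc (tolerance for rounding only):
assert ms(fa, fc) <= 2 * (ms(fa, fb) + ms(fb, fc)) + 1e-9
```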

By the above claim,

$$\begin{array}{ccc}\hfill \sum _{i\ne b}MS({f}_{b},{f}_{i})& \le & 2\sum _{i\ne b}(MS({f}_{b},{f}_{opt})+MS({f}_{opt},{f}_{i}))\hfill \end{array}$$

$$\begin{array}{ccc}& =& 2\sum _{i\ne b}MS({f}_{b},{f}_{opt})+2\sum _{i\ne b}MS({f}_{i},{f}_{opt})\hfill \end{array}$$

$$\begin{array}{ccc}& \le & 2\sum _{i\ne b}MS({f}_{b},{f}_{opt})+2{D}_{opt}\hfill \end{array}$$

$$\begin{array}{ccc}& \le & 2\sum _{j\ne b}MS({f}_{j},{f}_{opt})+2{D}_{opt}\hfill \end{array}$$

$$\begin{array}{ccc}& \le & 2{D}_{opt}+2{D}_{opt}=4{D}_{opt}\hfill \end{array}$$

Hence ${\sum}_{i\ne a}MS({f}_{a},{f}_{i})\le 4{D}_{opt}$. We now extend this to k structural fragments.

We first pre-compute $MS(f,{f}^{\prime})$ for every pair $f,{f}^{\prime}\in S$, which takes $O\left({n}^{2}\ell \right)$ time. Then, at step (1), there are at most $O\left({n}^{k}\right)$ combinations of A, each of which takes $O\left(nk\right)$ time to evaluate at step (2). Hence in total the computation takes $O({n}^{2}\ell +k{n}^{k+1})$ time. To see that the solution is a 4-approximation, let ${S}_{1},{S}_{2},\dots ,{S}_{m}$, where $m\le k$, be an optimal clustering. Then, by our earlier argument, there exist ${f}_{{i}_{1}}\in {S}_{1}$, ${f}_{{i}_{2}}\in {S}_{2}$, …, ${f}_{{i}_{m}}\in {S}_{m}$ such that each ${f}_{{i}_{x}}$ is a 4-approximation for ${S}_{x}$, and hence ${f}_{{i}_{1}},{f}_{{i}_{2}},\dots ,{f}_{{i}_{m}}$ is a 4-approximation for the k-consensus structural fragments problem. Since the algorithm exhaustively searches every combination of up to k fragments, it returns a solution at least as good as ${f}_{{i}_{1}},{f}_{{i}_{2}},\dots ,{f}_{{i}_{m}}$, and hence is a 4-approximation algorithm.
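The exhaustive search described above can be sketched as follows (ours; it assumes the pairwise MS values are already pre-computed into a matrix `D`):

```python
from itertools import combinations

def best_k_medoids(D, k):
    """Given the n x n matrix D of pairwise MS values, return the subset
    of at most k input fragments minimizing the clustering cost
    sum_i min_{a in centers} D[i][a] -- a 4-approximation by Theorem 3."""
    n = len(D)
    best_cost, best_centers = float("inf"), None
    for m in range(1, k + 1):                         # up to k centers
        for centers in combinations(range(n), m):     # O(n^k) subsets
            cost = sum(min(D[i][a] for a in centers) for i in range(n))
            if cost < best_cost:
                best_cost, best_centers = cost, centers
    return best_cost, best_centers

# Toy instance: two well-separated groups {0, 1} and {2, 3}; picking one
# medoid from each group gives cost 2.
D = [[0, 1, 9, 9],
     [1, 0, 9, 9],
     [9, 9, 0, 1],
     [9, 9, 1, 0]]
cost, centers = best_k_medoids(D, 2)
assert cost == 2
```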

**Theorem 3** A 4-approximation solution for the k-consensus structural fragments problem can be computed in $O({n}^{2}\ell +k{n}^{k+1})$ time.

#### 3.4. A $(1+\epsilon )$ Polynomial-time Approximation Scheme

Recall that the algorithm in Section 3.2 has cost $D\le (1+\epsilon ){D}_{opt}+8n\ell b$ where $b=\alpha \vartheta \mathbf{d}$. From Section 3.3 we have a lower bound ${\mathbf{D}}_{opt}$ of ${D}_{opt}$. We want $8n\ell b\le \epsilon {\mathbf{D}}_{opt}\le \epsilon {D}_{opt}$. To achieve this, it suffices to set $\vartheta \le \epsilon {\mathbf{D}}_{opt}/(8n\ell \alpha \mathbf{d})$. This results in an $|{\mathcal{R}}^{\prime}|$ of order $O(1/{\vartheta}^{3})=O({(n\ell \mathbf{d})}^{3})$. Substituting this into Theorem 2, and combining with Theorem 3, we get the following.

**Theorem 4** For any $\epsilon \in \mathbb{R}$, a $((1+\epsilon ){D}_{opt})$-approximation solution for the k-consensus structural fragments problem can be computed in

$$O({n}^{2}\ell +k{n}^{k+1}+k\ell (\frac{2}{\epsilon}+n\lambda ){(n\lambda )}^{\frac{2k}{\epsilon}})$$

time, where $\lambda =|{\mathcal{R}}^{\prime}|=O({(n\ell \mathbf{d})}^{3})$.

## 4. Discussions

The method in this paper depends on Lemma 1. For this reason, the technique does not extend to the problem under distance measures where Lemma 1 cannot be applied, for example, the $RMS$ measure. However, should Lemma 1 apply to a distance measure, it should be easy to adapt the method here to solve the problem for that distance measure.

One can also formulate variations of the k-consensus structural fragments problem. For example, in the k-closest structural fragments problem, the objective is instead to minimize the maximum distance ${max}_{1\le i\le n}\phantom{\rule{4pt}{0ex}}{min}_{1\le j\le k}MS({f}_{i},{g}_{j})$.

While the cost function of the k-consensus structural fragments problem resembles that of the k-means problem, the cost function of the k-closest structural fragments problem resembles that of the (absolute) k-center problem. One interesting problem for future study is whether this problem has a PTAS. It is not clear how to generalize the technique employed in this paper to the k-closest structural fragments problem under $MS$.

## References

1. Jain, A. K.; Murty, M. N.; Flynn, P. J. Data clustering: a review. *ACM Computing Surveys* **1999**, *31*(3), 264–323.
2. Arun, K. S.; Huang, T. S.; Blostein, S. D. Least-squares fitting of two 3-D point sets. *IEEE Trans. Pattern Anal. Mach. Intell.* **1987**, *9*(5), 698–700.
3. Umeyama, S. Least-squares estimation of transformation parameters between two point patterns. *IEEE Trans. Pattern Anal. Mach. Intell.* **1991**, *13*(4), 376–380.
4. Qian, J.; Li, S. C.; Bu, D.; Li, M.; Xu, J. Finding compact structural motifs. In *Combinatorial Pattern Matching, 18th Annual Symposium, CPM 2007, London, Canada, July 9–11, 2007, Proceedings*; Ma, B., Zhang, K., Eds.; Lecture Notes in Computer Science, Vol. 4580; Springer, 2007; pp. 142–149.
5. Hao, M.; Rackovsky, S.; Liwo, A.; Pincus, M.; Scheraga, H. Effects of compact volume and chain stiffness on the conformations of native proteins. *Proc. Natl. Acad. Sci. USA* **1992**, *89*, 6614–6618.
6. Steipe, B. A revised proof of the metric properties of optimally superimposed vector sets. *Acta Crystallographica Section A* **2002**, *58*(5), 506.

© 2008 by the authors; licensee MDPI, Basel, Switzerland. This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).