1. Introduction
As more and more data are transmitted via the Internet, how to protect the privacy and security of data such as military images has become a focus. Although secret data can be protected by traditional encryption, they cannot be revealed exactly if the stego-media is lossy. Secret sharing (SS) techniques, which have the property of loss tolerance, have therefore been proposed. SS, also called secret division, was invented independently by Adi Shamir [1] and George Blakley [2] in 1979. A $(k,n)$ threshold SS scheme is a method of encrypting a secret into n shares such that any subset of k shares can reveal the secret, while fewer than k shares cannot reconstruct it.
Based on the SS scheme, the secret image sharing (SIS) scheme was proposed. In such a scheme, several shadow images (or shares) are generated from the secret image without leaking any secret information. In the recovery phase, the secret image can be recovered from a subset of the shadow images, even if some of the shadow images are lost or damaged. Therefore, compared with other cryptographic techniques, the SIS scheme has the characteristic of loss tolerance. Because of this characteristic, SIS has many application scenarios, such as electronic voting, communication over unreliable public channels, distributed storage systems and access control.
At present, there are many kinds of SIS, and the two most important ones are polynomial-based SIS (PSIS) and the visual cryptography scheme (VCS) [3,4,5].
In 2002, Shamir’s polynomial-based scheme was adapted to SIS by Thien and Lin [6]. The scheme encrypts the secret into the coefficients of a random $(k-1)$-degree polynomial in a finite field. In the recovery phase, the secret can be reconstructed by Lagrange interpolation. VCS has the unique property that the secret information can be obtained by stacking the shadow images, so humans can easily recognize the secret information by eye. However, since it is implemented with the OR operation, it has some disadvantages, such as low visual quality of the recovered images and lossy recovery. In comparison with VCS, PSIS is more suitable for digital images, as it can achieve secret image recovery with high visual quality.
Since PSIS can recover the secret image with high quality, more properties of Shamir’s polynomial-based scheme have been studied. Yang et al. [7,8] made use of a polynomial-based scheme to achieve lossless recovery and obtained a two-in-one SIS scheme. In addition, Li et al. [9] obtained the lossless secret image and enhanced the contrast of the image at the same time. Considering the case in which some shadows with higher importance are essential, Ref. [10] proposed a new $(t,s,k,n)$-ESIS scheme based on Shamir’s scheme, in which essential shadows are more important than non-essential shadows. In addition, shadow images with different priorities were studied in [11,12,13,14,15]. Thus, Shamir’s polynomial-based scheme has been widely used in SIS [16,17,18].
As a classic SS scheme, Shamir’s polynomial-based scheme uses polynomial interpolation to recover the secret information. The secret is encrypted into the constant coefficient of a random $(k-1)$-degree polynomial. In the recovery phase, the constant coefficient can be solved by Lagrange interpolation, and this coefficient is the value of the secret pixel.
In Shamir’s polynomial-based method, to divide the secret number s, a random $(k-1)$-degree polynomial $f(x)={a}_{0}+{a}_{1}x+\dots +{a}_{k-1}{x}^{k-1}$ is constructed, in which ${a}_{0}=s$ and the other coefficients are generated randomly in a finite field ${F}_{p}$. Then, it evaluates $f(1),\dots ,f(i),\dots ,f(n)$, which serve as shares and are distributed to the associated participants.
Given any k of these $f(i)$ values $(i=1,2,\dots ,n)$, we can obtain the coefficients $({a}_{0},{a}_{1},\dots ,{a}_{k-1})$ of $f(x)$ by interpolation and then evaluate $s={a}_{0}$.
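As a tiny worked example with illustrative numbers of our own (not taken from the paper), let $p=251$, $k=2$, $n=3$, secret $s=100$ and random coefficient ${a}_{1}=5$:

```latex
% Sharing: f(x) = 100 + 5x (mod 251); share i is the pair (i, f(i))
f(1) = 105, \qquad f(2) = 110, \qquad f(3) = 115.
% Recovery from any two shares, e.g., (1, 105) and (3, 115):
a_1 = \frac{f(3) - f(1)}{3 - 1} = \frac{10}{2} = 5, \qquad
s = a_0 = f(1) - a_1 \cdot 1 = 100.
```

A single share reveals nothing about s: for any candidate secret ${a}_{0}^{\prime}$ there is a coefficient ${a}_{1}^{\prime}$ consistent with the observed share.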
In essence, the task is to construct a polynomial for the $(k,n)$ threshold such that any k out of the n equations can solve the system and yield the coefficients of the polynomial. Thus, we introduce matrix theory to review this problem from a wider perspective. Furthermore, based on matrix theory, we propose a general $(k,n)$ threshold SIS construction method [19]. Thus, there are two contributions in this paper:
 (1)
Based on the analysis of the polynomial-based method proposed by Shamir, we summarize the necessary and sufficient conditions for constructing the polynomial, which are the basis of $(k,n)$ threshold SIS.
 (2)
Based on matrix theory, we propose a general $(k,n)$ threshold SIS construction. The effectiveness of the proposed construction is indicated by experimental results and analyses.
The remainder of this paper is organized as follows:
Section 2 introduces some preliminary techniques as the basis of the proposed construction. In
Section 3, the proposed
$(k,n)$ threshold SIS construction method is presented in detail.
Section 4 gives experimental results and analyses. Finally,
Section 5 concludes this paper.
2. Preliminaries
Some preliminaries are given here as the basis of our work. The goal of $(k,n)$ threshold SIS is to share the secret image S into n shadow images $S{C}_{1},S{C}_{2},\dots ,S{C}_{n}$ in such a way that: (1) knowledge of any k or more shadow images makes S easily computable; (2) knowledge of any $k-1$ or fewer shadow images leaves S completely undetermined.
First of all, Shamir’s polynomialbased scheme is given in
Section 2.1. Furthermore, we will introduce analysis of Shamir’s polynomialbased scheme based on matrix theory. At the end of this section, we propose the necessary and sufficient condition for
$(k,n)$ threshold SIS construction.
2.1. Shamir’s Polynomial-Based Scheme
Shamir’s scheme is based on polynomial interpolation. The scheme encrypts the secret into the constant coefficient of a random $(k-1)$-degree polynomial in a finite field. In the recovery phase, the secret can be reconstructed by Lagrange interpolation. For example, we take a pixel value s as the gray value of the first secret pixel and then split s into n pixels corresponding to the n shadows. The specific scheme is as follows:
 (1)
In the sharing phase, given a pixel value s, we select a prime number p with $p>max(n,s)$. In order to divide s into pieces $s{c}_{i}$, we generate a $(k-1)$-degree polynomial

$f(x)={a}_{0}+{a}_{1}x+\cdots +{a}_{k-1}{x}^{k-1}\phantom{\rule{1em}{0ex}}(\mathrm{mod}\ p),$ (1)

in which ${a}_{0}=s$ and ${a}_{i}\ (i=1,\cdots ,k-1)$ are randomly selected in the finite field $D={Z}_{p}=[0,p-1]$, and then compute

$s{c}_{i}=f(i),\phantom{\rule{1em}{0ex}}i=1,2,\cdots ,n,$ (2)

and take $(i,s{c}_{i})$ as a secret pair, where i serves as an identifying index or an order label and $s{c}_{i}$ serves as a shared pixel value.
The process repeats until all pixels of the secret image are processed. In the end, n shadow images are generated.
 (2)
In the recovery phase, given any k pairs ${\{({i}_{j},s{c}_{{i}_{j}})\}}_{j=1}^{k},({i}_{1},{i}_{2},\cdots ,{i}_{k})\subseteq \{1,2,\cdots ,n\}$, we can reconstruct $f(x)$ by Lagrange interpolation and then evaluate $s=f(0)$. Knowledge of just $k-1$ of these values does not suffice to calculate s.
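As a concrete sketch (our own code, not from the paper), the sharing and recovery steps above can be written in a few lines of Python over $GF(p)$; `share_pixel` and `recover_pixel` are illustrative names:

```python
import random

def share_pixel(s, k, n, p):
    """Split secret value s into n shares; any k of them recover s."""
    coeffs = [s] + [random.randrange(p) for _ in range(k - 1)]  # a0 = s
    # Share i is the pair (i, f(i)) with f evaluated mod p.
    return [(i, sum(c * pow(i, e, p) for e, c in enumerate(coeffs)) % p)
            for i in range(1, n + 1)]

def recover_pixel(shares, p):
    """Lagrange interpolation at x = 0 recovers a0 = s from k shares."""
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p      # numerator:   prod (0 - xj)
                den = den * (xi - xj) % p  # denominator: prod (xi - xj)
        s = (s + yi * num * pow(den, -1, p)) % p
    return s
```

Recovery interpolates $f$ at $x=0$, so any k of the n pairs suffice.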
2.2. Analysis of Shamir’s Polynomial Based on Matrix Theory
In Shamir’s sharing polynomial shown in Equation (1), the equations in Equation (2) are calculated in the sharing phase. In the recovery phase, when any k participants with their k pairs get together, the polynomial $f(x)$ can be reconstructed by solving the k equations. Without loss of generality, we can assume that their pairs are $(1,f(1)),(2,f(2)),\cdots ,(k,f(k))$. Thus, we can get k equations as follows:

$\left\{\begin{array}{l}{a}_{0}+{a}_{1}+\cdots +{a}_{k-1}=f(1)\\ {a}_{0}+2{a}_{1}+\cdots +{2}^{k-1}{a}_{k-1}=f(2)\\ \phantom{\rule{2em}{0ex}}\vdots \\ {a}_{0}+k{a}_{1}+\cdots +{k}^{k-1}{a}_{k-1}=f(k)\end{array}\right.$ (3)
Here, the parameter k is fixed, and ${a}_{0},{a}_{1},\dots ,{a}_{k-1}$ are unknown. Thus, Equation (3) is a linear system with k equations and k unknowns. From another point of view, Equation (3) is equivalent to the following vector equation, where $\mathbf{a}$ serves as the variable:

${a}_{0}\left(\begin{array}{c}1\\ 1\\ \vdots \\ 1\end{array}\right)+{a}_{1}\left(\begin{array}{c}1\\ 2\\ \vdots \\ k\end{array}\right)+\cdots +{a}_{k-1}\left(\begin{array}{c}1\\ {2}^{k-1}\\ \vdots \\ {k}^{k-1}\end{array}\right)=\left(\begin{array}{c}f(1)\\ f(2)\\ \vdots \\ f(k)\end{array}\right).$ (4)

According to Equation (4), we can rewrite Equation (3) as Equation (5):

$\mathbf{Ka}=\mathbf{f},\phantom{\rule{1em}{0ex}}\mathbf{a}={({a}_{0},{a}_{1},\cdots ,{a}_{k-1})}^{T},\ \mathbf{f}={(f(1),f(2),\cdots ,f(k))}^{T}.$ (5)
Actually, the linear system in Equation (3) and the vector equation in Equation (4) are equivalent, and a solution of one corresponds to a solution vector of the other.
Using the rank of the coefficient matrix
$\mathbf{K}$ and the augmented matrix
$\left(\mathbf{K},\mathbf{f}\right)$, we can easily discuss whether the linear system in Equation (
3) has a unique solution according to the following theorem.
Theorem 1. Assume that there is a linear system $\mathbf{Ka}=\mathbf{f}$ in k variables. The necessary and sufficient condition for a unique solution is: $rank(\mathbf{K})=rank(\mathbf{K},\mathbf{f})=k$.
According to this theorem, to solve Equation (3) and then get the coefficients of $f(x)$, we must ensure that the rank of the coefficient matrix $\mathbf{K}$ is k.
In Shamir’s polynomial-based SS scheme, the coefficient matrix $\mathbf{K}$ of the equations is:

$\mathbf{K}=\left(\begin{array}{cccc}1& 1& \cdots & {1}^{k-1}\\ 1& 2& \cdots & {2}^{k-1}\\ \vdots & \vdots & & \vdots \\ 1& n& \cdots & {n}^{k-1}\end{array}\right).$ (6)

The coefficient matrix is a Vandermonde matrix. Because the Vandermonde matrix has the property that the rank of any k-order submatrix is k, the equation system in Shamir’s scheme is solvable with a unique solution [20].
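This rank property can be verified numerically. The sketch below (our own, not from the paper) uses the classical product formula $det\ V={\prod}_{i<j}({x}_{j}-{x}_{i})$, which is nonzero modulo a prime p whenever the nodes are distinct and $p>n$:

```python
def vandermonde_det_mod(xs, p):
    """Determinant mod p of the square Vandermonde matrix whose rows
    are (1, x, ..., x^(k-1)) for the nodes in xs, via the product
    formula prod_{i<j} (xs[j] - xs[i])."""
    d = 1
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            d = d * (xs[j] - xs[i]) % p
    return d
```

Any k distinct participant indices from $\{1,\dots ,n\}$ therefore give an invertible k-order submatrix, so the k equations always have a unique solution.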
One efficient method to reconstruct the sharing polynomial is Lagrange interpolation. More generally, considering that the recovery phase is equivalent to solving equations, we can use matrix theory to solve the equations and obtain the coefficients. According to Equation (5) and Theorem 1, we can solve for $\mathbf{a}$ through the inverse matrix of a k-order submatrix.
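For instance, for $k=2$ the inverse of the 2×2 submatrix can be written out via the adjugate; the snippet below (an illustrative sketch of ours, not the paper’s implementation) recovers $\mathbf{a}={\mathbf{K}}^{-1}\mathbf{f}$ modulo p:

```python
def inv2_mod(m, p):
    """Inverse of a 2x2 matrix mod p via the adjugate formula."""
    (a, b), (c, d) = m
    det_inv = pow((a * d - b * c) % p, -1, p)  # assumes det != 0 mod p
    return [[d * det_inv % p, -b * det_inv % p],
            [-c * det_inv % p, a * det_inv % p]]

def recover_a(K2, f, p):
    """a = K^{-1} f (mod p); a[0] is the secret coefficient."""
    inv = inv2_mod(K2, p)
    return [(inv[r][0] * f[0] + inv[r][1] * f[1]) % p for r in range(2)]
```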
2.3. The Design Rule of Generating the Coefficient Matrix of Sharing Polynomial
The analysis in Section 2.2 shows that the principle of Shamir’s scheme is to select a Vandermonde matrix as the coefficient matrix to construct a polynomial, and to reconstruct the polynomial by Lagrange interpolation.
In fact, Shamir’s sharing polynomial constructed by the Vandermonde matrix is only a special case of constructing a sharing polynomial satisfying a $(k,n)$ threshold. We can use a more general coefficient matrix to construct sharing polynomial equations, to encrypt the secret into n shares and to decrypt the secret by any k shares if and only if the coefficient matrix satisfies the following theorem:
Theorem 2 (Objective Theorem)
. Given an $n\times k$ matrix $\mathbf{K}$ and a vector $\mathbf{a}={({a}_{0},{a}_{1},\cdots ,{a}_{k-1})}^{T}$ in which ${a}_{0}=s$ and the others are generated randomly, we can construct a linear system of equations $\mathbf{Ka}=\mathbf{f}$ to encrypt s into n shares. In the recovery phase, in order to reconstruct the vector $\mathbf{a}$ by any $k\times k$ submatrix of $\mathbf{K}$ and the corresponding shares, $\mathbf{K}$ must satisfy the following condition:
Any k row vectors of the coefficient matrix $\mathbf{K}$ are linearly independent.
The correctness of this theorem is obvious. Once the coefficient matrix $\mathbf{K}$ meets Theorem 2, the rank of any k-order submatrix of $\mathbf{K}$ is k. According to Theorem 1, the linear system $\mathbf{Ka}=\mathbf{f}$ in k variables in Equation (3) has a unique solution. Hence, a coefficient matrix satisfying Theorem 2 can be applied to construct the sharing polynomial. In addition, in the recovery phase, we can decrypt the shares to the secret by solving the inverse matrix instead of using Lagrange interpolation. Thus, a $(k,n)$ threshold SIS scheme can be constructed.
The question now is how to construct the coefficient matrix satisfying the requirement in Theorem 2. In the following section, we will first introduce a coefficient matrix generation approach and further propose a general $(k,n)$ threshold SIS construction based on matrix theory.
3. The Proposed Construction Method
3.1. The Basic Idea
In order to construct the SIS, we first need to construct a matrix
$\mathbf{K}$ with size of
$n\times k$, which satisfies Theorem 2. Let the constructed matrix serve as the coefficient matrix shown in Equation (
4) and compute
$\mathbf{f}=\mathbf{Ka}$ to get shared pixel values of shadow images. The shadows are distributed to participants, and every row vector of
$\mathbf{K}$ is distributed to corresponding participants as well.
In the recovery phase, suppose that
k participants get together to reconstruct the sharing polynomial. After the polynomial is reconstructed, the secret is obtained by
${a}_{0}$. Next, we will introduce our method and proof in
Section 3.2, and we will also present a feasibility study in
Section 3.3.
Section 3.4 gives a detailed construction method of the proposed general
$(k,n)$ threshold SIS scheme.
3.2. Construction Method of the Coefficient Matrix $\mathbf{K}$
This part will show how to construct the matrix $\mathbf{K}$ satisfying Theorem 2. Based on the analysis, we summarize a construction method as Theorem 3. According to Theorem 3, the $n\times k$ coefficient matrix $\mathbf{K}$ is constructed by a special matrix $\mathbf{G}$, in which the determinant of all submatrices is nonzero.
Given a matrix $\mathbf{G}$ and a matrix $\alpha =\left(\begin{array}{c}{\alpha}_{1}\hfill \\ {\alpha}_{2}\hfill \\ \vdots \hfill \\ {\alpha}_{k}\hfill \end{array}\right)$ whose k-dimensional row vectors ${\alpha}_{1},{\alpha}_{2},\cdots ,{\alpha}_{k}$ are linearly independent. For example, $\alpha $ can be a Vandermonde matrix. We note that $\mathbf{G}$ and $\alpha $ are generated by random assignment and validation; the feasibility analysis is given in
Section 3.3. Then, we compute $\beta =\mathbf{G}\alpha .$ Thus, we get another matrix $\beta =\left(\begin{array}{c}{\beta}_{1}\hfill \\ {\beta}_{2}\hfill \\ \vdots \hfill \\ {\beta}_{k}\hfill \end{array}\right),$ in which the k-dimensional row vectors are linearly independent. Then, we create a new matrix $\mathbf{K}$ by concatenating the two matrices $\alpha $ and $\beta $:

$\mathbf{K}=\left(\begin{array}{c}\alpha \hfill \\ \beta \hfill \end{array}\right),$ (7)

and the size of matrix $\mathbf{K}$ is $2k\times k$.
That is to say, any vector ${\beta}_{i}\phantom{\rule{4pt}{0ex}}(i=1,\cdots ,k)$ can be expressed linearly by the row vectors of $\alpha $. Gathering these linear expressions into matrix form, we have

$\beta =\mathbf{G}\alpha ,$ (8)

in which $\mathbf{G}$ serves as a temporary coefficient matrix. By computing $\beta =\mathbf{G}\alpha $ and concatenating $\alpha $ and $\beta $, we get a matrix $\mathbf{K}$ that satisfies Theorem 2.
We note that the row vectors of $\alpha $ and of $\beta $ are each linearly independent. However, k vectors selected across $\alpha $ and $\beta $ are not obviously linearly independent, which is why we give Theorem 3 and the corresponding proof.
Theorem 3 (Conditional Theorem)
. Given a set of linearly independent kdimensional row vectors ${\alpha}_{1},{\alpha}_{2},\cdots ,{\alpha}_{k}$, which form a $k\times k$ matrix $\alpha =\left(\begin{array}{c}{\alpha}_{1}\hfill \\ {\alpha}_{2}\hfill \\ \vdots \hfill \\ {\alpha}_{k}\hfill \end{array}\right).$ Let matrix $\mathbf{G}$ satisfy that all the minors of matrix $\mathbf{G}$ are nonzero. Let $\mathbf{G}\alpha =\beta $ and $\mathbf{K}=\left(\begin{array}{c}\alpha \hfill \\ \beta \hfill \end{array}\right)=\left(\begin{array}{c}{\alpha}_{1}\hfill \\ \vdots \hfill \\ {\alpha}_{k}\hfill \\ {\beta}_{1}\hfill \\ \vdots \hfill \\ {\beta}_{k}\hfill \end{array}\right).$ Thus, we can conclude that any k vectors of the coefficient matrix $\mathbf{K}$ are linearly independent.
Proof. Select any k vectors to form $\mathbf{C}=\{{\chi}_{1},{\chi}_{2},\cdots ,{\chi}_{k}\}$. The aim is to prove that ${\chi}_{1},{\chi}_{2},\cdots ,{\chi}_{k}$ are linearly independent.
 (1)
For the case of $\mathbf{C}=\alpha $, since the vectors of $\alpha $ are linearly independent, the vectors of $\mathbf{C}$ are linearly independent.
 (2)
For the case of $\mathbf{C}=\beta $, since $\beta =\mathbf{G}\alpha $ and $\mathbf{G}$ is invertible, the vectors of $\mathbf{C}$ are linearly independent.
 (3)
For the case of $\mathbf{C}\cap \alpha \ne \varnothing $ and $\mathbf{C}\cap \beta \ne \varnothing $, let $|\mathbf{C}\cap \alpha |=s$ and $|\mathbf{C}\cap \beta |=t$; thus, there are s vectors from $\alpha $ and t vectors from $\beta $ in $\mathbf{C}$, with $s+t=k$. Without loss of generality, we assume that $\mathbf{C}\cap \alpha =\{{\alpha}_{1},{\alpha}_{2},\cdots ,{\alpha}_{s}\}$ and $\mathbf{C}\cap \beta =\{{\beta}_{1},{\beta}_{2},\cdots ,{\beta}_{t}\}$, in which

${\beta}_{i}=\sum_{j=1}^{k}{g}_{ij}{\alpha}_{j},\phantom{\rule{1em}{0ex}}i=1,2,\cdots ,t.$ (9)
Consider the equation

${x}_{1}{\alpha}_{1}+\cdots +{x}_{s}{\alpha}_{s}+{y}_{1}{\beta}_{1}+\cdots +{y}_{t}{\beta}_{t}=\mathbf{0};$ (10)

according to Equations (8) and (9), we have

$\sum_{j=1}^{s}{x}_{j}{\alpha}_{j}+\sum_{i=1}^{t}{y}_{i}\sum_{j=1}^{k}{g}_{ij}{\alpha}_{j}=\mathbf{0}.$ (11)
Since $\alpha =\{{\alpha}_{1},{\alpha}_{2},\cdots ,{\alpha}_{s},{\alpha}_{s+1},\cdots ,{\alpha}_{k}\}$ are linearly independent, according to matrix theory, we get

${x}_{j}+\sum_{i=1}^{t}{y}_{i}{g}_{ij}=0\ (j=1,\cdots ,s),\phantom{\rule{2em}{0ex}}\sum_{i=1}^{t}{y}_{i}{g}_{ij}=0\ (j=s+1,\cdots ,k),$ (12)

which can be written as

$\left(\begin{array}{cc}{\mathbf{I}}_{s}& {\mathbf{G}}_{1}^{T}\\ \mathbf{0}& {\mathbf{G}}^{\prime T}\end{array}\right)\left(\begin{array}{c}{x}_{1}\\ \vdots \\ {x}_{s}\\ {y}_{1}\\ \vdots \\ {y}_{t}\end{array}\right)=\mathbf{0},$ (13)

where ${\mathbf{G}}^{\prime}=({g}_{ij})$ with $i=1,\cdots ,t$ and $j=s+1,\cdots ,k$ is a submatrix of $\mathbf{G}$. Then, the size of ${\mathbf{G}}^{\prime}$ is $t\times t$, and t ranges from 1 to k. Now, look at the coefficient matrix in Equation (13), whose rank is $s+rank({\mathbf{G}}^{\prime})$. Since all the minors of matrix $\mathbf{G}$ are nonzero, the rank of ${\mathbf{G}}^{\prime}$ is t, that is, $rank({\mathbf{G}}^{\prime})=t$. Thus, the determinant of the coefficient matrix in Equation (13) is nonzero. Hence, from Equation (13), we get:

${x}_{1}=\cdots ={x}_{s}={y}_{1}=\cdots ={y}_{t}=0.$ (14)
Thus, according to Equation (
10),
${\alpha}_{1},{\alpha}_{2},\cdots ,{\alpha}_{s},{\beta}_{1},{\beta}_{2},\cdots ,{\beta}_{t}$ are linearly independent, that is to say,
${\chi}_{1},{\chi}_{2},\cdots ,{\chi}_{k}$ are linearly independent. □
Because the matrix $\mathbf{G}$ has this special property, the matrix $\mathbf{K}$ has the property that any k row vectors of the coefficient matrix $\mathbf{K}$ are linearly independent. Because of that, we can construct a $(k,{n}_{x})$ threshold SS, where ${n}_{x}$ ranges from k to $2k$. For simplicity, we write $(k,{n}_{x})$ as a $(k,n)$ threshold in the rest of this paper, which is enough for real applications.
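The construction can be exercised end to end. The sketch below (our own, for $k=2$ and $p=251$, with a hand-picked $\mathbf{G}$ whose minors are all nonzero) builds $\mathbf{K}=\left(\begin{array}{c}\alpha \\ \beta \end{array}\right)$ with $\beta =\mathbf{G}\alpha $ and checks that every pair of rows of $\mathbf{K}$ is linearly independent mod p:

```python
p = 251
alpha = [[1, 1], [1, 2]]  # Vandermonde rows: linearly independent
G = [[1, 1], [1, 2]]      # all 1x1 and 2x2 minors are nonzero

# beta = G * alpha (mod p)
beta = [[sum(G[r][t] * alpha[t][c] for t in range(2)) % p
         for c in range(2)] for r in range(2)]
K = alpha + beta          # concatenate: a 2k x k matrix

def det2_mod(u, v, p):
    """2x2 determinant of the rows u and v, mod p."""
    return (u[0] * v[1] - u[1] * v[0]) % p

# Theorem 2 check: any k = 2 rows of K are linearly independent
checks = [det2_mod(K[i], K[j], p) != 0
          for i in range(len(K)) for j in range(i + 1, len(K))]
```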
Based on the constructed coefficient matrix $\mathbf{K}$, we will introduce a method of constructing $(k,n)$ threshold SIS schemes.
3.3. Feasibility Analysis
Generating linearly independent vectors is a difficult problem in mathematics. However, the random search method can be a solution. Estimates of the density of such matrices show that one can easily find an initial matrix $\alpha $ and a temporary matrix $\mathbf{G}$ satisfying Theorem 3.
Randomly select n vectors ${\alpha}_{1},{\alpha}_{2},\cdots ,{\alpha}_{n}\in {Z}_{p}^{n}$; the probability ${P}_{r}$ that these vectors are linearly independent is:

${P}_{r}=\prod_{i=1}^{n}\left(1-\frac{1}{{p}^{i}}\right).$ (15)
For example, when $p=23$, ${P}_{r}\ge 0.95$; when $p=131$, ${P}_{r}\ge 0.992$; and when $p=251$, ${P}_{r}\ge 0.996$. Thus, when p is a large prime number, the probability that any n vectors are linearly independent is very high. That is to say, an initial matrix $\alpha $ whose vectors are linearly independent is easy to generate.
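The quoted bounds can be reproduced by the standard counting argument: the probability that n uniformly random vectors in ${Z}_{p}^{n}$ are linearly independent is ${\prod}_{i=1}^{n}(1-{p}^{-i})$. A quick numerical check (our own sketch):

```python
def prob_independent(p, n):
    """Probability that n uniformly random vectors in Z_p^n are
    linearly independent: prod_{i=1}^{n} (1 - p**-i)."""
    pr = 1.0
    for i in range(1, n + 1):
        pr *= 1.0 - p ** (-i)
    return pr
```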
Furthermore, when taking into consideration that all the minors of an $n\times n$ matrix $\mathbf{G}$ are nonzero, we can analyze the probability in the same way. Let ${P}_{i}$ be the probability that all the $i\times i$ minors of $\mathbf{G}$ (i ranges from 1 to n) are nonzero. Hence, we get:
Thus, we can get the value of ${P}_{r}$, the probability that all the minors of matrix $\mathbf{G}$ are nonzero, as follows:
When $p=251$ and $n=3$, the result implies that we need, on average, $1.08$ attempts to obtain a matrix $\mathbf{G}$ that satisfies Theorem 3. Thus, a qualified matrix $\mathbf{G}$ can be generated randomly within an average of 40.58 attempts. This implies that the construction is feasible.
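The random assignment and validation procedure can be sketched directly (our own code; `random_search_G` and its helpers are illustrative names). It draws random matrices over ${Z}_{p}$ and keeps the first one whose minors are all nonzero:

```python
import itertools
import random

def det_mod(m, p):
    """Determinant mod p by cofactor expansion (fine for tiny matrices)."""
    if len(m) == 1:
        return m[0][0] % p
    return sum((-1) ** c * m[0][c]
               * det_mod([row[:c] + row[c + 1:] for row in m[1:]], p)
               for c in range(len(m))) % p

def all_minors_nonzero(m, p):
    """Check Theorem 3's condition: every square minor of m is nonzero."""
    n = len(m)
    for size in range(1, n + 1):
        for rows in itertools.combinations(range(n), size):
            for cols in itertools.combinations(range(n), size):
                sub = [[m[r][c] for c in cols] for r in rows]
                if det_mod(sub, p) == 0:
                    return False
    return True

def random_search_G(n, p):
    """Random search: return a qualified G and the number of attempts."""
    trials = 0
    while True:
        trials += 1
        g = [[random.randrange(p) for _ in range(n)] for _ in range(n)]
        if all_minors_nonzero(g, p):
            return g, trials
```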
3.4. The Algorithms of Secret Image Sharing
At the beginning of this section, we first connect the theorems to each other. Theorem 1 is the necessary and sufficient condition for the linear system $\mathbf{Ka}=\mathbf{f}$ to have a unique solution. The objective theorem, Theorem 2, is the principle that $\mathbf{K}$ must satisfy according to Theorem 1. The conditional theorem, Theorem 3, is the method of constructing a coefficient matrix $\mathbf{K}$ that satisfies our objective theorem. Hence, we obtain a general $(k,n)$ threshold SIS construction.
In this section, we use the constructed coefficient matrix $\mathbf{K}$ to achieve polynomial-based SIS. In what follows, the original grayscale secret image is represented by S, and its size is $M\times N$. Without loss of generality, we split the first secret pixel s into n pixels corresponding to the n shadow images.
3.4.1. The Sharing Phase
In the sharing phase, to divide the secret s into pieces $s{c}_{i}$, we generate a matrix $\mathbf{K}$ constructed as described in
Section 3.2. We select a prime number p. We generate a vector $\mathbf{a}=({a}_{0},{a}_{1},\cdots ,{a}_{k-1})$, where ${a}_{0}=s$ and the others are $k-1$ random integers generated in the finite field ${Z}_{p}=[0,p-1]$. Then, we compute $\mathbf{Ka}=\mathbf{f}$ as follows:

$\left(\begin{array}{c}{\mathbf{k}}_{1}\\ {\mathbf{k}}_{2}\\ \vdots \\ {\mathbf{k}}_{n}\end{array}\right)\mathbf{a}=\left(\begin{array}{c}s{c}_{1}\\ s{c}_{2}\\ \vdots \\ s{c}_{n}\end{array}\right)\phantom{\rule{1em}{0ex}}(\mathrm{mod}\ p).$ (18)
$s{c}_{i}$ is distributed to the ith participant, as well as the corresponding ith row vector ${\mathbf{k}}_{\mathbf{i}}$ of the matrix $\mathbf{K}$. We take $({\mathbf{k}}_{\mathbf{i}},s{c}_{i})$ as a share, where ${\mathbf{k}}_{\mathbf{i}}$ serves as an identifying index or a key and $s{c}_{i}$ serves as a pixel value. The steps are described in Algorithm 1.
Algorithm 1. The proposed general $(k,n)$ threshold SIS construction by matrix theory for the sharing phase
Input: The threshold parameters $(k,n)$, a matrix $\mathbf{K}$ constructed by Theorem 3, a secret image S with size of $M\times N$, and a prime number p
Output: n shadow images $S{C}_{1},S{C}_{2},\cdots ,S{C}_{n}$
Step 1: For every secret pixel s in each position $(i,j)\in \{(i,j)\mid 1\le i\le M,1\le j\le N\}$, repeat Steps 2–3.
Step 2: Generate a vector $\mathbf{a}=({a}_{0},{a}_{1},\cdots ,{a}_{k-1})$, set ${a}_{0}=s$, and generate the others randomly in the finite field $[0,p-1]$.
Step 3: Compute $\mathbf{f}=\mathbf{Ka}\ (\mathrm{mod}\ p)$, where $s{c}_{1}(i,j)=f(1),\cdots ,s{c}_{n}(i,j)=f(n)$.
Step 4: Output the n shadow images $S{C}_{1},S{C}_{2},\cdots ,S{C}_{n}$.
The pseudo code of Algorithm 1 is presented as follows:
Algorithm 1 Matrix$(k,n,\mathbf{K},S,M,N,p)$
1: for $i=1$ to M do
2:  for $j=1$ to N do
3:   ${a}_{0}=S[i,j]$
4:   generate $({a}_{1},\cdots ,{a}_{k-1})$ randomly
5:   $\mathbf{f}=\mathbf{Ka}\ (\mathrm{mod}\ p)$
6:   for $t=1$ to n do
7:    $s{c}_{t}(i,j)=f(t)$
8:   end for
9:  end for
10: end for
Finally, according to Algorithm 1, the n shadow images are generated successfully by the proposed SIS scheme based on matrix theory. It should be noted that we utilize the largest prime number less than 255, namely $p=251$; however, the grayscale pixel value of an image ranges from 0 to 255, so our construction is not totally lossless. However, it is still a high-resolution SIS construction method.
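Algorithm 1 can be rendered as a minimal Python sketch (our own naming; images are nested lists of pixel values rather than files):

```python
import random

def share_image(S, K, k, p):
    """Algorithm 1 sketch: S is an M x N list of pixel values (< p),
    K an n x k coefficient matrix satisfying Theorem 2."""
    n = len(K)
    shadows = [[[0] * len(S[0]) for _ in S] for _ in range(n)]
    for i, row in enumerate(S):
        for j, s in enumerate(row):
            # a0 = s; the other k-1 coefficients are fresh randoms
            a = [s] + [random.randrange(p) for _ in range(k - 1)]
            for t in range(n):  # f = K a (mod p)
                shadows[t][i][j] = sum(K[t][e] * a[e]
                                       for e in range(k)) % p
    return shadows
```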
3.4.2. The Recovery Phase
In the recovery phase, given any k pairs ${\{({\mathbf{k}}_{{\mathbf{i}}_{\mathbf{j}}},S{C}_{{i}_{j}})\}}_{j=1}^{k},({i}_{1},{i}_{2},\cdots ,{i}_{k})\subseteq \{1,2,\cdots ,n\}$, we can concatenate the k vectors ${\mathbf{k}}_{{\mathbf{i}}_{\mathbf{j}}}$ to generate a submatrix ${\mathbf{K}}_{\mathbf{mini}}$. Thus, we can finally obtain the vector $\mathbf{a}=({a}_{0},{a}_{1},\cdots ,{a}_{k-1})$ by solving the following linear system:

${\mathbf{K}}_{\mathbf{mini}}\,\mathbf{a}=\mathbf{f}\phantom{\rule{1em}{0ex}}(\mathrm{mod}\ p);$ (19)

the secret pixel s is the value of ${a}_{0}$. Note that all the calculations are performed in a finite field. The value of ${a}_{0}$ cannot be solved if the number of linearly independent vectors is less than k. The specific recovery steps are shown in Algorithm 2.
Algorithm 2. The proposed general $(k,n)$ threshold SIS construction by matrix theory for the recovery phase
Input: k shadow images randomly selected from the n shadow images $S{C}_{1},S{C}_{2},\cdots ,S{C}_{n}$, and the corresponding k vectors ${\mathbf{k}}_{{\mathbf{i}}_{\mathbf{j}}}$
Output: The original secret image S
Step 1: Concatenate the k vectors ${\mathbf{k}}_{{\mathbf{i}}_{\mathbf{j}}}$ into a matrix ${\mathbf{K}}_{\mathbf{mini}}$.
Step 2: For each position $(i,j)\in \{(i,j)\mid 1\le i\le M,1\le j\le N\}$, repeat Steps 3–4.
Step 3: According to Equation (4), construct the linear system.
Step 4: Get the coefficient ${a}_{0}$ by solving the linear system according to Equation (19), and set the pixel $S(i,j)={a}_{0}$.
Step 5: Output the secret image S.
The pseudo code of Algorithm 2 is presented as follows.
Algorithm 2 Recover$(k,{\mathbf{K}}_{\mathbf{mini}},M,N,p)$
1: for $j=1$ to k do
2:  ${\mathbf{K}}_{\mathbf{mini}}[j]={\mathbf{k}}_{{\mathbf{i}}_{\mathbf{j}}}$
3: end for
4: for $c=1$ to M do
5:  for $l=1$ to N do
6:   for $j=1$ to k do
7:    $SC[j]=S{C}_{{i}_{j}}[c,l]$
8:   end for
9:   $\mathbf{a}={\mathbf{K}}_{\mathbf{mini}}^{-1}\ast SC\ (\mathrm{mod}\ p)$
10:   $S[c,l]={a}_{0}$
11:  end for
12: end for
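Algorithm 2 amounts to one small modular linear solve per pixel. The following sketch (ours, under the same nested-list conventions as before) solves ${\mathbf{K}}_{\mathbf{mini}}\mathbf{a}=\mathbf{f}$ by Gauss–Jordan elimination and keeps ${a}_{0}$:

```python
def solve_mod(K_mini, f, p):
    """Solve K_mini a = f (mod p) by Gauss-Jordan elimination.
    Assumes K_mini is k x k and invertible mod p (Theorem 2)."""
    k = len(K_mini)
    aug = [K_mini[r][:] + [f[r]] for r in range(k)]  # augmented matrix
    for c in range(k):
        piv = next(r for r in range(c, k) if aug[r][c] % p != 0)
        aug[c], aug[piv] = aug[piv], aug[c]
        inv = pow(aug[c][c], -1, p)
        aug[c] = [v * inv % p for v in aug[c]]
        for r in range(k):
            fac = aug[r][c]
            if r != c and fac:
                aug[r] = [(x - fac * y) % p
                          for x, y in zip(aug[r], aug[c])]
    return [aug[r][k] for r in range(k)]

def recover_image(shadow_pairs, p):
    """Algorithm 2 sketch: shadow_pairs is a list of k pairs
    (row_vector_of_K, shadow_image); returns the secret image."""
    K_mini = [kv for kv, _ in shadow_pairs]
    shadows = [sc for _, sc in shadow_pairs]
    M, N = len(shadows[0]), len(shadows[0][0])
    S = [[0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            f = [sc[i][j] for sc in shadows]
            S[i][j] = solve_mod(K_mini, f, p)[0]  # a0 is the pixel
    return S
```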
3.5. Complexity Evaluation
The algorithmic complexity of decryption in Shamir’s scheme is $O\left(k{log}^{2}k\right)$, where k is the total number of shares participating in recovery. In addition, the algorithmic complexity of solving a system of equations is $O\left({k}^{2}\right)\sim O\left({k}^{3}\right)$. Since the coefficient matrix is not a sparse matrix, the algorithmic complexity of the proposed method is $O\left({k}^{3}\right)$, which is a little higher than that of Shamir’s scheme.