#### 3.1. A Two-Party Solution

On the cryptographic side, our two-party solution mainly builds on three technical tools:

A **non-interactive non-malleable commitment scheme $\mathcal{C}$**, satisfying the following requirements:

- It is perfectly binding in the sense that every commitment can be decommitted to at most one value.

- It is non-malleable for multiple commitments. This means that an adversary who knows commitments to a polynomial-sized set of values $\nu $ will not be able to output commitments to a polynomial-sized set of values $\beta $ related to $\nu $ in a meaningful way. It is well known that in the CRS model such a commitment scheme can be implemented, for instance, by means of any IND-CCA2 secure public key encryption scheme.

A **family of universal hash functions $\mathcal{U}\mathcal{H}$** mapping triples consisting of two elements from G and a ${\mathsf{pid}}_{i}^{{s}_{i}}$-value onto a superpolynomial-sized set ${\{0,1\}}^{L}$. A universal hash function $\mathrm{UH}$ will be selected from this family via the CRS.
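To make the role of $\mathcal{U}\mathcal{H}$ concrete, here is a minimal Python sketch of one standard universal hash family (the classic Carter–Wegman construction $h_{a,b}(m)=((am+b)\bmod p)\bmod 2^{L}$); the constants and the packing of the triple into one integer are illustrative assumptions, not the paper's concrete choice:

```python
import secrets

# Illustrative stand-in for the family UH: Carter-Wegman hashing applied to an
# injective integer encoding of the triple (two group elements, one pid-value).
P = (1 << 127) - 1          # a Mersenne prime, larger than any encoded triple
L = 64                      # output length; {0,1}^L is superpolynomial in size

def sample_uh():
    """Sample (a, b) describing one member of the family (done once, via the CRS)."""
    return secrets.randbelow(P - 1) + 1, secrets.randbelow(P)

def encode_triple(g1: int, g2: int, pid: int) -> int:
    """Injectively pack a triple of bounded integers into a single integer."""
    assert all(0 <= t < (1 << 40) for t in (g1, g2, pid))
    return (g1 << 80) | (g2 << 40) | pid

def uh(key, triple) -> int:
    """Evaluate the selected universal hash function on a triple."""
    a, b = key
    return ((a * encode_triple(*triple) + b) % P) % (1 << L)
```

The key `(a, b)` plays the role of the CRS-selected function $\mathrm{UH}$; all parties evaluate the same function once the CRS is fixed.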

A **collision-resistant pseudorandom function family $\mathcal{F}={\left\{{F}^{\ell}\right\}}_{\ell \in \mathbb{N}}$** (see Katz and Shin [28]). We assume ${F}^{\ell}={\left\{{F}_{\eta}^{\ell}\right\}}_{\eta \in {\{0,1\}}^{L}}$ to be indexed by ${\{0,1\}}^{L}$ and further denote by ${v}_{0}={v}_{0}\left(\ell \right)$ a publicly known value such that no ppt adversary can find two different indices $\lambda \ne {\lambda}^{\prime}\in {\{0,1\}}^{L}$ with ${F}_{\lambda}\left({v}_{0}\right)={F}_{{\lambda}^{\prime}}\left({v}_{0}\right)$. We further use another public value ${v}_{1}$, fulfilling the same requirement as ${v}_{0}$, for deriving the session key (this can also be included in the CRS; see [28] for more details).
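As a rough illustration of how such a family can be used in practice, the following Python sketch models $F_{\eta}$ as HMAC-SHA256 keyed by the index $\eta$. This is a heuristic stand-in, not the construction of Katz and Shin [28]: finding $\eta \ne \eta'$ with $F_{\eta}(v_0)=F_{\eta'}(v_0)$ then amounts to finding an HMAC collision across keys, which is believed infeasible for SHA-256. The byte strings chosen for $v_0$ and $v_1$ are arbitrary placeholders.

```python
import hmac, hashlib

L_BYTES = 32  # index set {0,1}^L with L = 256

# Public values (placeholders): v0 for the collision-resistance requirement,
# v1 for deriving the session key; both could be put into the CRS.
V0 = b"public-value-v0"
V1 = b"public-value-v1"

def F(eta: bytes, v: bytes) -> bytes:
    """Heuristic PRF F_eta(v), modeled as HMAC-SHA256 with key eta."""
    assert len(eta) == L_BYTES
    return hmac.new(eta, v, hashlib.sha256).digest()
```

An instance holding index $\eta = \mathrm{UH}(K)$ would then publish $F_{\eta}(v_0)$ for key confirmation and use $F_{\eta}(v_1)$ as the session key.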

Our protocol builds on [21], and for the security proof we have to assume that the underlying group G (respectively, the family of groups $G=G\left(\ell \right)$, indexed by the security parameter) satisfies a number of conditions. Besides assuming products and inverses of group elements to be computable by efficient (ppt) algorithms, we further assume G to have a ppt computable canonical representation of elements; the latter allows us to identify group elements with their canonical representations. Furthermore, as in [21], we need three algorithms to perform the computations occurring in a protocol execution:

$\mathsf{DomPar}$, the domain parameter generation algorithm, is a (stateless) ppt algorithm that, upon input of the security parameter ${1}^{\ell}$, outputs a finite sequence S of elements in G. The subgroup of G spanned by S, $\langle S\rangle $, will be publicly known. Note that, for the special case of applying our framework to a DDH-assumption, S specifies a public generator of a cyclic group.

$\mathsf{SamAut}$, the automorphism group sampling algorithm, is a (stateless) ppt algorithm that, upon input of the security parameter ${1}^{\ell}$ and a sequence S output by $\mathsf{DomPar}$, returns a description of an automorphism $\varphi $ on the subgroup $\langle S\rangle $, so that both $\varphi $ and ${\varphi}^{-1}$ can be efficiently evaluated. For example, for a cyclic group, $\varphi $ could be given as an exponent, or for an inner automorphism the conjugating group element could be specified.

$\mathsf{SamSub}$, the subgroup sampling algorithm, is a (stateless) ppt algorithm that, upon input of the security parameter ${1}^{\ell}$ and a sequence S output by $\mathsf{DomPar}$, returns a word $x\left(S\right)$ representing an element $x\in \langle S\rangle $. Intuitively, $\mathsf{SamSub}$ chooses a random $x\in \langle S\rangle $, so that it is hard to recognize x if we know elements of x’s orbit under $\mathrm{Aut}(\langle S\rangle )$. Thus, our protocol requires an explicit representation of x in terms of the generators S.
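For the cyclic-group special case treated in Example 1 below, the three algorithms can be sketched in a few lines of Python. The toy parameters (a prime-order subgroup of $\mathbb{Z}_p^*$ with $p = 2q+1$) are far too small for any security and serve only to make the interfaces concrete:

```python
import secrets

# Toy parameters (insecure sizes; illustration only): p = 2q + 1 with q prime,
# and g = 4 generates the subgroup of order q in Z_p^*.
Q = 1019
P = 2 * Q + 1   # 2039, also prime
G = 4

def dom_par():
    """DomPar: output the generator sequence S (here a single generator g)."""
    return [G]

def sam_aut(S):
    """SamAut: an automorphism of <S>, described by a nonzero exponent phi,
    returned together with its efficiently computable inverse."""
    phi = secrets.randbelow(Q - 1) + 1           # phi in {1, ..., q-1}
    inv_phi = pow(phi, -1, Q)                    # phi^{-1} mod q (Python >= 3.8)
    apply_phi = lambda h: pow(h, phi, P)         # h |-> h^phi
    unapply_phi = lambda h: pow(h, inv_phi, P)   # the inverse automorphism
    return apply_phi, unapply_phi

def sam_sub(S):
    """SamSub: a random x = g^e in <S>, with its explicit representation e."""
    e = secrets.randbelow(Q - 1) + 1
    return e, pow(S[0], e, P)
```

Here the exponent `e` plays the role of the word $x(S)$ representing $x$ in terms of the generators, exactly as required above.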

With this notation, we can now define a decision problem, whose supposed difficulty will be essential for our security proof. As usual, with the notation $o\leftarrow \mathsf{A}\left(i\right)$ we describe that algorithm $\mathsf{A}$ upon receiving input i outputs o:

**Definition** **6** (Decision Automorphism Application)

**.** Suppose that we have fixed a quadruple $(G,\mathsf{DomPar},\mathsf{SamAut},\mathsf{SamSub})$. Then the decision automorphism application (DAA) assumption states that for all ppt algorithms $\mathcal{A}$ the advantage function
$${\mathsf{Adv}}_{\mathcal{A}}^{\mathsf{DAA}}={\mathsf{Adv}}_{\mathcal{A}}^{\mathsf{DAA}}\left(\ell \right):=\left|\Pr\left[\mathcal{A}\left(S,x,{\left({\varphi}_{i}\left(S\right),{\varphi}_{i}\left(x\right)\right)}_{i=1,2}\right)=1\right]-\Pr\left[\mathcal{A}\left(S,y,{\left({\varphi}_{i}\left(S\right),{\varphi}_{i}\left(x\right)\right)}_{i=1,2}\right)=1\right]\right|,$$
where $S\leftarrow \mathsf{DomPar}\left({1}^{\ell}\right)$, ${\varphi}_{1},{\varphi}_{2}\leftarrow \mathsf{SamAut}\left({1}^{\ell},S\right)$ and $x,y\leftarrow \mathsf{SamSub}\left({1}^{\ell},S\right)$ are sampled independently, is negligible.

**Example** **1** (Building on decision Diffie–Hellman)**.** Let G be a finite cyclic group and $S:=\left(g\right)$ with g a generator of a subgroup $\langle g\rangle $ of prime order q. If we let $\mathsf{SamSub}$ choose uniformly at random an exponent $x\in \{1,\dots ,q-1\}$ and $\mathsf{SamAut}$ uniformly at random an exponent $\varphi \in \{1,\dots ,q-1\}$, then the DAA problem just described can be recognized as polynomial-time equivalent to the decision Diffie–Hellman (DDH) problem:

**“DDH solution** ⇒ **DAA solution”:** When facing the DAA problem, we obtain as input a tuple $(g,{g}^{y},{({g}^{{\varphi}_{i}},{g}^{x{\varphi}_{i}})}_{i=1,2})$, where either $y=x$, or y has been chosen uniformly at random from $\{1,\dots ,q-1\}$, independently of x and the ${\varphi}_{i}$s. Given a DDH oracle, we just query it with $(g,{g}^{y},{g}^{{\varphi}_{1}},{g}^{x{\varphi}_{1}})$ to decide with non-negligible success probability which is the case.

**“DAA solution** ⇒ **DDH solution”:** When facing the DDH problem, we obtain as input a tuple $(g,{g}^{{\varphi}_{1}},{g}^{x},{g}^{y})$, where either $y={\varphi}_{1}x\phantom{\rule{0.277778em}{0ex}}\mathrm{mod}\phantom{\rule{0.277778em}{0ex}}q$, or y has been chosen uniformly at random from $\{1,\dots ,q-1\}$, independently of x and ${\varphi}_{1}$. Choosing another random ${\varphi}_{2}\in \{1,\dots ,q-1\}$, we can compute the input needed for a DAA attacker. Running a successful DAA attacker with this input, we immediately obtain the desired DDH attacker.
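The second reduction can be sketched concretely: from a DDH tuple $(g,g^{\varphi_1},g^{x},g^{y})$ we assemble a DAA input by using $g^{x}$ as the candidate value and deriving the second automorphism as $\varphi_2 := \varphi_1 r$ for a random $r$. The toy group parameters below are illustrative assumptions (the same insecure sizes as before):

```python
import secrets

# Toy subgroup of Z_p^* (insecure sizes; illustration only): p = 2q + 1, <g> of order q.
Q, P, G = 1019, 2039, 4

def daa_input_from_ddh(g, A, B, C):
    """Given a DDH tuple (g, A, B, C) = (g, g^phi1, g^x, g^y), where either
    y = phi1*x mod q or y is random, assemble an input for a DAA attacker:
    (g, candidate, [(g^{phi_i}, g^{x*phi_i}) for i = 1, 2])."""
    r = secrets.randbelow(Q - 1) + 1             # defines phi2 := phi1 * r mod q
    pairs = [(A, C), (pow(A, r, P), pow(C, r, P))]
    return g, B, pairs

# Sanity check in the "real" case y = phi1 * x mod q: each pair (u, w)
# satisfies w = u^x, i.e., w is the image of the candidate's exponent under phi_i.
phi1 = secrets.randbelow(Q - 1) + 1
x = secrets.randbelow(Q - 1) + 1
ddh_real = (G, pow(G, phi1, P), pow(G, x, P), pow(G, phi1 * x % Q, P))
g, cand, pairs = daa_input_from_ddh(*ddh_real)
for (gp, gxp) in pairs:
    assert pow(gp, x, P) == gxp
```

If instead $y$ is random, the implicit DAA secret $y/\varphi_1$ is uniform and independent of the candidate $g^{x}$, so the two DDH cases map exactly onto the two DAA cases.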

A two-party key establishment protocol building on the DAA assumption is presented in Figure 1. The figure describes the operations to be performed by instance ${\Pi}_{i}^{{s}_{i}}$ of ${U}_{i}$. For the sake of readability we name the users trying to establish a common key ${U}_{0}$ and ${U}_{1}$, and here, as in the sequel, we often omit making explicit the identifiers ${s}_{i}$ of the instances ${\Pi}_{i}^{{s}_{i}}$ involved in the protocol execution and just write ${\mathsf{sid}}_{i}$ instead of ${\mathsf{sid}}_{i}^{{s}_{i}}$, for instance. The common reference string is denoted by $\rho $, and for a commitment to a value x involving random choices r we write ${C}_{\rho}(x;r)$. Finally, S denotes the subgroup generators, which are to be fixed prior to the protocol execution by means of $\mathsf{DomPar}$ (and may also be included in the CRS $\rho $).

In the subsequent section we prove the following result:

**Proposition** **1** (Security of the Two-Party Protocol)

**.** Assume that for each ppt algorithm $\mathcal{A}$, its advantage ${\mathsf{Adv}}_{\mathcal{A}}^{\mathsf{Sig}}$ of achieving an existential forgery under an adaptive chosen-message attack for the underlying signature scheme, and ${\mathsf{Adv}}_{\mathcal{A}}^{\mathsf{DAA}}$, its advantage of solving DAA, can be bounded by a negligible function (in ℓ). Then the protocol in Figure 1 is a correct and secure two-party key establishment protocol fulfilling key integrity.

In Figure 2, we describe the group key establishment protocol obtained from a given two-party key establishment protocol $\mathsf{2}-\mathsf{AKE}$ via the compiler from [22]. We note here that, given the result of Proposition 1, we can apply [22, Theorem 1] (which, as noted by Nam et al. in [29], is only valid if the underlying two-party construction fulfills integrity) to obtain our desired security result:

**Corollary** **1** (Security of the n-Party Protocol)

**.** Denoting the two-party key establishment protocol in Figure 1 by $\mathsf{2}-\mathsf{AKE}$, the protocol described in Figure 2 is a secure group key establishment fulfilling key integrity.

#### 3.2. Security Analysis for the Two-Party Case: Proof of Proposition 1

Correctness and Integrity. Due to the collision-resistance of the family $\mathcal{F}$, all oracles that accept with identical session identifier use the same index value $\mathrm{UH}\left(K\right)$, and therewith also obtain the same session key and have identical ${\mathsf{pid}}_{i}$-values, with overwhelming probability.

Security. Let ${q}_{s}$ and ${q}_{t}$ denote the (polynomially bounded) number of adversarial queries to the $\mathsf{Send}$ and $\mathsf{Test}$ oracle, respectively.

We consider a simulator simulating all oracles and instances for the adversary. The proof is thus set up as a sequence of experiments or games, where from game to game the simulator’s behavior deviates from the previous one in a certain controlled way. We follow standard notation and denote by $\mathsf{Adv}(\mathcal{A},{G}_{i})$ the advantage of the adversary when confronted with Game i and by $\mathsf{Succ}(\mathcal{A},{G}_{i})$ the success probability of $\mathcal{A}$ winning in Game i. As usual, the security parameter will be denoted by ℓ.

**Game 0.** All oracles are simulated as defined in the model. Thus, $\mathsf{Adv}(\mathcal{A},{G}_{0})$ is exactly ${\mathsf{Adv}}_{\mathcal{A}}$ and $\mathsf{Succ}(\mathcal{A},{G}_{0})$ is the probability of violating the security of our key exchange protocol.

**Game 1.** In this game, the simulator keeps a list with entries $(i,M,{\sigma}_{M})$ for every message M and corresponding signature ${\sigma}_{M}$ he has produced and returned to the adversary $\mathcal{A}$ in a Round 2 message following a $\mathsf{Send}$ query.

By $\mathsf{Forge}$ we denote the event that $\mathcal{A}$ queries the $\mathsf{Send}$ oracle with a message M containing a valid signature ${\sigma}_{M}$ of an uncorrupted principal ${U}_{i}$ such that $(i,M,{\sigma}_{M})$ is not contained in the simulator’s list. If the event $\mathsf{Forge}$ occurs, we abort the simulation and count the adversary $\mathcal{A}$ as successful in breaking the security of the protocol. Thus,
$$\left|\mathsf{Succ}(\mathcal{A},{G}_{1})-\mathsf{Succ}(\mathcal{A},{G}_{0})\right|\le P\left(\mathsf{Forge}\right). \qquad (1)$$

**Lemma** **1.** If the signature scheme used in the above protocol is existentially unforgeable under adaptive chosen-message attacks, then $P\left(\mathsf{Forge}\right)$ is negligible: $P\left(\mathsf{Forge}\right)\le \left|\mathcal{P}\right|\cdot {\mathsf{Adv}}_{\mathcal{A}}^{\mathsf{Sig}}$.

**Proof.** Any ppt adversary $\mathcal{A}$ provoking the event $\mathsf{Forge}$ can be turned into an attacker against the underlying signature scheme by means of our simulator: The simulator obtains the public verification key $PK$ and access to a signing oracle. In the initialization phase of the protocol, the simulator assigns the key $PK$ uniformly at random to one of the at most $\left|\mathcal{P}\right|$ users the adversary can involve. Whenever during the subsequent simulation a signature for this user has to be generated, the simulator queries the signing oracle.

If $\mathcal{A}$ comes up with a message/signature pair that is not stored in the simulator’s list, the simulator returns this message as existential forgery. If $\mathcal{A}$ does not come up with such a message, the simulator outputs ⊥. Having chosen the party ${U}_{i}$ uniformly at random, the simulator’s success probability for an existential forgery is at least $1/\left|\mathcal{P}\right|\cdot P\left(\mathsf{Forge}\right)$, and we get $P\left(\mathsf{Forge}\right)\le \left|\mathcal{P}\right|\cdot {\mathsf{Adv}}_{\mathcal{A}}^{\mathsf{Sig}}$. □

Thus, from Equation (1), we get
$$\left|\mathsf{Adv}(\mathcal{A},{G}_{1})-\mathsf{Adv}(\mathcal{A},{G}_{0})\right|\le \left|\mathcal{P}\right|\cdot {\mathsf{Adv}}_{\mathcal{A}}^{\mathsf{Sig}}. \qquad (2)$$

**Game 2.** Now the simulation of the $\mathsf{Test}$ oracle is modified, so that, on input of a fresh instance, it will always output an element selected uniformly at random in the key space. Thus, $\mathsf{Adv}(\mathcal{A},{G}_{2})=0.$

Suppose that $\mathcal{A}$ is able to distinguish between Game 2 and Game 1. We construct an attacker D that breaks the DAA assumption, using $\mathcal{A}$ as a black box. The attacker D will start by setting up the instances with key pairs for the signature scheme and receive a DAA instance as a challenge. Further, D will choose an index $a\in \{1,\dots ,{q}_{t}\}$ uniformly at random and select two values $u,v\in \{1,\dots ,{q}_{s}\}$ independently and uniformly at random, subject to the condition $u\ne v$. Then the adversary $\mathcal{A}$ is started. D will simulate the model as in Game 1, except for the $u\mathrm{th}$ and $v\mathrm{th}$ instances activated by the adversary $\mathcal{A}$ and the answers to the $\mathsf{Test}$ query. For the $u\mathrm{th}$ and $v\mathrm{th}$ instances activated by $\mathcal{A}$, the messages will be constructed from the DAA challenge. If these two instances do not end up in the same session, D aborts the simulation and starts anew. The same happens if $\mathcal{A}$ does not direct its $a\mathrm{th}$ $\mathsf{Test}$ query to one of these two instances.

D will simulate the $\mathsf{Test}$ oracle as follows: The first $a-1$ queries of $\mathsf{Test}$ will be answered with the real session key, in the $a\mathrm{th}$ query, D will return the challenge, and from query $a+1$ on, D will always answer with a random element.
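This hybrid simulation of the $\mathsf{Test}$ oracle can be sketched schematically in Python; the function names and the representation of keys as byte strings are illustrative assumptions, not part of the paper's model:

```python
import secrets

def make_test_oracle(a, challenge, real_keys, key_bytes=32):
    """Schematic hybrid Test oracle used by the distinguisher D: queries before
    the a-th are answered with the real session key (as in Game 1), the a-th
    query is answered with the DAA challenge value, and all later queries are
    answered with fresh random keys (as in Game 2)."""
    counter = {"n": 0}
    def test(instance):
        counter["n"] += 1
        if counter["n"] < a:
            return real_keys[instance]          # answer as in Game 1
        if counter["n"] == a:
            return challenge                    # embed the challenge here
        return secrets.token_bytes(key_bytes)   # answer as in Game 2
    return test
```

If the challenge is real, the oracle matches hybrid $a$; if it is random, it matches hybrid $a-1$, which is what makes the standard hybrid argument below go through.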

By a standard hybrid argument, D will win the challenge in $1/{q}_{t}$ of the cases where $\mathcal{A}$ distinguished Game 1 and Game 2. Excluding the necessary aborts (namely, if the instances that were chosen were not those used in the $a\mathrm{th}$ query of $\mathsf{Test}$), we have:
$$\left|\mathsf{Adv}(\mathcal{A},{G}_{1})-\mathsf{Adv}(\mathcal{A},{G}_{2})\right|\le {q}_{t}\cdot {q}_{s}^{2}\cdot {\mathsf{Adv}}_{D}^{\mathsf{DAA}}. \qquad (3)$$

Combining Equations (2) and (3) and using $\mathsf{Adv}(\mathcal{A},{G}_{2})=0$ yields the desired negligible upper bound for ${\mathsf{Adv}}_{\mathcal{A}}$:
$${\mathsf{Adv}}_{\mathcal{A}}\le \left|\mathcal{P}\right|\cdot {\mathsf{Adv}}_{\mathcal{A}}^{\mathsf{Sig}}+{q}_{t}\cdot {q}_{s}^{2}\cdot {\mathsf{Adv}}_{D}^{\mathsf{DAA}}.$$