1. Introduction
Shor demonstrated that solving both the integer factorization problem (IFP) and the discrete logarithm problem (DLP) is theoretically tractable in polynomial time [1]. Both Rivest–Shamir–Adleman (RSA) public-key cryptography and elliptic curve cryptography (ECC) are widely used, and their security depends on the IFP and DLP, respectively. In recent years, research and development of quantum computers have progressed rapidly; for example, noisy intermediate-scale quantum computers are already in practical use. In this context, research, development, and standardization projects for post-quantum cryptography (PQC) are ongoing, and since system migration generally takes time, preparation for migration to PQC is a significant issue. The PQC standardization process was started by the National Institute of Standards and Technology (NIST) in 2016 [2]. Several cryptosystems have been proposed for the NIST PQC project, including lattice-based, code-based, and hash-based cryptosystems. The multivariate public-key cryptosystem (MPKC) is one of the cryptosystems proposed for the NIST PQC project. At the end of the third round, NIST selected four candidates to be standardized, as shown in Table 1, and moved four candidates to the fourth-round evaluation, as shown in Table 2. Moreover, NIST issued a request for proposals of digital signature schemes with short signatures and fast verification [3]. MPKCs are often more efficient than other public-key cryptosystems, primarily as digital signature schemes, as described in the subsequent paragraph; therefore, researching the security of MPKCs remains important.
An MPKC is basically an asymmetric cryptosystem built from a trapdoor one-way multivariate (quadratic) polynomial map $\mathcal{F}$ over a finite field ${\mathbb{F}}_{q}$. Let $\mathcal{F}:{\mathbb{F}}_{q}^{n}\to {\mathbb{F}}_{q}^{m}$ be a (quadratic) polynomial map whose inverse can be computed easily, and let $\mathcal{S}:{\mathbb{F}}_{q}^{n}\to {\mathbb{F}}_{q}^{n}$ and $\mathcal{T}:{\mathbb{F}}_{q}^{m}\to {\mathbb{F}}_{q}^{m}$ be two randomly selected invertible affine linear maps. The secret key consists of $\mathcal{F}$, $\mathcal{S}$, and $\mathcal{T}$. The public key is the composite map $\mathcal{P}:{\mathbb{F}}_{q}^{n}\to {\mathbb{F}}_{q}^{m}$ defined by $\mathcal{P}=\mathcal{T}\circ \mathcal{F}\circ \mathcal{S}$. The public key can be regarded as a set $\mathcal{P}=\{{p}_{1},\dots ,{p}_{m}\}$ of m (quadratic) polynomials in n variables, where each ${p}_{i}$ is a nonlinear (quadratic) polynomial.
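As a sanity check, the composition $\mathcal{P}=\mathcal{T}\circ \mathcal{F}\circ \mathcal{S}$ can be sketched in code. The field size, dimensions, central map, and affine maps below are made-up toy values, not parameters of any real scheme; the check verifies that the composed public map still behaves as a quadratic map (all third-order finite differences vanish).

```python
import random

q = 31          # a small prime field, as in type III/VI of the Fukuoka MQ challenge
n = 3           # toy dimension (illustrative only)

# toy central quadratic map F: F_q^3 -> F_q^3 (coefficients are made up)
def F(x):
    x1, x2, x3 = x
    return ((x1 * x1 + x2) % q, (x1 * x3 + 2 * x2) % q, (x2 * x3 + x1 + 5) % q)

# two fixed invertible affine maps S and T (matrices chosen to be invertible mod 31)
A_S = [[1, 2, 0], [0, 1, 3], [0, 0, 1]]; b_S = [4, 1, 7]
A_T = [[1, 0, 5], [2, 1, 0], [0, 0, 1]]; b_T = [3, 3, 0]

def affine(A, b, x):
    return tuple((sum(A[i][j] * x[j] for j in range(n)) + b[i]) % q for i in range(n))

def P(x):                      # public map P = T o F o S
    return affine(A_T, b_T, F(affine(A_S, b_S, x)))

def add(u, v):
    return tuple((a + b) % q for a, b in zip(u, v))

def diff(f, a):                # discrete derivative x -> f(x + a) - f(x)
    return lambda x: tuple((u - v) % q for u, v in zip(f(add(x, a)), f(x)))

# P is again quadratic, so every third-order finite difference is zero.
random.seed(0)
for _ in range(20):
    x, a, b, c = [tuple(random.randrange(q) for _ in range(n)) for _ in range(4)]
    assert diff(diff(diff(P, a), b), c)(x) == (0, 0, 0)
print("P = T o F o S behaves as a quadratic map")
```

Composing with the affine maps hides the structure of the central map while preserving the total degree, which is exactly why the public key can be published as m quadratic polynomials.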
Multivariate quadratic (MQ) problem: Find a solution $x=\left({x}_{1},\dots ,{x}_{n}\right)\in {\mathbb{F}}_{q}^{n}$ that satisfies the system of (quadratic) polynomial equations ${p}_{1}\left(x\right)=\cdots ={p}_{m}\left(x\right)=0.$
Then, the MQ problem is closely related to the attack that forges signatures for the MPKC.
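For intuition, the MQ problem for a tiny made-up instance over ${\mathbb{F}}_{2}$ can be solved by exhaustive search; this baseline is feasible only for very small n, which is the point of the hardness assumption.

```python
from itertools import product

# A toy MQ instance over F_2 with n = 3 variables and m = 3 equations
# (the coefficients are made up for illustration).
def p1(x1, x2, x3): return (x1 * x2 + x3 + 1) % 2
def p2(x1, x2, x3): return (x2 * x3 + x1 + 1) % 2
def p3(x1, x2, x3): return (x1 * x3 + x2 + 1) % 2

# Exhaustive search over all of F_2^3.
solutions = [x for x in product(range(2), repeat=3)
             if p1(*x) == p2(*x) == p3(*x) == 0]
print(solutions)
```

A forger facing an MPKC signature scheme has to solve exactly such a system, with n in the tens and q up to 256, where exhaustive search is hopeless and the algebraic methods below become relevant.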
Isomorphism of polynomials (IP) problem: Let $\mathcal{A}$ and $\mathcal{B}$ be two polynomial maps from ${\mathbb{F}}_{q}^{n}$ to ${\mathbb{F}}_{q}^{m}$. Find two invertible affine linear maps $\mathcal{S}$: ${\mathbb{F}}_{q}^{n}\to {\mathbb{F}}_{q}^{n}$ and $\mathcal{T}$: ${\mathbb{F}}_{q}^{m}\to {\mathbb{F}}_{q}^{m}$ such that $\mathcal{B}=\mathcal{T}\circ \mathcal{A}\circ \mathcal{S}$.
Then, the IP problem is closely related to the attack for finding secret keys for the MPKC.
One of the most well-known public-key cryptosystems based on multivariate polynomials over a finite field was proposed by Matsumoto and Imai [12]. Patarin [13] later demonstrated that the Matsumoto–Imai cryptosystem is insecure, proposed the hidden field equation (HFE) public-key cryptosystem by repairing their cryptosystem [14], and designed the Oil and Vinegar (OV) scheme [15]. There are several variations of the HFE and OV schemes; e.g., Kipnis et al. proposed the unbalanced OV (UOV) scheme [16], as described in Section 2.5.
Several MPKCs were proposed for the NIST PQC project [17], e.g., the GeMSS [18], LUOV [19], MQDSS [20], and Rainbow [21] schemes. Ding et al. found a forgery attack on LUOV [22], and Kales and Zaverucha found a forgery attack on MQDSS [23]. At the end of the second round, NIST selected Rainbow as a third-round finalist and moved GeMSS to the alternate candidates [24].
MinRank problem: Let r be a positive integer and ${M}_{1},\dots ,{M}_{k}\in {\mathbb{F}}_{q}^{m\times n}$ be k matrices. Find ${x}_{1},\dots ,{x}_{k}\in {\mathbb{F}}_{q}$ such that $\left({x}_{1},\dots ,{x}_{k}\right)\ne 0$ and $\mathrm{rank}\left({\sum}_{i=1}^{k}{x}_{i}{M}_{i}\right)\le r$.
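A tiny made-up MinRank instance can likewise be solved by exhaustive search; `rank_mod` is a straightforward Gaussian elimination over ${\mathbb{F}}_{q}$, and the matrices below are constructed so that low-rank combinations exist.

```python
from itertools import product

q = 3  # a small prime field for illustration

def rank_mod(M, q):
    """Rank of a matrix over F_q via Gaussian elimination."""
    M = [row[:] for row in M]
    rank, rows, cols = 0, len(M), len(M[0])
    for c in range(cols):
        piv = next((r for r in range(rank, rows) if M[r][c] % q), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c], -1, q)
        M[rank] = [v * inv % q for v in M[rank]]
        for r in range(rows):
            if r != rank and M[r][c] % q:
                f = M[r][c]
                M[r] = [(M[r][j] - f * M[rank][j]) % q for j in range(cols)]
        rank += 1
    return rank

# Two toy 3x3 matrices over F_3 (made up so that a rank <= 1 combination exists).
M1 = [[1, 1, 1], [0, 1, 0], [0, 0, 0]]
M2 = [[2, 2, 2], [0, 1, 0], [0, 0, 0]]
r = 1

def combo(x1, x2):
    return [[(x1 * M1[i][j] + x2 * M2[i][j]) % q for j in range(3)] for i in range(3)]

solutions = [(x1, x2) for x1, x2 in product(range(q), repeat=2)
             if (x1, x2) != (0, 0) and rank_mod(combo(x1, x2), q) <= r]
print(solutions)
```

Real key-recovery attacks search for such low-rank combinations in spaces far too large for enumeration, which is why the reduction to the MQ problem mentioned next matters.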
The MinRank problem can be reduced to the MQ problem [25,26,27]. By solving the MinRank problem, Tao et al. found a key recovery attack on GeMSS [28], and Beullens found a key recovery attack on Rainbow [29,30].
NIST reported that Rainbow, a third-round finalist digital signature scheme, had efficient signing and verification and a very short signature size [31]. Beullens et al. also demonstrated that the (U)OV scheme performs comparably to the algorithms selected by NIST [32].
As noted above, the security of an MPKC is highly dependent on the hardness of the MQ problem because general multivariate polynomial equations can be transformed into MQ polynomial equations by increasing the number of variables and equations. To break a cryptosystem, we translate its underlying algebraic structure into a system of multivariate polynomial equations. There are three well-known algebraic approaches to solving the MQ problem: the extended linearization (XL) algorithm proposed by Courtois et al. [33] and the F4 and F5 algorithms proposed by Faugère [34,35]. The XL algorithm is described in Section 2.2. Gröbner bases algorithms are described in Section 2.3 and Section 3.1, and the F4 algorithm is described in Section 3.2.
In addition to theoretical evaluations of computational complexity, practical evaluations are also crucial in cryptographic research, as demonstrated by the many efforts addressing the RSA [36], ECC [37], and lattice challenges [38]. The Fukuoka MQ challenge project [39,40] was started in 2015 to evaluate the security of the MQ problem. In this project, the MQ problems were classified into encryption and digital-signature schemes. Each scheme was then classified into three categories according to the number of quadratic equations (m), the number of variables (n), and the characteristic of the finite field. The encryption schemes were classified into types I to III, which correspond to the condition $m=2n$ over ${\mathbb{F}}_{2}$, ${\mathbb{F}}_{256}$, and ${\mathbb{F}}_{31}$, respectively. The digital-signature schemes were classified into types IV to VI, which correspond to the condition $n\approx 1.5m$ over ${\mathbb{F}}_{2}$, ${\mathbb{F}}_{256}$, and ${\mathbb{F}}_{31}$, respectively. Up to the time of writing this paper, all the best records in the Fukuoka MQ challenge, except for type IV, have been set by variants of the XL and F4 algorithms. For example, the authors improved the F4-style algorithm and set new records for both types II and III, as described below, but a variant of the XL algorithm later surpassed the type III record.
The F4 algorithm proposed by Faugère is an improvement of Buchberger's algorithm for computing Gröbner bases [41,42], as described in Section 3.2. In Buchberger's algorithm, it is fundamental to compute the S-polynomial of two polynomials, as described in Section 3.1. A critical pair is defined by a set of data (two polynomials, the least common multiple ($\mathrm{LCM}$) of their leading terms, and two associated monomials) required to compute the S-polynomial, as described in Section 2.3. The F4 algorithm computes many S-polynomials simultaneously using Gaussian elimination. A variant of the F4 algorithm involving these matrix operations is referred to as an F4-style algorithm in this paper. There are several variants of the F4-style algorithm. For example, Joux and Vitse [43] designed an efficient variant algorithm to compute Gröbner bases for similar polynomial systems. Additionally, Makarim and Stevens [44] proposed a variant, the M4GB algorithm, that can reduce both the leading and lower terms of a polynomial. Using the M4GB algorithm, they set the best record for the Fukuoka MQ challenge of type VI with up to 20 equations ($m=20$) at the time of writing this paper.
Recently, Ito et al. [45] also proposed a variant algorithm that solved the Fukuoka MQ challenge for both types II and III with up to 37 equations ($m=37$) and set the best record for type II at the time of writing. In their paper, the following selection strategy for critical pairs was proposed: (a) a set of critical pairs is partitioned into smaller subsets ${C}_{1},\dots ,{C}_{k}$ such that $|{C}_{i}|=256$ $(1\le i\le k-1)$; (b) Gaussian elimination is performed on the Macaulay matrix associated with each subset; and (c) the remaining subsets are omitted if some S-polynomials are reduced to zero. Herein, we refer to (a) as the subdividing method and (c) as the removal method. Their strategy was validated only in the following two situations: systems of MQ polynomial equations associated with encryption schemes, i.e., the $m=2n$ case, and the $|{C}_{i}|=256$ case. Thus, in this paper, we propose several types of partitions related to the subdividing method and focus on their validity for solving systems of MQ polynomial equations associated with digital-signature schemes, i.e., the $n>m$ case. We evaluate the performance of the proposed methods for the $m=n+1$ case only because we focus on evaluating the performance of the proposed methods themselves. In other words, before executing the F4-style algorithm combined with the proposed methods, we assume that random or specific values have already been substituted for $n-m+1$ variables in the system, according to the hybrid approach [46].
Our contribution. In general, the size of a matrix affects the computational complexity of Gaussian elimination. Reducing the number of critical pairs is essential because they determine the size of the Macaulay matrix. First, we propose three basic subdividing methods, SD1, SD2, and SD3, which use different types of partitions of a set of critical pairs. Then, we integrate both the proposed subdividing methods and the removal method into the OpenF4 library [47] and compare their software performance with that of the original library using settings similar to those of types V and VI of the Fukuoka MQ challenge, i.e., $(n,m)=(9,10),\dots ,(15,16)$ over ${\mathbb{F}}_{256}$ and $(n,m)=(9,10),\dots ,(16,17)$ over ${\mathbb{F}}_{31}$. To validate the removal method, we then verify that neither a temporary basis nor a critical pair of a higher degree arises from unused critical pairs in omitted subsets. Here, ${D}_{\mathrm{max}}$ denotes the highest degree of critical pairs appearing in the Gröbner bases computation or the F4-style algorithm. The process by which the degree of a critical pair reaches ${D}_{\mathrm{max}}$ for the first time is referred to as the first half, and the remaining process is referred to as the second half. Our experiments show that a combination of two different basic methods (i.e., SD3 followed by SD1) is faster than all other methods because of the difference between the first and second halves of the computation. Furthermore, our experiments show that the number of critical pairs that generate a reduction to zero for the first time is approximately constant under the condition $m=n+1$, in the sense that a similar number is obtained with high probability. We also propose two derived subdividing methods (SD4 and SD5) for the first half. The experimental results show that SD4 followed by SD1 is the fastest method and that SD5 is as fast as SD4 as long as $n<16$ over ${\mathbb{F}}_{256}$ and $n<17$ over ${\mathbb{F}}_{31}$ hold. Moreover, we propose an extrapolation outside the scope of the experiments for further research. Our findings make a unique contribution toward improving the security evaluation of MPKCs.
Organization. The remainder of this paper is organized as follows. First, we introduce basic notations and preliminaries in Section 2. Next, we present the background of Gröbner bases computation in Section 3. Then, we describe the proposed methods in Section 3.4. Afterward, we present the performance results of the proposed methods in Section 4. Finally, the paper is concluded in Section 5.
2. Preliminaries
In the following, we define the notations and terminology used in this paper.
2.1. Notations
Let $\mathbb{N}$ be the set of all natural numbers, $\mathbb{Z}$ the set of all integers, ${\mathbb{Z}}_{\ge 0}$ the set of all nonnegative integers, and ${\mathbb{F}}_{q}$ a finite field with q elements. $\mathcal{R}$ denotes a polynomial ring of n variables over ${\mathbb{F}}_{q}$, i.e., $\mathcal{R}={\mathbb{F}}_{q}[{x}_{1},\dots ,{x}_{n}]={\mathbb{F}}_{q}\left[x\right]$.
A monomial ${x}^{a}$ is defined by a product ${x}_{1}^{{a}_{1}}\cdots {x}_{n}^{{a}_{n}}$, where $a=({a}_{1},\dots ,{a}_{n})$ is an element of ${\mathbb{Z}}_{\ge 0}^{n}$. Furthermore, $\mathcal{M}$ denotes the set of all monomials in $\mathcal{R}$, i.e., $\mathcal{M}=\{{x}_{1}^{{a}_{1}}\cdots {x}_{n}^{{a}_{n}}\mid {a}_{1},\dots ,{a}_{n}\in {\mathbb{Z}}_{\ge 0}\}$. For $c\in {\mathbb{F}}_{q}$ and $u\in \mathcal{M}$, we call the product $cu$ a term and c the coefficient of u. $\mathcal{T}$ denotes the set of all terms, i.e., $\mathcal{T}=\{cu\mid c\in {\mathbb{F}}_{q},u\in \mathcal{M}\}$. For a polynomial $f={\sum}_{i}{c}_{i}{u}_{i}\in \mathcal{R}$ with ${c}_{i}\in {\mathbb{F}}_{q}\backslash \left\{0\right\}$ and ${u}_{i}\in \mathcal{M}$, $\mathcal{T}\left(f\right)$ denotes the set $\left\{{c}_{i}{u}_{i}\right\}$, and $\mathcal{M}\left(f\right)$ denotes the set $\left\{{u}_{i}\right\}$.
The total degree of ${x}^{a}$ is defined by the sum ${a}_{1}+\cdots +{a}_{n}$, which is denoted by $deg\left({x}^{a}\right)$. The total degree of f is defined by $max\{deg(u)\mid u\in \mathcal{M}(f\left)\right\}$ and is denoted by $deg\left(f\right)$.
Definition 1. A total order ≺ on $\mathcal{M}$ is called a monomial order if the following conditions hold:
 (i)
∀s, t, $u\in \mathcal{M}$, t ⪯ s ⇒ $ut$ ⪯ $us$;
 (ii)
∀$t\in \mathcal{M}$, 1 ⪯ t.
Here, if $s\prec t$ or $s=t$ holds, then we write s ⪯ t.
Definition 2. The degree reverse lexicographical order ≺ is defined by ${x}^{a}\prec {x}^{b}$ if and only if $deg\left({x}^{a}\right)<deg\left({x}^{b}\right)$, or $deg\left({x}^{a}\right)=deg\left({x}^{b}\right)$ and the rightmost nonzero entry of $a-b\in {\mathbb{Z}}^{n}$ is positive. For example, in ${\mathbb{F}}_{q}[{x}_{1},{x}_{2},{x}_{3}]$, ${x}_{3}\prec {x}_{2}\prec {x}_{1}\prec {x}_{3}^{2}\prec {x}_{2}{x}_{3}\prec {x}_{1}{x}_{3}\prec {x}_{2}^{2}\prec {x}_{1}{x}_{2}\prec {x}_{1}^{2}$.
The degree reverse lexicographical order ≺ is fixed throughout this paper as the monomial order.
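The degree reverse lexicographical order can be sketched as a comparison function on exponent vectors; the resulting ordering of the degree-2 monomials in three variables is consistent with Definition 2.

```python
from functools import cmp_to_key

def degrevlex_cmp(a, b):
    """Compare exponent vectors a, b (tuples) under degrevlex with x1 > x2 > ... > xn.
    Returns a negative value if x^a precedes x^b (i.e., x^a is smaller)."""
    if sum(a) != sum(b):
        return sum(a) - sum(b)          # lower total degree comes first
    for ai, bi in zip(reversed(a), reversed(b)):   # scan from the last variable
        if ai != bi:
            # a is larger when the rightmost differing exponent is smaller
            return bi - ai
    return 0

# Degree-2 monomials in F_q[x1, x2, x3] as exponent vectors:
mons = [(2, 0, 0), (1, 1, 0), (1, 0, 1), (0, 2, 0), (0, 1, 1), (0, 0, 2)]
print(sorted(mons, key=cmp_to_key(degrevlex_cmp)))
```

Sorting ascending yields $x_3^2, x_2x_3, x_1x_3, x_2^2, x_1x_2, x_1^2$, matching the example in Definition 2; note in particular that $x_1x_3 \prec x_2^2$, which distinguishes degrevlex from the degree lexicographic order.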
For a polynomial $f\in \mathcal{R}$, $\mathrm{LM}\left(f\right)$ denotes the leading monomial of f, i.e., $\mathrm{LM}\left(f\right)={max}_{\prec}\mathcal{M}\left(f\right)$, and $\mathrm{LT}\left(f\right)$ denotes the leading term of f, i.e., $\mathrm{LT}\left(f\right)={max}_{\prec}\mathcal{T}\left(f\right)$. In addition, $\mathrm{LC}\left(f\right)$ denotes the corresponding coefficient for $\mathrm{LT}\left(f\right)$. A polynomial f is called monic if $\mathrm{LC}\left(f\right)=1$.
For a subset $F\subset \mathcal{R}$, $\mathrm{LM}\left(F\right)$ denotes the set of leading monomials of polynomials $f\in F$, i.e., $\mathrm{LM}\left(F\right)=\left\{\mathrm{LM}\right(f)\mid f\in F\}$.
For two monomials ${x}^{a}={x}_{1}^{{a}_{1}}\dots {x}_{n}^{{a}_{n}}$ and ${x}^{b}={x}_{1}^{{b}_{1}}\dots {x}_{n}^{{b}_{n}}$ where $a=({a}_{1},\dots ,{a}_{n})$ and $b=({b}_{1},\dots ,{b}_{n})\in {\mathbb{Z}}_{\ge 0}^{n}$, their corresponding least common multiple (LCM) and the greatest common divisor (GCD) are defined as $\mathrm{LCM}({x}^{a},{x}^{b})={x}^{c}$ where $c=(max({a}_{1},{b}_{1}),\dots ,$$max({a}_{n},{b}_{n}))$, and $\mathrm{GCD}({x}^{a},{x}^{b})={x}^{c}$ where $c=(min({a}_{1},{b}_{1}),\dots ,min({a}_{n},{b}_{n}))$.
For two subsets A and B of $\mathcal{M}$, $A\prec B$ is defined to hold if $a\prec b$ holds for all $a\in A$ and $b\in B$.
2.2. The XL Algorithm
Let D be the parameter of the XL algorithm. Let $\{{p}_{1},\dots ,{p}_{m}\}$ be a set of polynomials over a finite field ${\mathbb{F}}_{q}$. The XL algorithm executes the following steps:
Multiply: Generate all the products $\left({\prod}_{j=1}^{k}{x}_{{i}_{j}}\right)\cdot {p}_{i}$ with $k\le D-deg\left({p}_{i}\right)$.
Linearize: Consider each monomial in the ${x}_{i}$ of degree $\le D$ as a new variable and perform Gaussian elimination on the linear equations obtained in step 1. The ordering on the monomials must be such that all the terms containing one variable (say ${x}_{1}$) are eliminated last.
Solve: Assume that step 2 yields at least one univariate equation in the powers of ${x}_{1}$. Solve this equation over ${\mathbb{F}}_{q}$.
Repeat: Simplify the equations and repeat the process to find the values of the other variables.
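The Multiply step and the size of the resulting linearized system can be sketched as follows; the parameters n, m, and D below are illustrative values, not recommendations.

```python
from itertools import combinations_with_replacement
from math import comb

# Illustrative parameters: n variables, m quadratic equations, target degree D.
n, m, D = 4, 5, 4

def monomials_up_to(n, d):
    """All exponent vectors of total degree <= d in n variables."""
    mons = []
    for k in range(d + 1):
        for c in combinations_with_replacement(range(n), k):
            e = [0] * n
            for i in c:
                e[i] += 1
            mons.append(tuple(e))
    return mons

# Step 1 (Multiply) for quadratic p_i: one multiplier monomial of degree <= D - 2
# per equation, so the linearized system has m * #multipliers rows.
multipliers = monomials_up_to(n, D - 2)
columns = monomials_up_to(n, D)     # each monomial of degree <= D is a new variable

rows = m * len(multipliers)
print(f"XL matrix at D={D}: {rows} rows x {len(columns)} columns")
# The column count equals C(n+D, D), the quantity in the complexity estimate below.
assert len(columns) == comb(n + D, D)
```

This makes the parameter-selection difficulty concrete: the column count $\binom{n+D}{D}$ grows quickly with D, and D must be large enough for step 2 to produce a univariate equation.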
Step 1 is regarded as the construction of the Macaulay matrix with the ordering specified in step 2, as described in Section 3.2. It is difficult to estimate the parameter D in advance. The computational complexity of the XL algorithm is roughly ${\binom{n+D}{D}}^{\omega}$, where n is the number of variables and $2<\omega \le 3$ is the linear algebra constant [48].
2.3. Gröbner Bases
The concept of Gröbner bases was introduced by Buchberger [49] in 1979. Computing Gröbner bases is a standard tool for solving simultaneous equations. This section presents the definitions and notations used for Gröbner bases. Methods to compute Gröbner bases are explained in Section 3.
Here, $\langle G\rangle $ denotes the ideal generated by a subset $G\subset \mathcal{R}$. $G\subset I$ is called a basis of an ideal I if $I=\langle G\rangle $ holds. We refer to G as a Gröbner basis of I if, for all $f\in I$, there exists $g\in G$ such that $\mathrm{LM}\left(g\right)\mid \mathrm{LM}\left(f\right)$. To compute Gröbner bases, we need to compute polynomials called S-polynomials.
Here, let $f\in \mathcal{R}$ and $G\subset \mathcal{R}$. We say that f is reducible by G if there exist $u\in \mathcal{M}\left(f\right)$ and $g\in G$ such that $\mathrm{LM}\left(g\right)\mid u$. In this case, we can eliminate the term $cu$ from f by computing $f-\frac{cu}{\mathrm{LT}\left(g\right)}g$, where c is the coefficient of u in f, and g is said to be a reductor of u. If f is not reducible by G, then f is said to be a normal form with respect to G. Repeatedly reducing f using polynomials of G to obtain a normal form is referred to as normalization, and the function normalizing f using G is denoted by $\mathrm{NF}(f,G)$.
For example, let $f={x}_{1}{x}_{2}+{x}_{3}$, ${g}_{1}={x}_{1}-{x}_{3}$, ${g}_{2}={x}_{2}{x}_{3}+1\in {\mathbb{F}}_{q}[{x}_{1},{x}_{2},{x}_{3}]$ and $G=\{{g}_{1},{g}_{2}\}$. First, the term ${x}_{1}{x}_{2}$ in f is divisible by $\mathrm{LM}\left({g}_{1}\right)={x}_{1}$, and ${f}_{1}=f-{x}_{2}{g}_{1}={x}_{2}{x}_{3}+{x}_{3}$ is obtained. Next, the term ${x}_{2}{x}_{3}$ in ${f}_{1}$ is divisible by $\mathrm{LM}\left({g}_{2}\right)={x}_{2}{x}_{3}$, and ${f}_{2}={f}_{1}-{g}_{2}={x}_{3}-1$ is obtained. Finally, ${f}_{2}$ is the normal form of f by G since ${f}_{2}$ is not reducible by G.
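This reduction can be reproduced with a small dict-based polynomial arithmetic, with monomials as exponent tuples and coefficients in ${\mathbb{F}}_{q}$; the value q = 7 below is an arbitrary choice, since the example is valid over any prime field.

```python
from functools import cmp_to_key

q = 7  # an arbitrary small prime; the example works over any F_q

def cmp_drl(a, b):
    """degrevlex comparison of exponent tuples (x1 > x2 > x3)."""
    if sum(a) != sum(b):
        return sum(a) - sum(b)
    for ai, bi in zip(reversed(a), reversed(b)):
        if ai != bi:
            return bi - ai
    return 0

KEY = cmp_to_key(cmp_drl)

def LM(f): return max(f, key=KEY)

def divides(u, v): return all(ui <= vi for ui, vi in zip(u, v))

def sub_mul(f, c, t, g):
    """Return f - c * x^t * g over F_q."""
    h = dict(f)
    for e, ce in g.items():
        e2 = tuple(ei + ti for ei, ti in zip(e, t))
        h[e2] = (h.get(e2, 0) - c * ce) % q
        if h[e2] == 0:
            del h[e2]
    return h

def NF(f, G):
    """Normal form of f with respect to G: reduce any reducible monomial."""
    changed = True
    while changed:
        changed = False
        for u in sorted(f, key=KEY, reverse=True):
            for g in G:
                if divides(LM(g), u):
                    t = tuple(ui - vi for ui, vi in zip(u, LM(g)))
                    c = (f[u] * pow(g[LM(g)], -1, q)) % q
                    f = sub_mul(f, c, t, g)
                    changed = True
                    break
            if changed:
                break
    return f

# f = x1x2 + x3, g1 = x1 - x3, g2 = x2x3 + 1
f  = {(1, 1, 0): 1, (0, 0, 1): 1}
g1 = {(1, 0, 0): 1, (0, 0, 1): q - 1}
g2 = {(0, 1, 1): 1, (0, 0, 0): 1}
print(NF(f, [g1, g2]))   # x3 - 1, i.e., {(0,0,1): 1, (0,0,0): q - 1}
```

The two reduction steps traced in the text (first by $g_1$, then by $g_2$) are exactly the two iterations the loop performs before no monomial of f remains reducible.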
A critical pair of two polynomials $({g}_{1},{g}_{2})$ is defined as the tuple $(\mathrm{LCM}(\mathrm{LM}\left({g}_{1}\right),\mathrm{LM}\left({g}_{2}\right)),{t}_{1},{g}_{1},{t}_{2},{g}_{2})\in \mathcal{R}\times \mathcal{M}\times \mathcal{R}\times \mathcal{M}\times \mathcal{R}$ such that ${t}_{1}\mathrm{LM}\left({g}_{1}\right)={t}_{2}\mathrm{LM}\left({g}_{2}\right)=\mathrm{LCM}(\mathrm{LM}\left({g}_{1}\right),\mathrm{LM}\left({g}_{2}\right))$.
For example, let ${h}_{1}={x}_{1}{x}_{2}+{x}_{3}$, ${h}_{2}={x}_{2}{x}_{3}+1\in {\mathbb{F}}_{q}[{x}_{1},{x}_{2},{x}_{3}]$. Then $\mathrm{LM}\left({h}_{1}\right)={x}_{1}{x}_{2}$, $\mathrm{LM}\left({h}_{2}\right)={x}_{2}{x}_{3}$, and $\mathrm{LCM}(\mathrm{LM}\left({h}_{1}\right),\mathrm{LM}\left({h}_{2}\right))={x}_{1}{x}_{2}{x}_{3}$; hence, ${t}_{1}={x}_{3}$ and ${t}_{2}={x}_{1}$.
For a critical pair p of $({g}_{1},{g}_{2})$, $\mathrm{GCD}\left(p\right)$, $\mathrm{LCM}\left(p\right)$, and $deg\left(p\right)$ denote $\mathrm{GCD}\left(p\right)=\mathrm{GCD}(\mathrm{LM}\left({g}_{1}\right)$, $\mathrm{LM}\left({g}_{2}\right))$, $\mathrm{LCM}\left(p\right)=\mathrm{LCM}(\mathrm{LM}\left({g}_{1}\right)$, $\mathrm{LM}\left({g}_{2}\right))$, and $deg\left(p\right)=deg\left(\mathrm{LCM}\right(p\left)\right)$, respectively.
The S-polynomial, $\mathrm{Spoly}\left(p\right)$ (or $\mathrm{Spoly}({g}_{1},{g}_{2})$), of a critical pair p of $({g}_{1},{g}_{2})$ is defined as follows: $\mathrm{Spoly}\left(p\right)={v}_{1}{g}_{1}-{v}_{2}{g}_{2}$, where ${v}_{i}={t}_{i}/\mathrm{LC}\left({g}_{i}\right)$ for $i=1,2$. $\mathrm{Left}\left(p\right)$ and $\mathrm{Right}\left(p\right)$ denote $\mathrm{Left}\left(p\right)={v}_{1}{g}_{1}$ and $\mathrm{Right}\left(p\right)={v}_{2}{g}_{2}$, respectively.
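Continuing the example, the S-polynomial of $h_1$ and $h_2$ can be computed directly from exponent tuples; q = 7 below is arbitrary, and both inputs are monic, so the $\mathrm{LC}$ divisions are trivial.

```python
q = 7

h1 = {(1, 1, 0): 1, (0, 0, 1): 1}   # x1x2 + x3,  LM = x1x2
h2 = {(0, 1, 1): 1, (0, 0, 0): 1}   # x2x3 + 1,   LM = x2x3

def lcm(u, v):
    return tuple(max(a, b) for a, b in zip(u, v))

def shift(f, t, c):
    """Multiply polynomial f by the term c * x^t."""
    return {tuple(e + s for e, s in zip(mon, t)): cf * c % q for mon, cf in f.items()}

def spoly(f, lm_f, g, lm_g):
    L = lcm(lm_f, lm_g)
    t1 = tuple(a - b for a, b in zip(L, lm_f))   # t1 * LM(f) = LCM
    t2 = tuple(a - b for a, b in zip(L, lm_g))   # t2 * LM(g) = LCM
    left, right = shift(f, t1, 1), shift(g, t2, 1)   # both inputs are monic
    s = dict(left)
    for mon, cf in right.items():               # s = Left(p) - Right(p)
        s[mon] = (s.get(mon, 0) - cf) % q
        if s[mon] == 0:
            del s[mon]
    return s

print(spoly(h1, (1, 1, 0), h2, (0, 1, 1)))   # x3*h1 - x1*h2 = x3^2 - x1
```

The leading terms $x_1x_2x_3$ cancel by construction, leaving $x_3^2 - x_1$; this cancellation of leading terms is the entire purpose of the S-polynomial.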
2.4. MQ Problem
Let F be a subset $\{{f}_{1},\dots ,{f}_{m}\}\subset \mathcal{R}$, where each ${f}_{j}\in F$ is a quadratic polynomial (i.e., $deg\left({f}_{j}\right)=2$). The MQ problem is to compute a common zero $({x}_{1},\dots ,{x}_{n})\in {\mathbb{F}}_{q}^{n}$ of the system of quadratic polynomial equations defined by F, i.e., ${f}_{1}\left(x\right)=\cdots ={f}_{m}\left(x\right)=0$.
The MQ problem is discussed frequently in terms of MPKCs because representative MPKCs, e.g., UOV, Rainbow, and GeMSS, use quadratic polynomials. These schemes are signature schemes and employ a system of MQ polynomial equations under the condition where $n>m$.
The computation of Gröbner bases is a fundamental tool for solving the MQ problem. If $n<m$, the system of F tends to have no solution or exactly one solution. If the system of F has no solution, $\langle 1\rangle $ can be obtained as a Gröbner basis of $\langle F\rangle $. If it has a solution, $\alpha =({\alpha}_{1},\dots ,{\alpha}_{n})\in {\mathbb{F}}_{q}^{n}$, $\langle {x}_{1}{\alpha}_{1},\dots ,{x}_{n}{\alpha}_{n}\rangle $ can be obtained as Gröbner bases of $\langle F\rangle $. Thus, it is easy to obtain the solution of the system of F from the Gröbner bases of $\langle F\rangle $.
If $n>m$, it is generally necessary to compute Gröbner bases with respect to the lexicographic order using a Gröbner basis conversion algorithm, e.g., FGLM [50]. Another method is to convert the system associated with F into a system of multivariate polynomial equations by substituting random values for some variables and then computing its Gröbner bases. The process is repeated with other random values if there is no solution. This method is called the hybrid approach, and it typically substitutes random values for $n-m+1$ variables. Hence, it is important to solve the MQ problem with $m=n+1$.
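The substitution step of the hybrid approach can be sketched with the same dict representation of polynomials; the instance, the variable indexing, and the substituted values below are all made up for illustration.

```python
import random

q = 31

def substitute(f, fixed):
    """Substitute field values for chosen variables: fixed maps var index -> value."""
    g = {}
    for mon, c in f.items():
        new_mon = list(mon)
        for i, val in fixed.items():
            c = c * pow(val, mon[i], q) % q   # evaluate x_i^{a_i} at x_i = val
            new_mon[i] = 0
        new_mon = tuple(new_mon)
        g[new_mon] = (g.get(new_mon, 0) + c) % q
    return {mon: c for mon, c in g.items() if c}

random.seed(1)
n, m = 4, 3
# one toy equation in 4 variables: x1*x4 + x2^2 + 3*x3 + 5
f = {(1, 0, 0, 1): 1, (0, 2, 0, 0): 1, (0, 0, 1, 0): 3, (0, 0, 0, 0): 5}
# fix the trailing n - m + 1 = 2 variables (indices m-1 .. n-1) at random values
fixed = {i: random.randrange(q) for i in range(m - 1, n)}
g = substitute(f, fixed)
assert all(mon[i] == 0 for mon in g for i in fixed)   # x3, x4 no longer appear
print(fixed, g)
```

After applying this to all m equations, the remaining system has m equations in $m-1$ variables, which is exactly the $m=n+1$ shape targeted in the rest of the paper.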
2.5. The (Unbalanced) Oil and Vinegar Signature Scheme
Let $K={\mathbb{F}}_{q}$ be a finite field. Let o, $v\in \mathbb{N}$, $m=o$, and $n=o+v$. For a message $y=({y}_{1},\dots ,{y}_{m})\in {K}^{m}$ to be signed, we define a signature $x=({x}_{1},\dots ,{x}_{n})\in {K}^{n}$ of y as follows.
2.5.1. Key Generation
The secret key consists of two parts:
The public key is the following m quadratic equations:
 (i)
Let $A=({a}_{1},\dots ,{a}_{o},{a}_{1}^{\prime},\dots ,{a}_{v}^{\prime})\in {K}^{n}$.
 (ii)
Compute $x={s}^{-1}\left(A\right)$.
 (iii)
We have m quadratic equations in n variables:
2.5.2. Signature Generation
 (i)
We generate ${a}_{1},\dots ,{a}_{o},{a}_{1}^{\prime},\dots ,{a}_{v}^{\prime}\in K$ such that (1) holds.
 (ii)
Compute $x={s}^{-1}\left(A\right)$ where $A=({a}_{1},\dots ,{a}_{o},{a}_{1}^{\prime},\dots ,{a}_{v}^{\prime})$.
2.5.3. Signature Verification
If (2) is satisfied, then we regard the signature x of y as valid.
If (2) can be solved, then we can find another solution ${x}^{\prime}=({x}_{1}^{\prime},\dots ,{x}_{n}^{\prime})$, i.e., another signature ${x}^{\prime}$ of y. Therefore, the difficulty of forging signatures can be related to the difficulty of the MQ problem.
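A toy end-to-end OV signing flow can be sketched as follows, under simplifying assumptions that are not in the text: purely quadratic central polynomials with no linear or constant terms, a linear (rather than affine) secret map s, and tiny made-up parameters. The key property it demonstrates is that, once the vinegar variables are fixed, each central polynomial is linear in the oil variables.

```python
import random

q, o, v = 31, 2, 3
n = o + v                      # oil variables: indices 0..o-1, vinegar: o..n-1
rng = random.Random(42)

def solve_mod(A, b):
    """Solve the square system A x = b over F_q; return None if A is singular."""
    k = len(A)
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(k):
        piv = next((r for r in range(c, k) if m[r][c] % q), None)
        if piv is None:
            return None
        m[c], m[piv] = m[piv], m[c]
        inv = pow(m[c][c], -1, q)
        m[c] = [x * inv % q for x in m[c]]
        for r in range(k):
            if r != c and m[r][c]:
                f = m[r][c]
                m[r] = [(x - f * y) % q for x, y in zip(m[r], m[c])]
    return [row[k] for row in m]

# Secret key: o central quadratics Q_k with a zero oil-x-oil block, and an
# invertible linear map S standing in for the secret map s.
Q = [[[0 if i < o and j < o else rng.randrange(q) for j in range(n)]
      for i in range(n)] for _ in range(o)]
while True:
    S = [[rng.randrange(q) for _ in range(n)] for _ in range(n)]
    if solve_mod(S, [0] * n) is not None:   # invertible iff the solve succeeds
        break

def central(k, A):             # f_k(A) = A^T Q_k A
    return sum(Q[k][i][j] * A[i] * A[j] for i in range(n) for j in range(n)) % q

def public_eval(x):            # public polynomials p_k(x) = f_k(S x)
    A = [sum(S[i][j] * x[j] for j in range(n)) % q for i in range(n)]
    return [central(k, A) for k in range(o)]

def sign(y):
    while True:
        vin = [rng.randrange(q) for _ in range(v)]
        # with vinegar fixed, each f_k is linear in the o oil variables
        M = [[sum((Q[k][i][o + j] + Q[k][o + j][i]) * vin[j]
                  for j in range(v)) % q for i in range(o)] for k in range(o)]
        c = [sum(Q[k][o + i][o + j] * vin[i] * vin[j]
                 for i in range(v) for j in range(v)) % q for k in range(o)]
        oil = solve_mod(M, [(y[k] - c[k]) % q for k in range(o)])
        if oil is not None:              # retry with fresh vinegar if singular
            A = oil + vin
            return solve_mod(S, A)       # x = s^{-1}(A)

y = [7, 19]                     # "message" to sign (arbitrary values in F_31)
x = sign(y)
assert public_eval(x) == y      # verification: p(x) = y
print("signature", x, "verifies")
```

A forger who sees only `public_eval` faces the MQ problem; the signer escapes it because the missing oil-by-oil terms turn signing into linear algebra.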
3. Materials and Methods
In this section, we introduce three algorithms to compute Gröbner bases: the Buchberger-style algorithm, the F4 algorithm proposed by Faugère, and the F4-style algorithm proposed by Ito et al., which is the primary focus of this paper.
3.1. BuchbergerStyle Algorithm
In 1979, Buchberger introduced the concept of Gröbner bases and proposed an algorithm to compute them. He found that Gröbner bases can be computed by repeatedly generating S-polynomials and reducing them. Algorithm 1 describes the Buchberger-style algorithm to compute Gröbner bases. First, we generate a polynomial set G and a set of critical pairs P from the input polynomials F. We then repeat the following steps until P is empty: one critical pair p is selected from P, an S-polynomial s is generated, s is reduced to the polynomial h by G, and G and P are updated from $(G,P,h)$ if h is a nonzero polynomial. The Update function (Algorithm 2) is frequently used to update G and P while omitting some redundant critical pairs [51]. If a polynomial h is reduced to zero, then G and P are not updated; thus, a critical pair that generates an S-polynomial that reduces to zero is redundant. Here, the critical pair selection method that selects the pair with the lowest $\mathrm{LCM}$ (referred to as the normal strategy) is frequently employed. If the degree reverse lexicographic order is used as the monomial order, then the critical pair with the lowest degree is naturally selected under the normal strategy.
Algorithm 1 Buchberger-style algorithm
Input: $F=\{{f}_{1},\dots ,{f}_{m}\}\subset \mathcal{R}$. Output: A Gröbner basis of $\langle F\rangle $.
1: $(G,P)\leftarrow (\varnothing ,\varnothing )$, $i\leftarrow 0$
2: for $f\in F$ do
3:  $(G,P)\leftarrow$ Update$(G,P,f)$
4: end for
5: while $P\ne \varnothing $ do
6:  $i\leftarrow i+1$
7:  ${p}_{i}\leftarrow$ an element of P
8:  $P\leftarrow P\backslash \left\{{p}_{i}\right\}$
9:  ${s}_{i}\leftarrow \mathrm{Spoly}\left({p}_{i}\right)$
10:  ${h}_{i}\leftarrow \mathrm{NF}({s}_{i},G)$
11:  if ${h}_{i}\ne 0$ then
12:   $(G,P)\leftarrow$ Update$(G,P,{h}_{i})$
13:  end if
14: end while
15: return G

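Algorithm 1's main loop can be sketched as a minimal Buchberger-style computation over ${\mathbb{F}}_{q}$ with dict-based polynomials and the degrevlex order. This is a simplified version that processes every critical pair and omits the Update criteria (every pair is kept, zero reductions are simply discarded), which is enough for tiny examples but far from the optimized algorithms discussed in this paper; q = 7 and the input system are made up.

```python
from functools import cmp_to_key
from itertools import combinations

q = 7

def cmp_drl(a, b):
    if sum(a) != sum(b):
        return sum(a) - sum(b)
    for ai, bi in zip(reversed(a), reversed(b)):
        if ai != bi:
            return bi - ai
    return 0

KEY = cmp_to_key(cmp_drl)
LM = lambda f: max(f, key=KEY)

def monic(f):
    inv = pow(f[LM(f)], -1, q)
    return {m: c * inv % q for m, c in f.items()}

def divides(u, v): return all(a <= b for a, b in zip(u, v))

def addmul(f, c, t, g):   # f + c * x^t * g
    h = dict(f)
    for m, cg in g.items():
        m2 = tuple(a + b for a, b in zip(m, t))
        h[m2] = (h.get(m2, 0) + c * cg) % q
        if h[m2] == 0:
            del h[m2]
    return h

def NF(f, G):
    """Top-reduction only: enough to detect reductions to zero."""
    while f:
        u = LM(f)
        g = next((g for g in G if divides(LM(g), u)), None)
        if g is None:
            return f
        t = tuple(a - b for a, b in zip(u, LM(g)))
        f = addmul(f, (-f[u]) % q, t, g)    # every g in G is monic
    return f

def spoly(f, g):
    L = tuple(max(a, b) for a, b in zip(LM(f), LM(g)))
    tf = tuple(a - b for a, b in zip(L, LM(f)))
    tg = tuple(a - b for a, b in zip(L, LM(g)))
    return addmul(addmul({}, 1, tf, f), q - 1, tg, g)

def buchberger(F):
    G = [monic(f) for f in F if f]
    pairs = list(combinations(range(len(G)), 2))
    while pairs:
        i, j = pairs.pop()
        h = NF(spoly(G[i], G[j]), G)
        if h:                               # nonzero normal form: extend G
            G.append(monic(h))
            pairs += [(k, len(G) - 1) for k in range(len(G) - 1)]
    return G

# Example in F_7[x, y]: F = {x^2 - 1, x*y - 1}
f1 = {(2, 0): 1, (0, 0): q - 1}
f2 = {(1, 1): 1, (0, 0): q - 1}
G = buchberger([f1, f2])
assert NF({(1, 0): 1, (0, 1): q - 1}, G) == {}   # x - y lies in the ideal
print(sorted(LM(g) for g in G))
```

On this input, the first S-polynomial reduces to $x-y$ and a later one to $y^2-1$; every remaining pair reduces to zero, illustrating why skipping redundant pairs (the subject of Sections 3.3 and beyond) dominates the cost in practice.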
Algorithm 2 Update
Input: $G\subset \mathcal{R}$, a set P of critical pairs, and $h\in \mathcal{R}$. Output: ${G}_{\mathrm{new}}$ and ${P}_{\mathrm{new}}$.
1: $h\leftarrow \frac{h}{\mathrm{LC}\left(h\right)}$
2: $C\leftarrow \{(h,g)\mid g\in G\}$, $D\leftarrow \varnothing $
3: while $C\ne \varnothing $ do
4:  $p\leftarrow$ an element of C, $C\leftarrow C\backslash \left\{p\right\}$
5:  if $\mathrm{GCD}\left(p\right)=1$ or ${}^{\forall}{p}^{\prime}\in C\cup D$, $\mathrm{LCM}\left({p}^{\prime}\right)\nmid \mathrm{LCM}\left(p\right)$ then
6:   $D\leftarrow D\cup \left\{p\right\}$
7:  end if
8: end while
9: ${P}_{\mathrm{new}}\leftarrow \{p\in D\mid \mathrm{GCD}\left(p\right)\ne 1\}$
10: for $p=({g}_{1},{g}_{2})\in P$ do
11:  if $\mathrm{LM}\left(h\right)\nmid \mathrm{LCM}\left(p\right)$ or
12:   $\mathrm{LCM}(\mathrm{LM}\left(h\right),\mathrm{LM}\left({g}_{1}\right))=\mathrm{LCM}\left(p\right)$ or
13:   $\mathrm{LCM}(\mathrm{LM}\left(h\right),\mathrm{LM}\left({g}_{2}\right))=\mathrm{LCM}\left(p\right)$ then
14:   ${P}_{\mathrm{new}}\leftarrow {P}_{\mathrm{new}}\cup \left\{p\right\}$
15:  end if
16: end for
17: ${G}_{\mathrm{new}}\leftarrow \{g\in G\mid \mathrm{LM}\left(h\right)\nmid \mathrm{LM}\left(g\right)\}\cup \left\{h\right\}$
18: return $({G}_{\mathrm{new}},{P}_{\mathrm{new}})$

3.2. F4-Style Algorithm
The F4 algorithm, a representative algorithm for computing Gröbner bases, was proposed by Faugère in 1999; it reduces many S-polynomials simultaneously. Herein, we present an F4-style algorithm with this feature.
Here, let G be a subset of $\mathcal{R}$. A matrix in which the coefficients of the polynomials in G are arranged in columns corresponding to their monomials is referred to as a Macaulay matrix of G. G is said to be in row echelon form if $\mathrm{LC}\left(g\right)=1$ for all $g\in G$ and $\mathrm{LM}\left({g}_{1}\right)\ne \mathrm{LM}\left({g}_{2}\right)$ for all ${g}_{1}\ne {g}_{2}\in G$. The F4-style algorithm reduces polynomials by computing row echelon forms of Macaulay matrices. For example, let $f={x}_{1}{x}_{2}+{x}_{3}$, ${g}_{1}={x}_{1}-{x}_{3}$, ${g}_{2}={x}_{2}{x}_{3}+1\in {\mathbb{F}}_{q}[{x}_{1},{x}_{2},{x}_{3}]$ as in the fourth paragraph of Section 2.3. We use ${x}_{2}{g}_{1}$ and ${g}_{2}$ to compute $\mathrm{NF}(f,\{{g}_{1},{g}_{2}\})={x}_{3}-1$. With columns indexed by the monomials $({x}_{1}{x}_{2},{x}_{2}{x}_{3},{x}_{3},1)$, the Macaulay matrix M of $\{f,{x}_{2}{g}_{1},{g}_{2}\}$ is given as follows:
$M=\left(\begin{array}{cccc}1&0&1&0\\ 1&-1&0&0\\ 0&1&0&1\end{array}\right).$
In addition, a row echelon form $\tilde{M}$ of M is given as follows:
$\tilde{M}=\left(\begin{array}{cccc}1&0&1&0\\ 0&1&1&0\\ 0&0&1&-1\end{array}\right).$
We can obtain ${x}_{3}-1$ from the last row of $\tilde{M}$.
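The Macaulay matrix of this example and its row echelon form can be computed directly; q = 7 below is an arbitrary prime, since the example is field-independent.

```python
# Rows are f, x2*g1, g2 with columns indexed by the monomials (x1x2, x2x3, x3, 1).
q = 7

cols = ["x1x2", "x2x3", "x3", "1"]
M = [
    [1, 0, 1, 0],        # f     = x1x2 + x3
    [1, q - 1, 0, 0],    # x2*g1 = x1x2 - x2x3
    [0, 1, 0, 1],        # g2    = x2x3 + 1
]

def row_echelon(M, q):
    """Row echelon form over F_q with monic pivot rows."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] % q), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], -1, q)
        M[r] = [v * inv % q for v in M[r]]
        for i in range(r + 1, len(M)):
            if M[i][c]:
                f = M[i][c]
                M[i] = [(v - f * w) % q for v, w in zip(M[i], M[r])]
        r += 1
    return M

E = row_echelon(M, q)
print(E[-1])   # the row of x3 - 1, i.e., NF(f, {g1, g2})
```

The last row has its leading entry in the $x_3$ column and trailing entry $-1 \equiv q-1$, which is exactly the normal form $x_3 - 1$ obtained term by term in Section 2.3.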
The F4-style algorithm is described in Algorithm 3. The main process is described in lines 5 to 14, where some critical pairs are selected using the Select function (Algorithm 4), and the polynomials of the pairs are reduced using the Reduction function (Algorithm 5). The Select function selects critical pairs with the lowest degree on the basis of the normal strategy; in particular, the F4-style algorithm selects all critical pairs with the lowest degree. It returns the subset ${P}_{d}$ of P and the integer d such that $d=min\{deg(\mathrm{LCM}\left(p\right))\mid p\in P\}$ and ${P}_{d}=\{p\in P\mid deg\left(p\right)=d\}$. The Reduction function collects reductors to reduce the polynomials and computes the row echelon form of the polynomial set. In addition, the Simplify function (Algorithm 6) determines the reductor with the lowest degree from the polynomial sets obtained during the computation of the Gröbner bases.
Algorithm 3 F4-style algorithm
Input: $F=\{{f}_{1},\dots ,{f}_{m}\}\subset \mathcal{R}$. Output: A Gröbner basis of $\langle F\rangle $.
1: $(G,P)\leftarrow (\varnothing ,\varnothing )$, $i\leftarrow 0$
2: for $f\in F$ do
3:  $(G,P)\leftarrow$ Update$(G,P,f)$
4: end for
5: while $P\ne \varnothing $ do
6:  $i\leftarrow i+1$
7:  $({P}_{d},d)\leftarrow$ Select$\left(P\right)$
8:  $P\leftarrow P\backslash {P}_{d}$
9:  $L\leftarrow \{\mathrm{Left}\left(p\right)\mid p\in {P}_{d}\}\cup \{\mathrm{Right}\left(p\right)\mid p\in {P}_{d}\}$
10:  $({\tilde{{H}_{i}}}^{+},{H}_{i})\leftarrow$ Reduction$(L,G,{\left({H}_{j}\right)}_{j=1,\dots ,i-1})$
11:  for $h\in {\tilde{{H}_{i}}}^{+}$ do
12:   $(G,P)\leftarrow$ Update$(G,P,h)$
13:  end for
14: end while
15: return G

Algorithm 4 Select
Input: $P\subset \mathcal{R}\times \mathcal{R}$. Output: ${P}_{d}\subset P$ and $d\in \mathbb{N}$.
1: $d\leftarrow min\{deg(\mathrm{LCM}\left(p\right))\mid p\in P\}$
2: ${P}_{d}\leftarrow \{p\in P\mid deg\left(p\right)=d\}$
3: return $({P}_{d},d)$

Algorithm 5 Reduction
Input: $L\subset \mathcal{M}\times \mathcal{R}$, $G\subset \mathcal{R}$, and $\mathcal{H}={\left({H}_{j}\right)}_{j=1,\dots ,i-1}$, where ${H}_{j}\subset \mathcal{R}$. Output: ${\tilde{H}}^{+}$ and $H\subset \mathcal{R}$.
1: ${L}^{\prime}\leftarrow \{\mathrm{Simplify}(t,f,\mathcal{H})\mid (t,f)\in L\}$
2: $H\leftarrow \{t\cdot f\mid (t,f)\in {L}^{\prime}\}$
3: Done $\leftarrow \mathrm{LM}\left(H\right)$
4: while Done $\ne \mathcal{M}\left(H\right)$ do
5:  $u\leftarrow$ an element of $\mathcal{M}\left(H\right)\backslash$ Done
6:  Done $\leftarrow$ Done $\cup \left\{u\right\}$
7:  if ${}^{\exists}{g}_{1}\in G$ s.t. $\mathrm{LM}\left({g}_{1}\right)$ divides u then
8:   ${u}_{1}\leftarrow \frac{u}{\mathrm{LM}\left({g}_{1}\right)}$
9:   $({u}_{2},{g}_{2})\leftarrow \mathrm{Simplify}({u}_{1},{g}_{1},\mathcal{H})$
10:   $H\leftarrow H\cup \left\{{u}_{2}{g}_{2}\right\}$
11:  end if
12: end while
13: $\tilde{H}\leftarrow$ row echelon form of H
14: ${\tilde{H}}^{+}\leftarrow \{h\in \tilde{H}\mid \mathrm{LM}\left(h\right)\notin \mathrm{LM}\left(H\right)\}$
15: return $({\tilde{H}}^{+},H)$

Algorithm 6 Simplify
Input: $u\in \mathcal{M}$, $f\in \mathcal{R}$, and $\mathcal{H}={\left({H}_{j}\right)}_{j=1,\dots ,i-1}$, where ${H}_{j}\subset \mathcal{R}$. Output: $({u}_{\mathrm{new}},{f}_{\mathrm{new}})\in \mathcal{M}\times \mathcal{R}$.
1: for $t\in$ list of divisors of u do
2:  if ${}^{\exists}j$ s.t. $tf\in {H}_{j}$ then
3:   $\tilde{{H}_{j}}\leftarrow$ row echelon form of ${H}_{j}$
4:   $h\leftarrow$ an element of $\tilde{{H}_{j}}$ s.t. $\mathrm{LM}\left(h\right)=\mathrm{LM}\left(tf\right)$
5:   if $u\ne t$ then
6:    return $\mathrm{Simplify}(\frac{u}{t},h,\mathcal{H})$
7:   else
8:    return $(1,h)$
9:   end if
10:  end if
11: end for
12: return $(u,f)$

The computational complexity of the F4-style algorithm can be bounded above by the same order of magnitude as that of Gaussian elimination of the Macaulay matrix. The size of the Macaulay matrix of degree D is bounded above by the number of monomials of degree $\le D$ in n variables, which is equal to
$\binom{n+D}{D}$.
The computational complexity of Gaussian elimination is bounded above by ${N}^{\omega}$ if the matrix size is N ($2<\omega \le 3$ is the linear algebra constant). Let ${D}_{\mathrm{max}}$ denote the highest degree of critical pairs appearing in the Gröbner basis computation. Then, the computational complexity of the F4-style algorithm is roughly
$O\left({\binom{n+{D}_{\mathrm{max}}}{{D}_{\mathrm{max}}}}^{\omega}\right)$.
We can reduce the computational complexity by omitting redundant critical pairs.
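The bound above is easy to evaluate numerically; a short sketch (the choice $\omega = 2.81$, Strassen's exponent, is an illustrative value within $2<\omega\le 3$ and is our assumption, not a value fixed by the text):

```python
from math import comb

def macaulay_size_bound(n, D):
    """Number of monomials of degree <= D in n variables: C(n + D, D)."""
    return comb(n + D, D)

def f4_complexity_estimate(n, D_max, omega=2.81):
    """Rough upper bound N**omega on Gaussian elimination of the
    degree-D_max Macaulay matrix (omega chosen for illustration)."""
    return macaulay_size_bound(n, D_max) ** omega

# For n = 16 variables and D_max = 5:
N = macaulay_size_bound(16, 5)   # C(21, 5) = 20349
```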
3.3. The Algorithm Proposed by Ito et al.
Redundant critical pairs do not necessarily vanish after applying the Update function. Here, we introduce a method to omit many redundant pairs. We assume that the degree reverse lexicographic order is employed as a monomial order, and the normal strategy is used as the pair selection strategy in the Gröbner bases computation. When solving the MQ problem in the Gröbner bases computation, in many cases, the degree
d of the critical pairs changes, as described below.
Herein, the computation until the degree of the selected pair reaches ${D}_{\mathrm{max}}$ is referred to as the first half. In the first half of the computation, many redundant pairs are reduced to zero. When solving the MQ problem, Ito et al. found that if a critical pair of degree d is reduced to zero, all pairs of degree d stored at that time are also reduced to zero with a high probability. Thus, redundant critical pairs can be efficiently eliminated by ignoring all stored pairs of degree d after the critical pairs of degree d are reduced to zero. Algorithm 7 introduces the above method into Algorithm 3. In Algorithm 7, ${P}_{d}$ is the set of pairs with the lowest degree d that have not been tested. The subset ${P}^{\prime}$ contains critical pairs selected from ${P}_{d}$, and ${H}^{+}$ refers to the new polynomials obtained by reducing ${P}^{\prime}$. If the number of new polynomials in ${H}^{+}$ is less than the number of selected pairs in ${P}^{\prime}$, a reduction to zero has occurred, and ${P}_{d}$ is then deleted.
Algorithm 7 F4-style algorithm proposed by Ito et al.
Input: $F=\{{f}_{1},\dots ,{f}_{m}\}\subset \mathcal{R}$. Output: A Gröbner basis of $\langle F\rangle$.
1: $(G,P)\leftarrow (\varnothing ,\varnothing ),\ i\leftarrow 0,\ {D}_{\mathrm{max}}\leftarrow 0$
2: for $f\in F$ do
3: $(G,P)\leftarrow$ Update $(G,P,f)$
4: end for
5: while $P\ne \varnothing$ do
6: $({P}_{d},d)\leftarrow$ Select $\left(P\right)$
7: if ${D}_{\mathrm{max}}<d$ then
8: ${D}_{\mathrm{max}}\leftarrow d$
9: end if
10: while ${P}_{d}\ne \varnothing$ do
11: $i\leftarrow i+1$
12: ${P}^{\prime}\leftarrow$ a subset of ${P}_{d}$
13: $(P,{P}_{d})\leftarrow (P\backslash {P}^{\prime},{P}_{d}\backslash {P}^{\prime})$
14: $L\leftarrow \{\mathrm{Left}\left(p\right)\mid p\in {P}^{\prime}\}\cup \{\mathrm{Right}\left(p\right)\mid p\in {P}^{\prime}\}$
15: $({\tilde{{H}_{i}}}^{+},{H}_{i})\leftarrow \mathrm{Reduction}(L,G,{\left({H}_{j}\right)}_{j=1,\dots ,i-1})$
16: for $h\in {\tilde{{H}_{i}}}^{+}\backslash \left\{0\right\}$ do
17: $(G,P)\leftarrow$ Update $(G,P,h)$
18: end for
19: if ${}^{\exists}h\in {\tilde{{H}_{i}}}^{+}\backslash \left\{0\right\}$ s.t. $\deg\left(h\right)<d$ then
20: break
21: end if
22: if $|{\tilde{{H}_{i}}}^{+}|<|{P}^{\prime}|$ and ${D}_{\mathrm{max}}=d$ then
23: $(P,{P}_{d})\leftarrow (P\backslash {P}_{d},\varnothing )$
24: end if
25: end while
26: end while
27: return $G$

Note that Ito et al. stated that the proposed method was valid for MQ problems associated with encryption schemes, i.e., of type $m=2n$, but other MQ problems, including those of type $m=n+1$, were not discussed. Moreover, they fixed the number of selected pairs to $|{P}^{\prime}|=256$ when subdividing ${P}_{d}$. Hence, they did not guarantee that this subdividing method is optimal.
3.4. Proposed Methods
We explain the subdividing methods and the removal method in
Section 3.4.1 and
Section 3.4.2, respectively. The proposed methods were integrated into the F4-style algorithm as described in Algorithm 8. The OpenF4 library was used for these implementations; it is an open-source implementation of the F4-style algorithm and is thus suitable for this purpose.
Algorithm 8 F4-style algorithm integrating the proposed methods
Input: $F=\{{f}_{1},\dots ,{f}_{m}\}\subset \mathcal{R}$. Output: A basis of $\langle F\rangle$.
1: $(G,P)\leftarrow (\varnothing ,\varnothing ),\ i\leftarrow 0$
2: for $h\in F$ do
3: $(G,P)\leftarrow$ Update $(G,P,h)$
4: end for
5: while $P\ne \varnothing$ do
6: $({P}_{d},d)\leftarrow$ Select $\left(P\right)$
7: $P\leftarrow P\backslash {P}_{d}$
8: while ${P}_{d}\ne \varnothing$ do
9:
10: $\{{C}_{1},\dots ,{C}_{k}\}\leftarrow$ SubDividePd $\left({P}_{d}\right)$
11: for $l=1$ to $k$ do
12: $i\leftarrow i+1$
13: ${P}_{d}\leftarrow {P}_{d}\backslash {C}_{l}$
14: $L\leftarrow \{\mathrm{Left}\left({p}^{\prime}\right)\mid {p}^{\prime}\in {C}_{l}\}\cup \{\mathrm{Right}\left({p}^{\prime}\right)\mid {p}^{\prime}\in {C}_{l}\}$
15: $({\tilde{{H}_{i}}}^{+},{H}_{i})\leftarrow \mathrm{Reduction}(L,G,{\left({H}_{j}\right)}_{j=1,\dots ,i-1})$
16: for $h\in {\tilde{{H}_{i}}}^{+}\backslash \left\{0\right\}$ do
17: $(G,P)\leftarrow$ Update $(G,P,h)$
18: end for
19:
20: if $0\in {\tilde{{H}_{i}}}^{+}$ then
21: ${P}_{d}\leftarrow \varnothing$
22: break
23: end if
24: end for
25: end while
26: end while
27: return $G$

Algorithm 9 SubDividePd
Input: ${P}_{d}\subset P$ and $d\in \mathbb{N}$. Output: ${C}_{1},\dots ,{C}_{k}\subset {P}_{d}$.
1: ${P}_{d}={C}_{1}\sqcup {C}_{2}\sqcup \cdots \sqcup {C}_{k}$ (disjoint union) s.t. ${C}_{i}\prec {C}_{j}$ for $i<j$
2: return $\{{C}_{1},\dots ,{C}_{k}\}$

As mentioned above, the SelectPd function serves to select a subset ${P}_{d}$ of all the critical pairs at each step for the reduction part of the F4-style algorithm. Ito et al. proposed a method that subdivides ${P}_{d}$ into smaller subsets $\{{C}_{1},\dots ,{C}_{k}\}$, as described in Algorithm 9, and performs the Reduction and Update functions for each set ${C}_{i}$ consecutively as long as no S-polynomials are reduced to zero during the reduction. On the other hand, if some S-polynomials reduce to zero during the reduction of a set ${C}_{j}$ for the first time, this method ignores the remaining sets $\{{C}_{j+1},\dots ,{C}_{k}\}$ and removes them from all the critical pairs.
The authors confirmed that their method was effective in solving the MQ problems only under the condition where $m=2n$ and $|{C}_{i}|=256$, and they did not mention other types, especially $m=n+1$, or other subdividing methods.
In our experiments, as described in
Section 4.1, we generated the MQ problems (
$m=n+1$) with random polynomial coefficients to have at least one solution in the same manner as the Fukuoka MQ challenges ([
40], Algorithm 2 and Step 4 of Algorithm 1), and we assumed that
$\mathrm{LC}\left({f}_{j}\right)\ne 0$ for all input polynomials
${f}_{j}$ $(j=1,\dots ,m)$ because such polynomials are obtained with non-negligible probability for experimental purposes. Taking a change of variables into account, the probability is exactly
$1-{\{1-{(1-1/q)}^{m}\}}^{n}$. For example, it is close to 1 for
$q=31$ and
$(n,m)=(16,17)$.
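The probability above is easy to check numerically for the quoted parameters:

```python
def prob_nonzero_lc(q, n, m):
    """Probability 1 - (1 - (1 - 1/q)**m)**n that, allowing for a change
    of variables, all m input polynomials have nonzero leading coefficient."""
    return 1 - (1 - (1 - 1 / q) ** m) ** n

p = prob_nonzero_lc(q=31, n=16, m=17)
# close to 1 for q = 31 and (n, m) = (16, 17)
```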
3.4.1. Subdividing Methods
To solve the MQ problems, Ito et al. fixed the number of elements of each ${C}_{i}$ to 256, i.e., $|{C}_{i}|=256$. In our experiments, we propose three types of subdividing methods:
 SD1:
The number of elements in each ${C}_{i}$ ($i<k$) is fixed; only the last set ${C}_{k}$ may be smaller.
We set $|{C}_{i}|=128$, 256, 512, 768, 1024, 2048, and 4096.
 SD2:
The number of subdivided subsets is fixed.
We set $k=5$, 10, and 15.
 SD3:
The fraction of elements to be processed among those remaining in ${P}_{d}$ is fixed; i.e., $|{C}_{1}|=\max(\lfloor r|{P}_{d}|\rfloor ,1)$ and $|{C}_{i}|=\max(\lfloor r|{P}_{d}\backslash {\cup}_{l=1}^{i-1}{C}_{l}|\rfloor ,1)$ for $i>1$.
We set $r=1/5$, $1/10$, and $1/15$.
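The three subdividing strategies can be sketched on a plain list standing in for the (ordered) critical pairs of ${P}_{d}$; this is an illustration of the chunking rules only, not the authors' implementation.

```python
def sd1(P_d, c):
    """SD1: fixed chunk size c (the last chunk may be smaller)."""
    return [P_d[i:i + c] for i in range(0, len(P_d), c)]

def sd2(P_d, k):
    """SD2: fixed number k of chunks, as evenly sized as possible."""
    q, r = divmod(len(P_d), k)
    chunks, start = [], 0
    for i in range(k):
        size = q + (1 if i < r else 0)
        if size:
            chunks.append(P_d[start:start + size])
        start += size
    return chunks

def sd3(P_d, r):
    """SD3: each chunk takes a fraction r of the pairs still remaining,
    with a minimum size of 1: |C_i| = max(floor(r * remaining), 1)."""
    chunks, start = [], 0
    while start < len(P_d):
        size = max(int(r * (len(P_d) - start)), 1)
        chunks.append(P_d[start:start + size])
        start += size
    return chunks

pairs = list(range(10))                   # stand-in for sorted critical pairs
print([len(c) for c in sd1(pairs, 4)])    # → [4, 4, 2]
print([len(c) for c in sd2(pairs, 3)])    # → [4, 3, 3]
print([len(c) for c in sd3(pairs, 1/5)])  # chunk sizes shrink with the remainder
```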
Furthermore, we propose two subdividing methods based on SD1 in
Section 4.2.
3.4.2. A Removal Method
It is important to skip redundant critical pairs in the F4-style algorithm because reducing matrices of larger sizes takes extra time. To solve the MQ problems, defined as systems of m quadratic polynomial equations in n variables, Ito et al. experimentally confirmed that once a reduction to zero occurs for some critical pairs in ${P}^{\prime}\subset P$, nothing but reductions to zero are generated for all subsequently selected critical pairs in P in the case of $\mathcal{R}={\mathbb{F}}_{256}$ or ${\mathbb{F}}_{31}$ with the number of polynomials $m=2n$ and the number of variables $n=16,\dots ,25$.
We checked Hypothesis 1 through computational experiments.
Hypothesis 1. If a Macaulay matrix composed of critical pairs ${p}^{\prime}\in {P}^{\prime}\ (\subset P)$ has some reductions to zero, i.e., $0\in \tilde{H}$ in line 15 of Algorithm 8 with the normal strategy, then all remaining critical pairs $p\in P$ s.t. $\deg\left(\mathrm{LCM}\left(p\right)\right)=\deg\left(\mathrm{LCM}\left({p}^{\prime}\right)\right)$ will be reduced to zero with a high probability.
The difference between the measuring algorithm and the checking algorithm is as follows: in the algorithm measuring the software performance of the OpenF4 library and our methods, as defined in Algorithm 8, once a reduction to zero occurs, the remaining critical pairs in ${P}_{d}$ are removed. In other words, in that algorithm, the next ${P}_{d}$ is selected immediately after a reduction to zero. On the other hand, in the algorithm checking Hypothesis 1 as described above, we need to continue reducing all remaining critical pairs and monitor whether reductions to zero are generated consecutively after the first one. However, because the behavior of the checking algorithm needs to match that of the measuring one, the internal state just before processing the remaining critical pairs in the checking algorithm is reset to the state immediately after a reduction to zero.
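The reset described above can be sketched with a deep-copied snapshot, assuming the solver state is held in plain Python containers (a hypothetical simplification of the OpenF4 internals; `reduce_pair` is an assumed callback, not a real OpenF4 function):

```python
import copy

def check_hypothesis(state, remaining_pairs, reduce_pair):
    """After the first reduction to zero, keep reducing the remaining
    pairs of the same degree only to *observe* whether they also reduce
    to zero, then restore the snapshot so the checking run behaves
    exactly like the measuring run."""
    snapshot = copy.deepcopy(state)     # state right after the first zero
    zero_count = 0
    for p in remaining_pairs:           # monitoring pass only
        if reduce_pair(state, p) == 0:
            zero_count += 1
    state.clear()                       # discard the monitoring pass...
    state.update(snapshot)              # ...and restore the snapshot
    return zero_count == len(remaining_pairs)

# Toy check: a "state" dict and a reducer that always returns zero.
s = {"G": [1, 2]}
ok = check_hypothesis(s, ["p1", "p2"], lambda st, p: (st["G"].append(p), 0)[1])
# ok is True and s is restored to {"G": [1, 2]}
```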
To check Hypothesis 1, we solved MQ problems with random coefficients over ${\mathbb{F}}_{31}$ under the condition where $m=n+1$ and $n=9,\dots ,16$ using Algorithm 8 with SD1. Owing to processing times, one hundred samples were generated for each problem with $n<16$, and fifty samples for $n=16$. Furthermore, $|{C}_{i}|$ of SD1 was fixed to 1, 16, 32, 256, and 512 for $n=9,\dots ,12$; $n=13$; $n=14$; $n=15$; and $n=16$, respectively.
Our programs terminated normally with a probability of about 0.9. Thus, the experiments showed that Hypothesis 1 was valid with a probability of about 0.9. The remaining events, which occurred with a probability of approximately 0.1, corresponded to a warning from the OpenF4 library concerning the number of temporary bases. Although the warning was output, neither a temporary basis (i.e., an element of G, in line 17 of Algorithm 8) nor a critical pair of higher degree arose from the unused critical pairs in the omitted subsets. Moreover, the outputs of all problems contained the initial values with no errors.
3.5. System Architecture
Our experiments were performed on the systems shown in
Table 3 and
Table 4. In the case of
$(n,m)=(17,18)$ and
$(18,19)$ in
Appendix A1, the experiments were performed on the system shown in
Table 4.
Our software diagram is shown in
Figure 1.