1. Introduction
Transport Layer Security (TLS) is the primary protocol for securing internet communications, providing authentication, confidentiality, and integrity through public-key cryptography. While current deployments predominantly rely on RSA and elliptic-curve cryptography (ECC), other public-key paradigms such as code-based, lattice-based, and multivariate cryptosystems have also been proposed and studied in the literature [1]. Among these schemes, RSA and ECC in particular will be rendered insecure once large-scale quantum computers become practical [2]. This vulnerability arises from Shor's algorithm [3], which enables efficient factorization of large integers and computation of discrete logarithms, directly compromising RSA and ECC, and from Grover's algorithm [4], which accelerates exhaustive key search and effectively halves the security level of symmetric-key systems such as AES.
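For instance, Grover's search reduces the cost of brute-force key search quadratically, which in effect halves the bit-security of a symmetric key (a standard back-of-the-envelope estimate, not a result specific to this paper):

```latex
T_{\mathrm{classical}} = O\!\left(2^{k}\right),
\qquad
T_{\mathrm{Grover}} = O\!\left(\sqrt{2^{k}}\right) = O\!\left(2^{k/2}\right)
```

so a 128-bit key such as AES-128 retains roughly 64 bits of security against a quantum adversary.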
To address these threats, the U.S. National Institute of Standards and Technology (NIST) announced the selection of four post-quantum cryptography (PQC) algorithms for standardization in July 2022: CRYSTALS-KYBER [5], CRYSTALS-DILITHIUM [6], FALCON [7], and SPHINCS+ [8]. In March 2025, NIST selected HQC [9] as an additional standardized key-encapsulation mechanism (KEM), having also initiated the Additional Digital Signature Schemes process to expand its portfolio of quantum-resistant digital signatures [10].
In parallel, the Republic of Korea has standardized a suite of post-quantum algorithms collectively referred to as Korean post-quantum cryptography (KpqC) [11]. This set comprises two signature schemes, HAETAE [12] and AIMer [13], and two KEMs, SMAUG-T [14] and NTRU+ [15], selected through a domestic evaluation process to meet national security requirements. A comprehensive performance assessment of KpqC in real-world TLS/X.509 environments is therefore a critical step toward establishing both the applicability and the trustworthiness of the standard, as it directly informs feasibility and operational-cost evaluations in latency-sensitive and resource-constrained settings.
Migrating from classical cryptography to PQC within X.509-based Public Key Infrastructure (PKI) introduces two primary sources of performance overhead. Static overhead stems from significantly larger public keys and signatures, which increase certificate sizes, while dynamic overhead results from the higher computational costs of PQC primitives, affecting TLS handshake latency [16,17,18,19]. These costs are interdependent: the choice of signature scheme directly influences certificate size, and the selected KEM determines the runtime latency. Therefore, a unified evaluation considering both aspects is necessary to accurately characterize PQC migration trade-offs.
Although prior studies have benchmarked NIST-standardized PQC algorithms, the deployment-level performance of Korean PQC (KpqC) schemes in production-grade TLS/X.509 settings remains unexplored. The absence of side-by-side comparisons against both NIST PQC and ECC limits the practical guidance available for migration planning.
To close this gap, we present the first systematic evaluation of standardized KpqC algorithms—HAETAE, AIMer, SMAUG-T, and NTRU+—within the TLS/X.509 framework, implemented via the Open Quantum Safe (OQS) ecosystem. We integrated these schemes into OQS-enabled OpenSSL, enabling controlled comparisons against NIST PQC and ECC under computation-bound (localhost) and network-bound (LAN) conditions, including embedded-device and hybrid TLS configurations.
The remainder of this paper is organized as follows. Section 2 reviews background, Section 3 describes our integration and experimental setup, Section 4 presents results and analysis, and Section 5 concludes with future research directions.
Contributions
This paper makes the following contributions:
First publicly available integration of KpqC within a production-grade TLS stack. We integrate the standardized Korean post-quantum cryptography (KpqC) schemes HAETAE, AIMer, SMAUG-T, and NTRU+ into the widely deployed OpenSSL library via the Open Quantum Safe (OQS) framework. This implementation supports full TLS 1.3 handshakes, X.509 certificate issuance and validation, and hybrid certificate configurations, enabling reproducible experiments without altering the TLS protocol flow. The source code is publicly released at https://github.com/minjoo97/liboqs_KpqC and https://github.com/minjoo97/oqs-provider_KpqC (accessed on 16 September 2025), lowering the barrier for independent benchmarking and pilot deployments.
Comprehensive and controlled benchmarking of hybrid X.509 certificates. We conduct a unified, apples-to-apples performance evaluation that measures both static overhead (certificate size) and dynamic overhead (TLS 1.3 handshake latency). The benchmarking covers computation-bound (localhost) and network-bound (LAN) environments, as well as embedded-device scenarios, directly comparing KpqC with NIST PQC standards and classical ECC under identical cryptographic and network conditions.
Evidence-based deployment guidelines for PQC migration. Based on empirical results, we analyze the trade-offs among certificate size, handshake latency, and security level. Our findings show that the substantial computational overhead observed in isolated environments is markedly reduced under realistic network conditions. We present practical migration strategies, including hybrid certificate adoption and algorithm selection frameworks, to balance security objectives with operational performance requirements.
3. Proposed Method
This study quantitatively evaluates the static (certificate size) and dynamic (TLS 1.3 handshake latency) overheads introduced by deploying Korean post-quantum cryptography (KpqC) algorithms into production-grade PKI systems. Our methodology comprised two primary phases: (i) extending the Open Quantum Safe (OQS) framework to incorporate four standardized KpqC schemes, and (ii) conducting controlled benchmarking under diverse deployment conditions. The overall workflow is illustrated in Figure 2.
3.1. Extending the OQS Framework with KpqC Algorithms
To facilitate deployment-level evaluation, we extended the OQS framework by incorporating SMAUG-T, NTRU+, HAETAE, and AIMer into liboqs and the oqs-provider for OpenSSL 3.x. Rather than a direct code transplant, the process followed three design principles: (i) algorithmic fidelity, (ii) API interoperability, and (iii) performance-conscious implementation. This ensured that KpqC schemes became fully interoperable within the OQS ecosystem and directly comparable to NIST PQC algorithms under identical conditions.
API Uniformity: Each algorithm was encapsulated with thin wrappers adhering to the standardized liboqs interface, enabling transparent invocation by OQS-compatible applications. Signature schemes implemented the full API set (OQS_SIG_keypair, OQS_SIG_sign, OQS_SIG_verify), using liboqs's constant-time memory-cleansing function (OQS_MEM_cleanse) to securely erase private-key material. KEMs implemented OQS_KEM_keypair, OQS_KEM_encaps, and OQS_KEM_decaps, with deterministic randomness for Known Answer Tests (KATs).
Modularity and Extensibility: The build system was extended with per-algorithm compilation flags (e.g., -DOQS_ENABLE_SIG_haetae_3=ON), enabling partial builds for resource-constrained environments and simplifying future algorithm updates.
Upstream Compatibility: Code structure and API contracts were preserved to align with upstream OQS development, ensuring minimal maintenance effort and future merge compatibility.
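To illustrate the uniform KEM calling convention the wrappers conform to (keypair/encapsulate/decapsulate), the following Python sketch mimics the API shape only; the hash-and-XOR "cipher" is a toy stand-in with no security, not the real SMAUG-T or NTRU+ code.

```python
import hashlib
import os

# Toy stand-in mirroring the liboqs OQS_KEM_{keypair,encaps,decaps} shape.
# The "cryptography" here (SHA-256 + XOR) is a placeholder, NOT a real KEM.

def kem_keypair():
    """Return (public_key, secret_key); pk is derivable from sk in this toy."""
    sk = os.urandom(32)
    pk = hashlib.sha256(sk).digest()
    return pk, sk

def kem_encaps(pk):
    """Return (ciphertext, shared_secret) under the recipient's public key."""
    ss = os.urandom(32)
    mask = hashlib.sha256(pk).digest()
    ct = bytes(a ^ b for a, b in zip(ss, mask))
    return ct, ss

def kem_decaps(ct, sk):
    """Recover the shared secret from the ciphertext using the secret key."""
    pk = hashlib.sha256(sk).digest()
    mask = hashlib.sha256(pk).digest()
    return bytes(a ^ b for a, b in zip(ct, mask))

pk, sk = kem_keypair()
ct, ss_enc = kem_encaps(pk)
ss_dec = kem_decaps(ct, sk)
assert ss_enc == ss_dec  # both endpoints derive the same shared secret
```

Because every scheme exposes exactly this three-function surface, OQS-compatible applications can switch algorithms without code changes.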
Verification and Testing
KAT vectors from the official KpqC implementations were incorporated into the liboqs test suite. Automated tests validated correctness on x86_64 (GCC 13.2, Clang 17) and ARMv8-A (aarch64, GCC 13.2) platforms. For example, the following commands execute algorithm-specific test cases:
ctest -R haetae
ctest -R smaug
This cross-architecture validation ensured functional reliability across heterogeneous deployment environments.
3.2. Extending the OQS Provider for TLS Integration
While liboqs provided the cryptographic primitives, full TLS 1.3 integration required modifications to the oqs-provider. The stock provider lacks support for KpqC-specific identifiers, encodings, and dispatch bindings. To enable production-grade testing, the following extensions were implemented:
Algorithm Registration: Added unique identifiers for HAETAE, AIMer, SMAUG-T, and NTRU+ in the provider source, linking them to their respective liboqs API calls through dedicated OSSL_DISPATCH tables.
ASN.1 and OID Support: Defined and registered new object identifiers (OIDs) within OpenSSL’s ASN.1 module to enable DER/PEM encoding, decoding, and hybrid certificate construction.
Build and Configuration: Updated build scripts and providers.conf to allow dynamic loading of the extended provider via the -provider oqsprovider option.
End-to-End Validation: Verified TLS 1.3 handshakes with KpqC-only and hybrid certificates on both x86_64 and ARMv8 platforms, confirming compatibility with OpenSSL’s protocol flow and certificate validation.
These enhancements transformed the OQS–OpenSSL stack into a complete experimental platform for assessing KpqC in realistic PKI workflows.
3.3. Measurement Procedure
Two categories of overhead were evaluated:
3.3.1. Static Overhead: Certificate Size
DER-encoded X.509 certificates were generated for the following:
ECC-only (P-256, P-384);
PQC-only (HAETAE, AIMer, SMAUG-T, NTRU+);
Hybrid ECC+PQC (composite public key and signature).
Certificate sizes were measured on the DER-encoded output. Hybrid certificates followed the composite format described in [24], embedding multiple SubjectPublicKeyInfo structures in a single ASN.1 sequence.
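As a rough illustration of the composite layout (two SubjectPublicKeyInfo-like blobs wrapped in one outer ASN.1 SEQUENCE), a minimal DER encoder can be sketched as follows; this is a simplification for exposition, not a spec-conformant encoder, and the key sizes are hypothetical.

```python
def der_len(n):
    """DER length octets: short form below 128, long form otherwise."""
    if n < 0x80:
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body

def der_seq(*elements):
    """Wrap already-encoded elements in a DER SEQUENCE (tag 0x30)."""
    body = b"".join(elements)
    return b"\x30" + der_len(len(body)) + body

def der_octet_string(data):
    """DER OCTET STRING (tag 0x04), a stand-in for an SPKI blob."""
    return b"\x04" + der_len(len(data)) + data

# Hypothetical sizes: a 65-byte ECC point and a 1000-byte PQC public key.
ecc_spki = der_octet_string(b"\x04" * 65)
pqc_spki = der_octet_string(b"\xAA" * 1000)
composite = der_seq(ecc_spki, pqc_spki)

# The composite grows only by a few bytes of ASN.1 framing.
overhead = len(composite) - (65 + 1000)  # 10 bytes of headers
```

This makes concrete why the composite format adds negligible size on top of the embedded key material itself.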
3.3.2. Dynamic Overhead: TLS 1.3 Handshake Latency
Handshake latency was measured using OpenSSL's s_server and s_client utilities:

# Server
openssl s_server -accept 8443 -cert server.crt -key server.key \
    -tls1_3 -www -provider oqsprovider

# Client (200 sequential TLS 1.3 handshakes)
for i in $(seq 1 200); do
    openssl s_client -connect 127.0.0.1:8443 -tls1_3 \
        -groups smaug_t1 -provider oqsprovider < /dev/null
done

Redirecting stdin from /dev/null causes each s_client invocation to terminate immediately after the handshake completes, so the loop measures handshake latency rather than session duration.
Two scenarios were tested:
Computation-Bound: Both endpoints on a MacBook Pro (M1 Pro, 16 GB RAM, macOS 14.4) using localhost.
Network/Resource-Constrained: Server deployed on a Raspberry Pi 5 (Cortex-A76, 8 GB RAM, Ubuntu 24.04), with the client running on a MacBook Pro over IEEE 802.11ac LAN. This setup models an IoT/edge deployment, where lightweight edge devices terminate TLS sessions with resource-rich cloud clients, thereby reflecting asymmetric computational and networking constraints in real-world PQC adoption scenarios.
For statistical reliability, each scenario was repeated in five independent runs to account for run-to-run variability.
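A straightforward way to summarize such run-to-run variability (median, interquartile range, and a normal-approximation 95% confidence interval of the mean) is sketched below; the sample values are invented for illustration, not measured data.

```python
import statistics

def summarize(samples):
    """Median, interquartile range, and normal-approximation 95% CI of the mean."""
    med = statistics.median(samples)
    q = statistics.quantiles(samples, n=4, method="inclusive")
    iqr = q[2] - q[0]
    mean = statistics.mean(samples)
    sem = statistics.stdev(samples) / len(samples) ** 0.5
    ci95 = (mean - 1.96 * sem, mean + 1.96 * sem)
    return med, iqr, ci95

# Hypothetical per-run mean handshake times (ms) over five independent runs
runs_ms = [12.1, 12.4, 11.9, 12.6, 12.2]
med, iqr, ci95 = summarize(runs_ms)
```

Reporting the median with the IQR and CI, rather than a single mean, guards against outlier runs skewing the comparison.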
4. Results and Analysis
Building on the implementation described in Section 3, this section presents a comprehensive evaluation of the integrated PQC and hybrid TLS framework. We analyze both static overhead, in terms of DER-encoded certificate size, and dynamic overhead, measured as TLS 1.3 handshake performance in controlled computational and networked scenarios. Overall, our results show that KpqC certificates are between 11.5× and 48× larger than their ECC counterparts, while handshake latency increases by approximately 8–9× in computation-bound localhost experiments. In contrast, in LAN experiments the relative overhead remains below 40%. To validate these findings, we report dispersion measures including medians, interquartile ranges (IQR), and 95% confidence intervals in Appendix A, Table A1.
For all TLS 1.3 experiments, the server authentication certificate was fixed to ECDSA with P-256 (secp256r1) across all configurations. This choice was made to ensure fairness and to isolate the performance impact of the key exchange (KEM), while keeping the signature component constant. As a result, the reported handshake latencies primarily reflect the contribution of the key exchange, rather than variations in signature size or verification cost.
4.1. Experimental Setup
Performance was evaluated in two environments. In the computational scenario, both the client and server ran on a single MacBook Pro (M1 Pro, 16 GB RAM) to measure pure cryptographic overhead without network latency. In the networked scenario, the client (MacBook Pro) communicated over IEEE 802.11ac Wi-Fi with a Raspberry Pi 5 server (Cortex-A76, 8 GB RAM) configured with a static IP. Both systems ran OpenSSL 3.1.2 extended with a modified OQS provider supporting additional Korean post-quantum cryptographic (KpqC) algorithms.
The integration process involved adding new KEM and signature schemes via generate.py templates, defining parameter sets and API bindings, and updating provider configuration files to register them in OpenSSL’s fetch mechanism. This ensured full compatibility for classical, PQC-only, and hybrid key exchange without modifying TLS 1.3 handshake logic.
4.2. Static Overhead: Certificate Size
X.509 certificates were generated for all tested signature schemes, and their DER-encoded sizes were measured; Table 3 summarizes the results. Classical ECC certificates were the most compact, with secp256r1 at 385 B. Among PQC schemes at Level 1, Falcon512 produced the smallest certificate (1788 B), representing a 4.6× increase over ECC. KpqC signatures (e.g., aimer128s, aimer128f) were substantially larger, up to 16× the ECC baseline.
Hybrid certificates, combining classical and PQC signatures, exhibited only a marginal size increase over their PQC-only counterparts (typically < 2%), indicating that PQC signature data dominates overall size growth. Figure 3 illustrates the comparison for representative Level 1 and Level 2 algorithms.
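Using the figures quoted above, the relative size factors follow directly; the hybrid size in this sketch is a hypothetical example chosen to be consistent with the <2% framing overhead, not a value from Table 3.

```python
ecc_cert = 385       # secp256r1 certificate, bytes (DER), quoted above
falcon_cert = 1788   # Falcon512 certificate, bytes (DER), quoted above

# PQC-only vs the ECC baseline
falcon_factor = falcon_cert / ecc_cert  # ~4.6x

# Hypothetical hybrid: PQC certificate plus classical key/signature framing
hybrid_cert = falcon_cert + 30                             # illustrative only
hybrid_growth = (hybrid_cert - falcon_cert) / falcon_cert  # below 2%
```

The arithmetic shows why hybrid deployment is nearly free in size terms once the PQC material is already carried.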
4.3. Dynamic Overhead: TLS Handshake Performance
TLS 1.3 handshake times were measured over 200 consecutive connections. In the localhost setup (Table 4), both NIST PQC (ML-KEM) and KpqC hybrids exhibited similar computational cost, with handshake times increasing by approximately 8–9× relative to ECC. In the networked scenario (Table 5), absolute handshake times increased across all algorithms due to network latency, but the relative overhead of PQC and KpqC hybrids dropped significantly (to 30–40%), indicating that network delay masks much of the cryptographic cost. KpqC hybrids showed performance comparable to NIST PQC hybrids at the same security level.
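This masking effect can be made concrete with a simple additive model, total handshake time ≈ network cost + cryptographic cost: the same absolute crypto overhead shrinks in relative terms as network delay grows. The numbers below are illustrative inputs to the model, not measured values.

```python
def relative_overhead(crypto_ms, baseline_crypto_ms, network_ms):
    """Relative handshake overhead of a PQC suite vs. an ECC baseline,
    under a simple additive model: total = network + crypto."""
    pqc_total = network_ms + crypto_ms
    ecc_total = network_ms + baseline_crypto_ms
    return (pqc_total - ecc_total) / ecc_total

# Illustrative: PQC crypto at 9x an ECC baseline of 1 ms
localhost = relative_overhead(9.0, 1.0, 0.1)  # network nearly free -> ~7.3x
lan = relative_overhead(9.0, 1.0, 20.0)       # ~20 ms of network -> ~38%
```

The same 8 ms absolute gap thus appears as a multi-fold slowdown on localhost but falls under 40% once realistic network delay dominates, consistent with the measurements above.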
5. Conclusions
This study presented a comprehensive evaluation of the static and dynamic overheads of integrating PQC into the X.509/TLS framework, comparing KpqC algorithms with their NIST-standardized counterparts. Certificate sizes increased by approximately 4.6× to 16× over the ECC baseline, from Falcon to AIMer. In pure computational scenarios, both ML-KEM and KpqC hybrids exhibited handshake latencies roughly 8–9× higher than ECC baselines. Under realistic network conditions, KpqC and NIST PQC hybrids showed similar performance.
These results underscore a trade-off between bandwidth/storage overhead and computational cost, providing guidance for algorithm selection based on deployment constraints. Compact NIST PQC schemes may suit bandwidth- or storage-constrained environments, whereas high-capacity systems can adopt KpqC algorithms when their cryptographic properties are desirable.
Future work will explore alternative hybrid frameworks (e.g., chameleon-style designs) and assess additional metrics such as certificate path validation time. Benchmarking in wide-area networks with varying latency, jitter, and packet loss will further clarify PQC deployment challenges in real-world conditions.