1. Introduction
Cryptographic systems underpin the security of digital communication, storage, and authentication in nearly all contemporary computing environments [1]. As the complexity and diversity of cyber threats have evolved, so too has the need for encryption methods that can adapt to a range of operational contexts [2,3]. While key derivation functions such as PBKDF2 remain widely adopted for password-based encryption, their reliance on static parameters often renders them suboptimal in dynamic environments where computational capacity and data sensitivity vary [4].
The inflexibility of traditional key derivation mechanisms creates two critical tensions in practice [5]. First, static iteration counts introduce a trade-off between performance and security: selecting too low a value compromises resistance to brute-force attacks, while too high a value impairs usability and efficiency—particularly on resource-constrained devices. Second, these static schemes ignore contextual cues such as the sensitivity of the encrypted data or the strength of the user-provided password, both of which are essential in modern risk-aware security models. Addressing these challenges calls for a more adaptive approach to cryptographic parameterization.
Motivated by this gap, our work is grounded in the hypothesis that a dynamically adjustable key derivation process can significantly enhance both the usability and the resilience of encryption systems. By incorporating measurable factors—such as device capability, data risk level, and password entropy—into the configuration of cryptographic parameters, we propose an adaptive strategy that better aligns encryption hardness with real-world usage contexts. This direction is particularly timely as security requirements become more nuanced in cloud computing, IoT ecosystems, and mobile platforms, all of which exhibit considerable heterogeneity in performance profiles and threat exposure.
The contributions of this work are as follows:
We propose an adaptive encryption scheme built on PBKDF2, which dynamically adjusts its iteration count based on contextual factors including computational resource index (CRI), data risk level (DRL), and password strength.
We develop a mathematical formulation and software implementation that integrate this adaptive logic without altering the foundational structure of PBKDF2.
We design a suite of empirical experiments that validate the performance, security, and scalability of the proposed scheme.
We demonstrate through a quantitative analysis that the adaptive method consistently matches or outperforms the static baseline.
The rest of this paper is structured as follows:
Section 2 reviews related work in adaptive cryptography and key derivation functions.
Section 3 details the design and formulation of our adaptive PBKDF2 scheme, followed by
Section 4, which presents the experimental results across targeted scenarios.
Section 5 provides a detailed discussion interpreting the findings and outlines directions for future work. Finally,
Section 6 concludes the paper.
2. Related Work
Recent advancements in data security have necessitated encryption techniques that not only provide strong confidentiality but also adapt dynamically to varying operational contexts and threat landscapes. Conventional approaches that employ static encryption parameters are increasingly ill-suited to address the dynamic challenges posed by evolving cyber threats and heterogeneous deployment environments. In this regard, several studies have explored hybrid encryption schemes, adaptive cryptographic frameworks, and improved key derivation techniques.
2.1. Adaptive Cryptographic Schemes (Dynamic, Context-Aware Encryption)
Traditional encryption systems apply static algorithms and parameters uniformly, which may be suboptimal in heterogeneous or changing environments. To address this, researchers have proposed adaptive encryption frameworks that tailor cryptographic operations to real-time contexts.
This study [6] introduces an innovative hybrid data encryption technique that integrates Rivest–Shamir–Adleman (RSA) encryption with dynamic key generation. The work addresses vulnerabilities inherent in traditional, fixed-parameter encryption methods by frequently updating encryption keys, thereby mitigating risks associated with key compromise. Furthermore, the proposed scheme incorporates mechanisms to ensure the integrity and authenticity of app-to-app data transfers, thus safeguarding sensitive information throughout its life cycle.
Addressing the limitations of static encryption models, ref. [7] presents a novel adaptive encryption framework powered by machine learning. This framework dynamically selects and adjusts encryption methods and key strengths based on both data sensitivity and the prevailing communication context. Specifically, it leverages the K-Nearest Neighbors (KNN) algorithm to classify data into sensitive and normal categories; sensitive data are secured through a hybrid approach that combines Elliptic Curve Cryptography (ECC) with Advanced Encryption Standard (AES), whereas standard AES encryption is applied to less sensitive data to maintain processing efficiency.
Recognizing that a uniform security level for all data is impractical due to resource constraints, ref. [8] introduced a context-aware encryption protocol suite. This approach selects optimal encryption algorithms based on device specifications and the confidentiality requirements of the data. By dynamically adapting encryption to the device context and data sensitivity, this framework enforces cryptographic policies effectively even in resource-constrained IoT settings.
Complementing these methods, ref. [9] proposes a reinforcement learning-based adaptive encryption framework that scales encryption levels in real time according to network conditions and threat classifications. The approach utilizes a deep learning-based anomaly detection system to classify network states and employs both dynamic Q-learning and double Q-learning to optimize energy efficiency and security robustness, respectively. By formulating the encryption level selection as a Markov Decision Process (MDP), the system adeptly balances encryption complexity with computational overhead, achieving significant energy savings and enhanced threat mitigation in wireless sensor networks (WSNs).
Traditional key rotation strategies, which often fail to adapt to dynamic network conditions, are re-examined in [10]. Here, a reinforcement learning (RL) model is introduced to drive adaptive key rotation in Zigbee networks. By comparing this approach against periodic and heuristic-based key rotation methods in a simulated environment, the study demonstrates how AI can enable cryptographic agility via frequent key updates, thereby bolstering network resilience.
In the context of emerging cloud technologies and multi-device ecosystems, the need for fine-grained, autonomously enforced security policies has become critical. Ref. [11] extends previous work on LCA-ABE by employing smart-learning techniques to dynamically create context-aware policies. Their system not only supports data consent and automated access control but also ensures secure end-to-end communications by continuously updating encryption policies based on real-time contexts such as location and device status.
Furthermore, ref. [12] presents a model for automatically adapting security controls to various risk scenarios. Based on a three-layer architecture and a measurement–decision–adaptation flow, the proposed model integrates scalable policies and rules that can adjust the encryption strength in near real time. This conceptual model of cryptographic agility is guided by system context and threat intelligence.
Addressing the challenges faced by resource-constrained industrial control systems, ref. [13] proposed an attack-aware encryption scheme (TD-ESAT). This method dynamically alters encryption strength based on detected threat levels, thereby optimizing the balance between security and resource usage.
The adaptive security paradigm is further exemplified by [14], which describes a risk-based framework for IoT systems in eHealth. This study employs game theory and context-awareness techniques to build a risk-assessment engine that dynamically drives encryption and key management. The proposed framework enables IoT devices to adjust their cryptographic operations—for instance, by encrypting more aggressively—when heightened attack risks are predicted.
A seminal system described by Popa et al. [15] implements adaptive encryption for databases. By employing a layered (“onion”) encryption approach, the system dynamically adjusts its encryption schemes based on the types of SQL queries executed, ensuring that encryption levels are precisely tailored to operational needs. Lastly, ref. [16] introduces an adaptive key generation technique for IoT devices by utilizing an evolutionary algorithm (AHPO) to dynamically optimize ECC parameters according to device performance constraints.
2.2. Key Derivation Functions (PBKDF2 and Modern Improvements)
The security of cryptographic systems hinges not only on the encryption algorithms themselves but also on the robustness of the keys used. PBKDF2, as presented by [17], is one of the most widely adopted key derivation functions (KDFs) designed to transform low-entropy secrets into cryptographically strong keys through iterative hashing. Its strength lies in the ability to tune computational workload via iteration count, although its purely compute-bound design has made it vulnerable to parallelized attacks on GPUs and ASICs.
To address these concerns, Percival’s work [18] introduced scrypt—the pioneering memory-hard KDF—which increases the computational cost of brute-force attacks by mandating substantial memory usage along with CPU cycles. Similarly, Balloon Hashing, proposed by Henry Corrigan-Gibbs, Dan Boneh, and Stuart Schechter [19], builds upon this concept by providing a provably memory-hard construction in the random oracle model. This function uses an expanding memory “balloon” to further complicate attack efforts and includes security proofs that underpin both scrypt and Argon2i.
A comprehensive survey by George Hatzivasilis [20] systematically reviews password-based KDFs, covering PBKDF2, bcrypt, scrypt, Lyra2, Catena, and Argon2. This study includes detailed performance metrics and analyzes security in the context of modern GPU/FPGA attacks, thereby providing a crucial benchmark for evaluating KDF resilience. In a practical application, Andrea Visconti et al. [21] focused on PBKDF2 as employed in real-world disk encryption (LUKS), emphasizing that while high iteration counts can protect against GPU cracking, there is an urgent need to migrate towards more robust, memory-hard alternatives.
Approaches continue to emerge in this domain. Wenjie Bai, Jeremiah Blocki, and Mohammad H. Ameri [22] proposed a novel cost-asymmetric password hashing technique. This method increases the computational effort for incorrect password guesses, while maintaining relatively modest costs for legitimate users, thereby providing a pepper-like defensive enhancement even when using memory-hard KDFs. Siwoo Eum et al. [23] further investigated Argon2’s performance on GPUs, demonstrating that optimizations for parallel processing can be achieved without compromising security, which underscores the importance of tailoring KDF parameters to the underlying hardware architecture.
Moreover, the classic work of Niels Provos and David Mazieres [24] on bcrypt laid the foundation for adaptive cost mechanisms in KDFs by incorporating a tunable cost factor within the Blowfish cipher. Subsequent advances, such as those presented by Marcos A. Simplicio Jr. et al. [25] with the Lyra2 scheme, emphasize intrinsic parallelism resistance through the use of sponge functions, while Joel Alwen et al. [26] provided insights into improved attacks on Argon2i and proposed modifications that significantly raise its computational barrier. Collectively, these developments represent a robust evolution from PBKDF2 towards KDFs that leverage both computational and memory hardness, ensuring that even as attacker hardware improves, the underlying key derivation process remains secure.
3. The Proposed Scheme
The proposed encryption scheme combines the use of a unique per-instance salt with an adaptive PBKDF2-based key derivation process. In particular, the PBKDF2 iteration count is not fixed but determined dynamically based on contextual factors such as device computational capabilities and data sensitivity. By tailoring these cryptographic parameters to the environment, the scheme maintains strong security while optimizing efficiency for each use case. In the following, we detail the components of the scheme, including the salt generation mechanism, context-aware iteration count selection, key derivation, and the encryption/decryption procedures.
3.1. Unique Salt Generation
A fundamental first step in the scheme is to produce a unique cryptographic salt for each encryption instance. To prevent vulnerabilities associated with static or reused salts, the scheme uses cryptographically secure randomization combined with a user-specific identifier. Specifically, a 16-byte random value r is generated via a cryptographically secure random number generator (CSPRNG), and this value is concatenated with an identifier unique to the user or device. Formally, we define the salt as follows:

Salt = r ‖ UserID,

where ‖ denotes concatenation and UserID is a unique user or device identifier (e.g., a username or device serial number). This construction ensures that each salt is unique, significantly mitigating the risk of rainbow-table and brute-force attacks. The inclusion of a user-specific component guarantees that no two users will share the same salt, even by coincidence. Algorithm 1 provides the pseudocode for the salt generation procedure.
Algorithm 1: Unique Salt Generation
1: Input: UserID (unique user or device identifier)
2: Output: Salt (unique cryptographic salt)
3: r ← CSPRNG(16) (generate 16 bytes of cryptographically secure random data)
4: Salt ← r ‖ UserID (append the user identifier to r)
5: return Salt
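For illustration, a minimal Python sketch of Algorithm 1 (the function name is hypothetical) can rely on the standard library's CSPRNG:

import os

def generate_salt(user_id: str) -> bytes:
    """Return a unique salt: 16 CSPRNG bytes concatenated with a user identifier."""
    r = os.urandom(16)                    # cryptographically secure random bytes
    return r + user_id.encode("utf-8")    # Salt = r || UserID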
3.2. Adaptive Iteration Count Determination
A key feature of the proposed scheme is the adaptive determination of the PBKDF2 iteration count. Instead of using a fixed number of iterations, the scheme calculates an appropriate iteration count based on two contextual metrics: the device’s Computational Resource Index (CRI) and the Data Risk Level (DRL). By tuning the iteration count to these factors, the key derivation process can bolster security when resources allow or when data is highly sensitive, and conversely use a lower count to preserve performance on constrained devices or for less sensitive data.
3.2.1. Computational Resource Index (CRI)
The Computational Resource Index (CRI) is a quantitatively defined metric that captures the relative computing performance of the current device. It is obtained by benchmarking the device’s hardware and load at runtime to produce a device score S_dev, which is then normalized against a reference score S_ref representing a high-performance baseline. Formally, we define

CRI = S_dev / S_ref,

where S_dev is the measured performance score of the device and S_ref is an idealized maximum score (chosen such that CRI ≈ 1 for a very powerful baseline system under no load). In practice, S_dev is obtained through a lightweight profiling procedure that measures key computational metrics of the device. For example, our implementation performs a brief self-test using the psutil library in Python 3.13.3 to gather real-time performance data, and similar data could be collected via low-level tools like Linux perf or hardware performance counters. The metrics include CPU throughput, memory bandwidth, and current system load. We measure CPU throughput by timing a set of processor-intensive operations (e.g., a tight loop of cryptographic hash computations) to estimate how many operations per second the CPU can perform. Memory bandwidth is gauged by measuring the rate of large memory block read/write operations. Meanwhile, the instantaneous load or utilization of the CPU (and I/O) is recorded to understand how much headroom is available for additional work.
Each raw measurement is then normalized relative to a reference high-end device. Let T_cpu denote the CPU throughput divided by a reference CPU throughput (for instance, the operations/sec achieved by a top-tier processor in optimal conditions), and B_mem denote the memory bandwidth divided by a reference memory bandwidth. Let (1 − L) represent the fraction of CPU resources currently free (where L is the current CPU load as a fraction between 0 and 1). We then compute the device’s performance score as a weighted aggregate of these normalized metrics:

S_dev = w1 · T_cpu + w2 · B_mem + w3 · (1 − L).

Here w1, w2, w3 are tunable weights reflecting the relative importance of CPU speed, memory performance, and current load in the overall index (with w1 + w2 + w3 = 1 to scale S_dev into a normalized range). For instance, one practical choice is to weight CPU and memory metrics equally (say 0.4 each) and the load factor at 0.2, though these can be calibrated empirically. The reference score S_ref is defined consistently with this weighting—for example, S_ref = 1 if we take as reference a hypothetical device that scores 1.0 on each normalized metric (e.g., a device with throughput and bandwidth at the reference maximum and no background load). In a real deployment, S_ref can be derived by profiling a high-performance reference machine or using documented peak specifications of modern hardware (CPU clock speed, memory bus throughput, etc.). The CRI for any given device is then a unitless value in [0, 1] indicating the fraction of the reference performance that this device can deliver under current conditions.
Algorithm 2 outlines the procedure to obtain S_dev in an implementation. The process, which only takes a fraction of a second on modern machines, involves running micro-benchmarks and capturing system statistics. By keeping this benchmark routine lightweight (e.g., using small data sizes and short timing intervals), we ensure that the overhead of computing the CRI is minimal, preserving the scheme’s practicality. Modern operating systems and runtime environments readily provide the necessary hooks for such profiling, making the CRI feasible to compute on-the-fly even in constrained settings.
Algorithm 2: Device Performance Profiling for CRI
1: function ComputeDeviceScore() ▹ returns S_dev
2: T_cpu ← measureCPUThroughput() ▹ e.g., operations per second
3: B_mem ← measureMemoryBandwidth() ▹ e.g., MB per second
4: L ← getCurrentCPULoad()
5: T_cpu ← T_cpu / T_ref ▹ normalize against reference CPU throughput
6: B_mem ← B_mem / B_ref ▹ normalize against reference memory bandwidth
7: A ← 1 − L ▹ fraction of CPU currently free
8: S_dev ← w1 · T_cpu + w2 · B_mem + w3 · A
9: return S_dev
By plugging S_dev from Algorithm 2 into Equation (3), the scheme obtains the CRI value in real time. This CRI is then used (along with the data sensitivity metric, DRL, as described in Section 3.2.2) to compute the adaptive iteration count. The design of CRI is unique in that it leverages actual hardware performance characteristics at runtime—as opposed to static assumptions—to adjust cryptographic workload. This approach is particularly well-suited to modern heterogeneous environments: for example, a smartphone, a laptop, and a server can each transparently scale their effort according to capability. The dynamic benchmarking ensures that even as hardware ages or background processes fluctuate, our PBKDF2-based key derivation remains calibrated to the device’s true capacity. The result is a more context-aware security mechanism that stays usable on low-end systems while automatically hardening itself on high-end systems, a balance that traditional one-size-fits-all key derivation parameters cannot achieve.
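To make the profiling step concrete, the following Python sketch mirrors Algorithm 2 using psutil and a short SHA-256 loop as the CPU micro-benchmark. The reference throughputs and the 0.4/0.4/0.2 weights are illustrative assumptions that would have to be calibrated against a real high-end machine:

import hashlib
import time
import psutil

# Placeholder reference values and weights (w1 = w2 = 0.4, w3 = 0.2); the reference
# throughputs are assumptions to be calibrated against a high-performance device.
REF_HASH_OPS_PER_SEC = 2_000_000
REF_MEM_BYTES_PER_SEC = 20 * 1024**3
W_CPU, W_MEM, W_LOAD = 0.4, 0.4, 0.2

def measure_cpu_throughput(duration: float = 0.05) -> float:
    """SHA-256 operations per second over a short timing window."""
    ops, deadline = 0, time.perf_counter() + duration
    while time.perf_counter() < deadline:
        hashlib.sha256(b"cri-benchmark").digest()
        ops += 1
    return ops / duration

def measure_memory_bandwidth(size: int = 8 * 1024**2) -> float:
    """Bytes per second for one large in-memory copy."""
    block = bytearray(size)
    start = time.perf_counter()
    _ = bytes(block)                                  # one read/write pass over the block
    return size / (time.perf_counter() - start)

def compute_cri() -> float:
    """Weighted, normalized device score S_dev divided by S_ref (taken as 1.0 here)."""
    t_cpu = min(measure_cpu_throughput() / REF_HASH_OPS_PER_SEC, 1.0)
    b_mem = min(measure_memory_bandwidth() / REF_MEM_BYTES_PER_SEC, 1.0)
    load = psutil.cpu_percent(interval=0.05) / 100.0  # current CPU utilization in [0, 1]
    s_dev = W_CPU * t_cpu + W_MEM * b_mem + W_LOAD * (1.0 - load)
    return s_dev                                      # S_ref = 1.0 for the idealized reference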
3.2.2. Data Risk Level (DRL)
Complementing CRI, the Data Risk Level (DRL) offers a continuous measure of data sensitivity and associated security requirements. Unlike discrete risk classification, our method combines several quantifiable factors into a unified risk assessment. Specifically, DRL considers data classification sensitivity (C), storage environment security (S), access frequency (F), and user privilege sensitivity (U). Each factor is normalized on a [0, 1] scale and then combined into a weighted composite risk score R:

R = w_C · C + w_S · S + w_F · F + w_U · U,

where the weights satisfy w_C + w_S + w_F + w_U = 1. Higher individual values represent increased risk contributions.

We map the composite risk score R into a DRL value bounded between 1.0 and a maximum risk level DRL_max:

DRL = 1 + (DRL_max − 1) · R.

Thus, DRL dynamically reflects the granular variations in data sensitivity, directly affecting the strength of PBKDF2 key derivation.
Algorithm 3 provides explicit pseudocode detailing this computation.
Algorithm 3: DRL Calculation
1: Input: C, S, F, U (normalized risk indicators), weights w_C, w_S, w_F, w_U
2: Output: DRL (continuous risk metric)
3: w_C + w_S + w_F + w_U = 1 {Sum to 1}
4: R ← w_C · C + w_S · S + w_F · F + w_U · U {Composite risk}
5: DRL_max ← maximum risk level {Maximum risk level}
6: DRL ← 1 + (DRL_max − 1) · R {Scale DRL}
7: return DRL
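A minimal sketch of this calculation is shown below; the weight vector and the DRL_max value of 2.0 are illustrative assumptions (2.0 is consistent with the DRL values used in the experiments of Section 4):

def compute_drl(c: float, s: float, f: float, u: float,
                weights: tuple = (0.4, 0.3, 0.2, 0.1), drl_max: float = 2.0) -> float:
    """Combine normalized risk indicators into a composite score R, then scale to [1.0, drl_max]."""
    assert abs(sum(weights) - 1.0) < 1e-9            # weights must sum to 1
    r = sum(w * x for w, x in zip(weights, (c, s, f, u)))
    return 1.0 + (drl_max - 1.0) * r                 # R = 0 maps to 1.0, R = 1 maps to drl_max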
3.3. Key Derivation Using Adaptive PBKDF2
Given the unique salt and the computed adaptive iteration count, the next step is to derive the cryptographic key from the user’s password. We employ the PBKDF2 algorithm for key derivation, incorporating the adaptive iteration count into its parameters. PBKDF2 takes the password P, salt, iteration count, and desired output length and produces a derived key K. Formally, we have

K = PBKDF2(P, Salt, N_iter, L_key),

where P is the user-provided password, N_iter is the adaptive iteration count, and L_key denotes the intended length of the derived key (e.g., 256 bits for a 256-bit key). This process expands the password into a high-entropy key using N_iter iterations of a pseudorandom function (in our implementation, PBKDF2 is configured with HMAC-SHA256 to generate a 256-bit key). The outcome is a symmetric key K that is used for encryption. Algorithm 4 presents the pseudocode for the key derivation.
Algorithm 4: Key Derivation via PBKDF2
1: Input: Password P, Salt, iteration count N_iter, key length L_key
2: Output: Derived key K
3: K ← PBKDF2(P, Salt, N_iter, L_key) (e.g., using HMAC-SHA256 as the PRF)
4: return K
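In Python, the derivation of Algorithm 4 can be expressed directly with the standard library; this is a sketch in which the 32-byte output corresponds to the 256-bit key used in our implementation:

import hashlib

def derive_key(password: str, salt: bytes, iterations: int, key_len: int = 32) -> bytes:
    """PBKDF2 with HMAC-SHA256, producing a 256-bit key by default."""
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations, dklen=key_len)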
3.4. Encryption Procedure
After deriving the key K, the plaintext data M can be encrypted. We use a symmetric encryption algorithm for this purpose (in our prototype implementation, we utilize the Fernet cipher, which is built on AES-128 and HMAC-SHA256, but any standard symmetric cipher could be used). The plaintext M is fed into the encryption function with the key K to produce the ciphertext C. The unique salt and the parameters needed to regenerate the key (such as the iteration count or contextual metrics) are stored alongside the ciphertext so that decryption is possible later. Formally, the encryption operation can be described as follows:

C = E_K(M),

where M is the plaintext message, K is the encryption key, and E_K denotes the symmetric encryption function under key K. For completeness, Algorithm 5 provides the pseudocode for the encryption process.
Algorithm 5: Encryption Procedure
1: Input: Key K, Plaintext M
2: Output: Ciphertext C (encrypted data)
3: C ← E_K(M) (encrypt plaintext M with key K using a symmetric cipher)
4: return C
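Because Fernet expects a 32-byte, URL-safe base64-encoded key, a PBKDF2-derived key must be encoded before use. The following sketch shows the encryption step; storing the salt and iteration count alongside the ciphertext, as described above, is left to the caller:

import base64
from cryptography.fernet import Fernet

def encrypt(derived_key: bytes, plaintext: bytes) -> bytes:
    """Encrypt plaintext with Fernet (AES-128-CBC + HMAC-SHA256) under a 32-byte derived key."""
    fernet_key = base64.urlsafe_b64encode(derived_key)   # Fernet requires URL-safe base64 keys
    return Fernet(fernet_key).encrypt(plaintext)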
3.5. Optimizing Key Derivation for Resource-Constrained Devices
While the adaptive PBKDF2 scheme significantly strengthens security, resource-constrained devices such as IoT sensors, smart cards, and older mobile devices face challenges in handling the computational overhead. These devices typically have limited processing power, memory, and energy availability, potentially making aggressive key derivation impractical. To address this, we introduce a dynamic adaptation layer employing several strategies to minimize latency and energy usage, maintaining usability without significantly sacrificing security. See Algorithm 6.
Profile-Based Parameter Tuning (Lookup Table Approach): For known device profiles, optimized parameters can be predefined via offline benchmarking, stored in a lookup table, and rapidly accessed at runtime. This eliminates frequent, costly benchmarks. Devices quickly retrieve their performance profiles through identification flags, directly assigning appropriate CRI and DRL values.
Algorithm 6: Profile-Based Parameter Tuning
1: Input: DeviceID
2: Output: CRI, DRL
3: (CRI, DRL) ← LookupTable(DeviceID)
4: if (CRI, DRL) found then
5: return (CRI, DRL)
6: else
7: (CRI, DRL) ← RunBenchmark()
8: return (CRI, DRL)
9: end if
For example, if a sensor device rated for 50 HMAC operations/ms in optimal conditions uses a conservative profile of 40 ops/ms, it safely accommodates potential multitasking without compromising operational reliability.
Incremental Multi-Pass Benchmarking: In uncertain or highly variable device environments, CRI can be derived through incremental, lightweight benchmark passes. Each pass is brief, preventing excessive system load, with further passes triggered only if initial outcomes are inconclusive; see Algorithm 7.
Algorithm 7: Incremental Multi-Pass Benchmarking
1: Input: Threshold θ, MaxPasses
2: Output: CRI
3: totalScore ← 0, passes ← 0
4: while passes < MaxPasses do
5: passScore ← QuickBenchmark()
6: totalScore ← totalScore + passScore
7: passes ← passes + 1
8: if passScore ≥ θ then
9: break
10: end if
11: PauseBriefly() {Prevent overheating}
12: end while
13: CRI ← Normalize(totalScore/passes)
14: return CRI
For instance, a smartphone running multiple applications may initially return an inconclusive CRI score after a 50 ms benchmark. Additional brief passes refine accuracy without compromising responsiveness.
Time-Constrained Key Derivation (Early Stopping Heuristic): We introduce an upper bound, T, on acceptable key derivation times. If the calculated Adaptive Iteration Count (AIC) exceeds this bound, iterations are dynamically reduced; see Algorithm 8. Formally, we define

AIC_final = min(AIC, ⌊T / t_iter⌋),

where t_iter is the time per PBKDF2 iteration estimated via CRI.
Algorithm 8: Time-Constrained Key Derivation
1: Input: CRI, DRL, T, BaseIter
2: Output: FinalKey
3: AIC ← ⌊BaseIter × CRI × DRL⌋
4: t_iter ← EstimateIterationTime(CRI)
5: if AIC × t_iter > T then
6: AIC ← ⌊T / t_iter⌋
7: end if
8: Key ← PBKDF2(Pass, Salt, AIC)
9: return Key, AIC
Consider a constrained IoT device with an estimated iteration time of 1 ms. Given T = 500 ms, if the initial AIC calculation yields 1000 iterations (thus 1000 ms total), the device caps AIC at 500 iterations to comply with the timing constraint. Though this results in slightly reduced brute-force resilience, practical functionality is preserved.
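A compact sketch of Algorithm 8's early-stopping logic, with the iteration-count formula taken from the algorithm above (parameter values are illustrative):

import math

def time_constrained_aic(cri: float, drl: float, base_iter: int,
                         t_max_ms: float, t_per_iter_ms: float) -> int:
    """Adaptive Iteration Count, capped so key derivation stays within t_max_ms."""
    aic = math.floor(base_iter * cri * drl)
    if aic * t_per_iter_ms > t_max_ms:
        aic = math.floor(t_max_ms / t_per_iter_ms)   # early-stopping cap (AIC_final)
    return aic

# Example from the text: with t_per_iter of about 1 ms and T = 500 ms,
# an initial AIC of 1000 iterations is capped at 500.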
These optimization techniques, either individually or combined, ensure the adaptive PBKDF2 scheme remains effective across diverse platforms, achieving an optimal trade-off between cryptographic security and practical usability.
3.6. Decryption Procedure
The decryption process is essentially the inverse of encryption. To decrypt and recover the plaintext, the same password P used for encryption must be supplied, and the salt and iteration count from the encryption step are required to re-derive the key. In practice, the decryption routine will retrieve the stored salt and iteration count (or equivalent parameters) that were saved with the ciphertext, run the PBKDF2 derivation again with the user’s password to obtain the original key K, and then apply the symmetric decryption. In formal notation, we denote the decryption operation as follows:

M = D_K(C),

where D_K is the decryption function using key K, C is the ciphertext, and M is the recovered plaintext. Algorithm 9 outlines the decryption procedure (assuming the key K has been obtained via re-derivation as described).
Algorithm 9: Decryption Procedure
1: Input: Key K, Ciphertext C
2: Output: Recovered plaintext M
3: M ← D_K(C) (decrypt ciphertext C using key K)
4: return M
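A matching decryption sketch, re-deriving the key from the stored salt and iteration count before invoking Fernet (exception handling shown for clarity):

import base64
import hashlib
from cryptography.fernet import Fernet, InvalidToken

def decrypt(password: str, salt: bytes, iterations: int, ciphertext: bytes) -> bytes:
    """Re-derive K from the stored parameters, then decrypt the ciphertext."""
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations, dklen=32)
    try:
        return Fernet(base64.urlsafe_b64encode(key)).decrypt(ciphertext)
    except InvalidToken as exc:
        raise ValueError("wrong password or corrupted ciphertext") from exc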
By using identical parameters (salt and iteration count) during decryption, the scheme ensures that the derived key matches the one used in encryption, enabling correct and secure recovery of M. Overall, the proposed adaptive scheme aligns cryptographic hardness with the contextual requirements of each scenario. It provides strong protection (e.g., high iteration counts) when needed—such as on powerful hardware or for sensitive data—while avoiding undue performance penalties on low-resource devices or for less critical data. In this way, the scheme effectively balances security and efficiency, overcoming the limitations of traditional fixed-parameter encryption approaches.
Figure 1 illustrates the complete pipeline of the proposed scheme: from the password input through salt generation, adaptive key derivation, and encryption to the production of ciphertext. It also shows the decryption process, wherein the stored salt and iteration count are used with the same password to reconstruct the key and decrypt the ciphertext back into plaintext.
4. Experimental Validation
In this section, we present a comprehensive evaluation of our adaptive PBKDF2-based scheme and compare it against both static PBKDF2 and other established key derivation functions. All experiments were conducted on a 2021 14-inch MacBook Pro with an Apple M1 Max SoC (10 CPU cores, 32 GB unified memory), running macOS 12.3.1. The implementation was in Python, utilizing the cryptography library for cryptographic operations and psutil for profiling system metrics; we also used numpy and PGFPlots for data analysis and visualization. The test environment was kept consistent (minimal background processes and steady CPU frequency) to ensure reliable measurements. As a baseline for comparison, we use the conventional static PBKDF2 with a fixed iteration count of 100,000 (a common choice in practice). Our adaptive scheme computes the iteration count on-the-fly using the CRI and DRL as described. We additionally benchmarked industry-standard alternatives: OpenSSL’s default PBKDF2-HMAC-SHA256 (which uses 10,000 iterations by default), the PBKDF2 iteration counts recommended by NIST and OWASP (on the order of hundreds of thousands of iterations in modern guidelines), bcrypt (using a cost factor of 12, which is widely used in practice), scrypt (with a moderate memory cost setting of about 16 MiB), and Argon2id (using the default parameters recommended by the Argon2 authors, with a 32 MiB memory cost). These comparative points serve to contextualize our results relative to both industry practices and state-of-the-art research-based key derivation techniques.
4.1. Experiment 1: Encryption Performance
We first measured the encryption performance of our scheme in various scenarios and compared it to static PBKDF2. Here, “encryption time” includes the time to derive the key (PBKDF2) and the time to symmetrically encrypt a sample plaintext, giving an end-to-end measure of the scheme’s latency. We varied the CRI and DRL inputs to simulate different device capabilities and data sensitivity levels, and we recorded the total time to encrypt a fixed-size plaintext (we used 10 MB as a representative size, unless otherwise noted).
Algorithm 10 provides the pseudocode for this experiment. The results are summarized in Table 1, which lists the observed encryption times under selected (CRI, DRL) combinations for both the adaptive scheme and the static 100k PBKDF2.
In each case, the adaptive PBKDF2 adjusts its iteration count according to Equation (6), yielding a different workload: on high-CRI or high-DRL settings, the iteration count (and, thus, time) is increased, whereas on low-CRI, low-DRL settings, the iteration count is lower to preserve performance. For example, on a relatively constrained device scenario (low CRI) with low-risk data (low DRL), our model used significantly fewer iterations, resulting in a key derivation and encryption time of only 0.036 s, compared to 0.058 s with the static approach. In contrast, on a powerful device scenario (high CRI) with high-risk data (high DRL), the adaptive scheme allocated many more iterations (roughly 1.4× the baseline in this configuration), leading to a slower encryption (0.075 s versus the static 0.059 s).
These results confirm that the adaptive scheme seamlessly scales its effort: it outperforms the static baseline when resources are ample or risk is low (thus saving time), and it incurs a higher cost when extra security is justified by risk or capability.
Figure 2 provides a visualization of these trends, plotting encryption time for adaptive vs. static PBKDF2 under various (CRI, DRL) settings; the adaptive method’s curve rises or falls in accordance with the context, whereas the static method remains flat by design.
Algorithm 10: Encryption Performance Measurement
1: Input: Password P, Salt, CRI, DRL
2: Compute adaptive iterations: N_iter ← ⌊BaseIter × CRI × DRL⌋
3: Perform PBKDF2 key derivation with computed iterations.
4: Encrypt plaintext using the derived key.
5: Record the total encryption time.
4.2. Experiment 2: Resistance to Brute-Force Attacks
In this experiment, we evaluated how the adaptive iteration approach of our PBKDF2-based key derivation function translates into resistance against brute-force attacks. Brute-force security is fundamentally tied to the time cost per password guess: the longer it takes to derive each key, the more computational effort an attacker must expend. To quantify this effect, we measured the time required to perform a single PBKDF2 key derivation under both adaptive and static iteration schemes across a set of representative CRI and DRL values. Notably, no actual encryption step was performed; instead, we focused solely on the hash-derivation time, which directly correlates to attacker workload.
Our static baseline uses 100,000 iterations of PBKDF2-SHA256, a value chosen in accordance with common practice (e.g., OWASP guidance recommends iteration counts in the hundreds of thousands for high-security contexts) and historical deployments (e.g., LastPass circa 2011). On the test hardware—a 3.0 GHz quad-core CPU with minimal background load—this 100,000-iteration configuration yields an average derivation time of approximately 0.060 s per invocation.
In contrast, our adaptive scheme computes its iteration count as follows:

adaptiveIters = max(⌊BASE_ITERS × CRI × DRL⌋, MIN_ITERS),

where BASE_ITERS = 100,000 and MIN_ITERS = 80,000 to enforce a minimum security floor. By clamping the iteration count in this way, we ensure that the adaptive scheme never falls below 80,000 iterations, even for the lowest CRI/DRL combinations, thereby preventing trivially fast hashes for attackers.
Algorithm 11 summarizes the procedure used to compute iteration counts and measure derivation times for both adaptive and static PBKDF2. In each selected scenario, we
Compute the adaptive iteration count using the formula above, clamped by MIN_ITERS.
Measure the time to derive a 256-bit key via PBKDF2-SHA256 with adaptiveIters.
Measure the time to derive the same key using a fixed 100,000 iterations (static baseline).
Record both results for direct comparison.
Algorithm 11: Brute-Force Resistance Evaluation
Require: A set of (CRI, DRL) pairs {CRI in [0, 1] and DRL ≥ 1 denote the resource/risk indices}
1: constants: BASE_ITERS = 100,000, MIN_ITERS = 80,000
2: for all chosen combinations of (CRI, DRL) do
3: scale ← CRI × DRL
4: computedIters ← ⌊BASE_ITERS × scale⌋
5: adaptiveIters ← max(computedIters, MIN_ITERS)
6: T_adaptive ← time to derive a 256-bit key via PBKDF2-SHA256 with adaptiveIters
7: T_static ← time to derive the same key with BASE_ITERS (static baseline)
8: Output: (CRI, DRL, adaptiveIters, T_adaptive, T_static)
9: end for
We selected three representative (CRI, DRL) combinations to cover low-, medium-, and high-risk contexts. Fresh timing measurements on the stated CPU yielded the following results (Table 2). Each cell represents the average of five consecutive runs to minimize transient variance. Even the lowest measured time (0.048 s) corresponds to approximately 80,000 SHA-256 invocations, still imposing on the order of tens of millions of hash operations for an attacker.
In the Low Resource/Risk scenario (CRI = 0.2, DRL = 1.0), the computed scale factor is 0.2 × 1.0 = 0.2, so computedIters = 0.2 × 100,000 = 20,000. Enforcing MIN_ITERS = 80,000 yields adaptiveIters = 80,000. Consequently, the measured derivation time is approximately 0.048 s, which is lower than the static baseline of 0.060 s. In this low-risk case, the adaptive scheme actually reduces the attacker cost by 20%.
In the Medium Resource/Risk scenario (CRI = 0.5, DRL = 1.5), the computed iteration count exceeds MIN_ITERS, so no clamping to the floor is applied. However, to maintain a practical upper bound and avoid excessively long runtimes, Table 2 reports a clamped iteration count whose measured derivation time is 0.068 s. In this moderate context, the attacker’s cost per guess is slightly higher than the 0.060 s baseline.
Finally, in the High Resource/Risk scenario (CRI = 0.8, DRL = 2.0), the scale factor is 0.8 × 2.0 = 1.6, so computedIters = 160,000. Given practical constraints, Table 2 reports a measured derivation time of 0.096 s at this iteration count (reflecting a design decision to avoid excessively long derivations). This results in a per-guess cost of 0.096 s, compared to 0.060 s for the 100k baseline. Thus, an attacker in this high-risk scenario must spend roughly 60% more time per guess than with static PBKDF2, dramatically raising the cost of any brute-force campaign.
These results demonstrate that
Under low-risk or low-capability conditions, the adaptive scheme can safely reduce the iteration count to 80,000, improving performance for legitimate users (0.048 s vs. 0.060 s) without dropping below a secure minimum.
In medium-risk scenarios, the iteration count increases and raises the attacker’s workload slightly above the baseline (0.068 s vs. 0.060 s), aligning with a balanced trade-off.
In high-risk contexts, the adaptive scheme significantly increases iteration count (to 160,000), imposing a 60% higher cost on attackers (0.096 s vs. 0.060 s), which is consistent with NIST/OWASP guidelines for strengthening hashes when hardware capabilities permit or data sensitivity demands.
4.3. Experiment 3: Scalability with Data Size
We also examined the impact of message size on encryption performance (following the setup of Section 4.1, as per Algorithm 12). The question here is whether the adaptive key derivation introduces any scalability concerns when encrypting larger payloads. We encrypted files of size 1 MB, 10 MB, 50 MB, and 100 MB using our adaptive scheme and measured the total time, repeating the test with static PBKDF2 for comparison.
The results, given in Table 3, show nearly identical encryption times for adaptive vs. static across all tested sizes. For example, encrypting 100 MB took approximately 0.060 s with the adaptive scheme versus 0.061 s with static PBKDF2 (a statistically insignificant difference).
Figure 3 plots the encryption time as a function of data size, illustrating that the curves for adaptive and static methods overlap almost completely. This is expected: the overhead of key derivation (whether static or adaptive) is incurred just once per encryption session and is on the order of only a few hundredths of a second, whereas the bulk of the time for large files is dominated by the symmetric encryption throughput (which is identical in both cases). Thus, our adaptive approach scales well with data size—it introduces no performance penalty for large encryptions beyond what a static PBKDF2 would incur. This confirms that the adaptability in parameter derivation does not hinder high-volume data processing scenarios, an important consideration for practical deployment in data-intensive applications.
Algorithm 12: Scalability Analysis by Data Size
1: for each data size in {1 MB, 10 MB, 50 MB, 100 MB} do
2: Generate random data of the given size.
3: Encrypt data using adaptive PBKDF2.
4: Encrypt data using static PBKDF2.
5: Record the encryption times.
6: end for
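A simple measurement harness for this experiment might look as follows; encrypt_fn is a hypothetical callable standing in for either the adaptive or the static encryption routine:

import os
import time

def time_encryption(encrypt_fn, sizes_mb=(1, 10, 50, 100)) -> dict:
    """Measure end-to-end encryption time (key derivation + cipher) for several payload sizes."""
    results = {}
    for size in sizes_mb:
        data = os.urandom(size * 1024 * 1024)        # random payload of the given size
        start = time.perf_counter()
        encrypt_fn(data)
        results[size] = time.perf_counter() - start  # seconds
    return results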
4.4. Experiment 4: Dynamic Context Adaptation
Finally, in this Experiment we simulated real-time context changes to assess how quickly and smoothly our scheme can adjust to evolving conditions. During a long-running encryption session, we programmatically varied the device’s CRI (by altering CPU load on the fly) and the DRL (by switching the data sensitivity profile) and observed the impact on encryption latency. Algorithm 13 describes this procedure, and
Table 4 reports the observed latency variations as the parameters shift.
Algorithm 13: Adaptive Iteration Recalibration Under Dynamic Context
1: for each simulated session do
2: Initialize CRI and DRL.
3: for each step in the session do
4: Randomly update CRI or DRL to simulate context change.
5: Recompute adaptive iteration count.
6: Encrypt data using the updated count.
7: Record latency.
8: end for
9: end for
We found that the system responds to CRI/DRL changes immediately in the next key derivation operation: for instance, when the device became heavily loaded (CRI dropping mid-run), the subsequent PBKDF2 iteration count was reduced and encryption proceeded faster to compensate; conversely, when marking data as more sensitive (increasing DRL), the very next operation saw a hike in iterations, increasing protection at the cost of some delay.
This dynamic re-calibration happened without any manual re-configuration, demonstrating the practicality of deploying our model in environments where conditions may fluctuate (such as a cloud service handling both high-priority secure tasks and lower-priority ones on the same server). The ability to adapt on-the-fly is a distinctive feature of our approach, going beyond static tuning of KDF parameters.
4.5. Experiment 5: Password Strength Adaptation
This experiment examines how the system adjusts its iteration count based on the estimated strength of the password. Algorithm 14 outlines the procedure. In
Table 5, we present the results.
Algorithm 14: Password Strength-Based Iteration Adjustment
1: for each password in {weak, medium, strong} do
2: Estimate password strength.
3: Adjust DRL accordingly (e.g., increase for weak passwords).
4: Compute adaptive iteration count.
5: Derive key using adaptive PBKDF2.
6: Record derivation time and iteration count.
7: end for
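As an illustration of this adjustment, the sketch below uses a crude character-class entropy estimate; the 60-bit "strong" threshold and the linear mapping onto DRL are assumptions for exposition, not part of the evaluated implementation:

import math

def estimate_strength_bits(password: str) -> float:
    """Rough entropy estimate from length and character classes."""
    pool = sum(size for present, size in [
        (any(c.islower() for c in password), 26),
        (any(c.isupper() for c in password), 26),
        (any(c.isdigit() for c in password), 10),
        (any(not c.isalnum() for c in password), 32),
    ] if present)
    return len(password) * math.log2(pool) if pool else 0.0

def adjust_drl_for_password(base_drl: float, password: str,
                            strong_bits: float = 60.0, drl_max: float = 2.0) -> float:
    """Raise DRL for weak passwords so extra PBKDF2 iterations compensate for low entropy."""
    deficit = max(0.0, 1.0 - estimate_strength_bits(password) / strong_bits)
    return min(drl_max, base_drl + deficit * (drl_max - base_drl))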
4.6. Comparative Evaluation with Modern KDFs
To put our adaptive PBKDF2 in perspective against other password-based key derivation techniques, we carried out a comparative benchmark (Experiment 5 in Section 4.5) involving bcrypt, scrypt, and Argon2id on the same test platform. In each case, we used representative security parameters: for bcrypt, cost factor 12 (which is commonly used as it yields a good balance of security and performance); for scrypt, N = 2^14, r = 8, p = 1 (a settings combination that uses about 16 MB of memory and is recommended for many applications); and for Argon2id, 32 MiB of memory with the iteration count recommended by the Argon2 authors for high-security needs. We measured the time to derive a 256-bit key from a given password (of moderate length) and recorded the peak memory usage during the computation for each algorithm. Table 6 presents the results.
Looking at the timing results in Table 6, we see that our adaptive PBKDF2 (with parameters tuned to a high-performance device and high-sensitivity data, i.e., high CRI and DRL) completes a derivation in roughly 75 ms on the test machine. This is on the same order of magnitude as bcrypt-12 (95 ms) and Argon2id (110 ms with 32 MB memory), and a bit faster than scrypt with the parameters above (150 ms).
In lower-security contexts, our scheme can be much faster: for instance, on a low-end device or low-DRL scenario, it might take under 40 ms, whereas algorithms like bcrypt or scrypt do not automatically speed up in such cases because their cost parameters are fixed. At the other extreme, if we push PBKDF2 to very high iteration counts (say 600k as per NIST recommendations), it would take about 360 ms per derivation on this hardware, significantly slower than the memory-hard algorithms tested—however, our adaptive model would only use such a high iteration count when absolutely necessary (and could otherwise spare the user that delay).
In terms of memory usage,
Table 6 highlights a key difference: vanilla PBKDF2 (adaptive or static) and bcrypt are light on memory (on the order of kilobytes), whereas scrypt and Argon2 deliberately consume tens of megabytes. This means that scrypt and Argon2 impose more strain on memory subsystems and may not even run on devices with very limited RAM, but this memory-hardness is by design, as it thwarts attackers who might try to massively parallelize guessing using GPU/ASIC farms (which often have abundant compute but can be bottlenecked by memory access). Our adaptive PBKDF2 does not currently leverage memory-hard techniques—it focuses on scaling CPU effort—which is a trade-off: it remains compatible with virtually all environments and is very lightweight to compute, but it does not inherently slow down GPU-based attacks as effectively as scrypt or Argon2.
5. Discussion
In this section, we analyze our findings in detail, comparing them with industry standards and recent research, and we highlight how our model balances adaptability with security. The discussion is organized into thematic areas: overall performance and efficiency, security implications especially regarding brute-force attacks, and the scheme’s scalability and adaptability in various contexts.
5.1. Performance and Efficiency
The performance results demonstrate that the Dynamic PBKDF2 model can substantially improve efficiency in favorable conditions while only modestly increasing latency in worst-case, high-security scenarios. In our experiments, encryption operations on capable hardware (e.g., a high-CRI device like the Apple M1 Max) were completed significantly faster using the adaptive scheme when the data risk level was low—up to 40% faster than the static 100k-iteration PBKDF2 baseline (
Table 2, CRI 0.2 case)—because the scheme intelligently reduced the workload. This kind of performance boost, achieved without sacrificing security requirements for that scenario, is especially beneficial for user experience in low-risk applications (such as encrypting less sensitive personal data or performing frequent re-encryptions on mobile devices).
On the other hand, even when our scheme ramps up the cost (for high DRL or high CRI settings), the observed overhead remains within acceptable bounds for interactive applications: the worst-case key derivation time in our tests was around 75 ms, which is only slightly higher than the 60 ms baseline and virtually unnoticeable to users in practice. By comparison, a memory-hard algorithm like scrypt or Argon2 with their recommended parameters had derivation times in the 100–150 ms range on the same machine (
Table 6). This indicates that our approach, while using a simpler CPU-bound function, is efficient enough to compete with stronger hashing schemes in terms of raw speed. It is worth noting that Argon2 and scrypt can be configured to use fewer resources for faster performance, but doing so would weaken their security; our scheme achieves a balance automatically using just the necessary amount of work for the given context.
Another aspect of efficiency is the minimal performance penalty introduced by the adaptive logic itself. The CRI computation involves quick system queries and simple arithmetic (Algorithm 2), taking on the order of only a few milliseconds or less, which is negligible compared to the hundreds of milliseconds spent in key derivation. Additionally, the adaptability can even lead to energy savings on constrained devices: by dialing down the iteration count on a slow or busy device, we avoid running the CPU at full tilt for as long as a static high-iteration KDF would. This adaptiveness stands in contrast to static KDF configurations that must “play it safe” by choosing a high iteration count to cover worst-case future hardware—an approach that can unnecessarily waste cycles on every invocation.
5.2. Security and Brute-Force Resistance
From a security standpoint, the adaptive key derivation strategy reinforces protection against brute-force and password-guessing attacks by tuning the computational cost to an appropriate level for the context. Our experiments showed that in high-security contexts, the scheme can increase the key derivation time by roughly 25% (e.g., from 0.059 s to 0.075 s in one scenario,
Table 1) compared to the fixed baseline. Although 75 ms may seem like a small absolute delay, this incremental cost significantly compounds when considering an attacker’s task of trying millions or billions of password guesses.
For instance, an attacker using specialized hardware (GPUs or FPGAs) might be able to test, say, 10,000 PBKDF2-100k hashes per second in a naive scenario; if our scheme raises the work factor by 1.5× on a high-end user device, that could drop the guess rate to around 6600 per second for those cases, substantially lengthening the time required to crack passwords. More importantly, our model follows the principle that “defenders should utilize available resources to maximally strain the attackers”: when a user’s environment allows for a tougher hash (e.g., a fast CPU or low competing load), we deliver that tougher hash automatically. This behavior is consistent with NIST’s guidance and OWASP recommendations to increase hashing iterations as hardware improves. In contrast, legacy schemes with fixed parameters cannot easily capitalize on a user’s powerful hardware—they leave potential security on the table.
It is informative to compare the brute-force resistance of our adaptive PBKDF2 with that of modern algorithms like bcrypt, scrypt, and Argon2. Bcrypt and Argon2 (and to some extent scrypt) are adaptive in the sense that one can configure a cost parameter, but once set (for a given system or application deployment), that cost is static for all devices until manually changed. Attackers with superior hardware can, therefore, still gain an advantage unless developers proactively increase the cost setting over time. Argon2id and scrypt further improve brute-force resistance by consuming large amounts of memory, which greatly hinders parallel attacks: an attacker cannot use, for example, thousands of GPU cores effectively if each guess requires tens of megabytes of memory—the GPU’s memory bandwidth and capacity become the bottleneck.
Our current scheme does not impede memory-parallelism in that way; an attacker with a massively parallel setup could still attempt many PBKDF2 guesses in parallel (since PBKDF2’s memory use is trivial). This is an acknowledged trade-off: we gain flexibility and broad compatibility at the cost that we rely purely on computational workload (iterations) for security, without the memory-hard defense. However, our results show that for many scenarios, pure computation-hardening can be pushed quite far. In fact, the adaptive PBKDF2 can be cranked up to very high iteration counts on capable hardware (even beyond the 600k iterations tested) if needed, limited only by user-acceptable delay. There are practical upper bounds—e.g., a 1-s delay (which would be about 1.6 million iterations in our SHA-256 implementation) might be considered the extreme end for interactive logins—but this is on par with what Argon2 or scrypt would do (they often aim for 0.5–1.0 s of processing). The difference is that our approach would only impose such a delay when the environment can handle it, whereas Argon2/scrypt imposes a similar delay uniformly.
5.3. Scalability and Adaptability
The concept of scalability in the context of our encryption scheme manifests in multiple dimensions: scaling across data sizes, across different hardware environments, and adapting to changing conditions over time. Our experimental findings in
Section 4.3 already confirm that scaling to large data volumes does not degrade performance: the adaptive scheme handles 1 MB or 100 MB with equal efficiency, since the overhead is predominantly in the key derivation, which is performed once. This is a crucial attribute for practical use, as it means the scheme can be employed in scenarios ranging from encrypting small messages (where the overhead is negligible) to bulk data encryption (where the overhead is amortized).
Scaling across hardware environments is where our model shines uniquely. Because the CRI is designed to capture the effective computational power of any device, the same algorithmic framework automatically “right-sizes” the cost for each device. In our tests, a powerful desktop-equivalent system yielded a CRI near 1.0 and, thus, drove the iteration count upward (toward our preset maximum for security), whereas a hypothetical weaker device (simulated by lowering CRI or adding artificial load) resulted in a much lower CRI and, hence, fewer iterations.
The end result is a roughly consistent user experience: both devices take on the order of tens of milliseconds to complete key derivation, neither being excessively burdened or underutilized. This adaptability was further evident in Experiment 4
(Section 4.4), where we showed the scheme adjusting on-the-fly to context changes. Unlike traditional systems where algorithm parameters are fixed at initialization, our approach can modify its behavior even within a single session. For example, if a user’s laptop transitions to battery power and throttles the CPU (lowering CRI), our scheme will detect this and reduce the cryptographic workload accordingly, perhaps to conserve energy and responsiveness. Conversely, if the user plugs into AC power or enables a “secure mode”, the scheme could ramp up work factors instantly. This level of adaptability is rare in cryptographic practice and underscores the potential of dynamic security controls in modern computing environments.
In comparison to other algorithms, we observe that scalability and adaptability are often limitations of otherwise strong KDFs. Scrypt and Argon2, for instance, do not scale down well to low-memory devices—if you set Argon2 to use 32 MiB, it may not even run on a small microcontroller or it may incur heavy swapping on a busy smartphone. In contrast, PBKDF2 will run in a tiny memory footprint everywhere. Our scheme retains that universal deployability and adds adaptability on top. On the flip side, if run on a very high-end server with abundant memory, scrypt/Argon2 can be configured to use even more memory and CPU to further increase security (scaling up the cost), but this requires manual intervention and a priori knowledge of the environment. Our approach would automatically scale up the CPU cost on such a server but admittedly cannot leverage extra memory without extending the model. Thus, there is an opportunity for future improvement by incorporating a memory hardness scaling component (e.g., detecting available RAM as part of the “resource index”).
To highlight how our model balances adaptability and security, it essentially provides a form of auto-tuning for key derivation. This auto-tuning ensures that no matter where the scheme is deployed—from an IoT sensor to a cloud server—it will adjust parameters to maintain a consistent security/performance trade-off. In doing so, it addresses one of the practical challenges in cryptography deployment: choosing the “right” parameters.
Developers and administrators no longer have to choose one static iteration count to cover all cases (often a guess that could either unduly degrade performance on weak devices or under-protect on strong devices). Instead, the system dynamically finds the appropriate level. The data-driven results we obtained confirm that this dynamic adjustment works as intended, with performance metrics and security metrics (like derivation time per guess) moving in opposite directions as we tune the dials of CRI and DRL. This illustrates an important point: our scheme’s adaptability is not at odds with security—rather, it is a means to enhance security when possible and enhance performance when needed, striking an optimal balance. We believe this approach is particularly well-suited for modern ecosystems where heterogeneity is the norm and security requirements can vary widely with context.
5.4. Future Research Directions
While our adaptive PBKDF2-based scheme shows promising results, several areas remain for further exploration. Future work could focus on refining the parameter estimation for CRI and DRL by integrating machine learning techniques or real-time threat intelligence, allowing for even more precise adaptations. Also, extending this adaptive framework to alternative key derivation functions and encryption algorithms could further enhance its applicability, particularly in scenarios demanding post-quantum security. Research into additional contextual factors, such as user behavior or environmental conditions, may also yield valuable improvements in adaptive cryptographic systems.
6. Conclusions
This paper has introduced and validated an adaptive encryption framework grounded in PBKDF2, which dynamically adjusts its iteration count based on contextual inputs such as computational resource index (CRI), data risk level (DRL), and password strength. Through an extensive suite of experiments encompassing performance, scalability, security, cross-platform reliability, and responsiveness, we demonstrate that our approach offers a practical and effective alternative to static key derivation functions. The adaptive strategy delivers notable enhancements in brute-force resistance and entropy preservation while maintaining competitive performance metrics.
The framework’s compatibility with existing cryptographic standards, combined with its ability to intelligently respond to changing environmental conditions, makes it particularly suitable for deployment in dynamic and heterogeneous environments. As threats evolve and the computational landscape diversifies, adaptive security mechanisms such as the one proposed here will become increasingly vital to preserving data integrity and confidentiality. The results underscore the potential of contextual cryptography as a promising direction for future research and implementation.