Article

A Hyper-Dual Number Approach to Higher-Order Derivative Computation

Department of Mathematics, Dongguk University, Gyeongju 38066, Republic of Korea
Axioms 2025, 14(8), 641; https://doi.org/10.3390/axioms14080641
Submission received: 20 July 2025 / Revised: 1 August 2025 / Accepted: 10 August 2025 / Published: 18 August 2025
(This article belongs to the Special Issue Mathematical Analysis and Applications IV)

Abstract

This paper develops a theoretical framework for the computation of higher-order derivatives based on the algebra of hyper-dual numbers. Extending the classical dual number system, hyper-dual numbers provide a natural and rigorous mechanism for encoding and propagating derivative information through function composition and arithmetic operations. We formalize the underlying algebraic structure, define generalized hyper-dual extensions of scalar functions via multidimensional Taylor expansions, and establish consistency with standard differential calculus. The proposed approach enables exact computation of partial derivatives and mixed higher-order derivatives without resorting to symbolic manipulation or approximation schemes. We further investigate the algebraic properties and closure under differentiable operations, illustrating the method’s potential for unifying aspects of automatic differentiation with multivariable calculus. This study contributes to the theoretical foundation of algorithmic differentiation and highlights hyper-dual numbers as a precise and elegant tool in computational differential analysis.

1. Introduction

A dual number is a number of the form $z = a + b\varepsilon$, where $a, b \in \mathbb{R}$ are real numbers and $\varepsilon$ is a special element with the property
$$\varepsilon^2 = 0 \quad \text{but} \quad \varepsilon \neq 0.$$
Thus, the set of dual numbers $\mathbb{D}$ can be formally described as follows:
$$\mathbb{D} = \{\, a + b\varepsilon \mid a, b \in \mathbb{R},\ \varepsilon^2 = 0 \,\}.$$
Dual numbers were introduced and developed mainly to simplify the mathematical treatment of infinitesimal quantities, particularly for differentiation and geometric transformations. In standard arithmetic involving real or complex numbers, one does not encounter a non-zero number whose square equals zero, as these number systems qualify as integral domains: the condition $ab = 0$ necessitates that either $a = 0$ or $b = 0$. However, in the realm of algebra, structures need not adhere to integral domain properties. When engaging with rings that incorporate zero-divisors, the aforementioned cancellation law fails to hold, allowing for the existence of elements $a \neq 0$ such that $a^2 = 0$. Such elements are classified as nilpotent (specifically of index 2 in this context).
The following are concrete examples:
(1) The non-zero element $2$ in $\mathbb{Z}/4\mathbb{Z}$, with $2^2 = 4 \equiv 0 \pmod{4}$;
(2) The non-zero element $\varepsilon$ in $\mathbb{R}[\varepsilon]/(\varepsilon^2)$, with $\varepsilon^2 = 0$ but $\varepsilon \neq 0$;
(3) The non-zero matrix $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ in the ring of $2 \times 2$ matrices, with $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}^2 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$;
(4) The non-zero residue class of $x$ in the coordinate ring $k[x]/(x^2)$, with $x^2 = 0$ in the quotient.
Each of these rings has zero-divisors, so nilpotent elements can appear. An element $a$ is nilpotent if $a^n = 0$ for some $n \geq 1$. Index 2 is the simplest case ($n = 2$). All nilpotents are zero-divisors. Nilpotents live in the nil-radical of a ring and behave like algebraic infinitesimals. For example, the dual numbers $\mathbb{R}[\varepsilon]/(\varepsilon^2)$ turn up in differential geometry: $\varepsilon$ models a first-order infinitesimal displacement because every higher-order term, $\varepsilon^2$ and beyond, vanishes.
Schemes allow nilpotents to encode embedded points and tangent directions. The coordinate ring $k[x]/(x^2)$ describes a double point—two copies of a point glued together infinitesimally. Dual (or hyper-dual) numbers underpin automatic differentiation: for $f(x + \varepsilon)$, one keeps only the coefficient of $\varepsilon$ because $\varepsilon^2 = 0$. Nilpotent matrices classify Jordan blocks and control the structure of linear operators. The existence of a non-zero element with square zero is not a paradox; it simply tells you that the algebraic system you are working in is not an integral domain. Once zero-divisors are allowed, cancellation fails, and nilpotent elements—self-annihilating numbers—can and do appear.
In 1873, Clifford [1] introduced dual numbers as infinitesimal quantities (Clifford algebra) with $\varepsilon^2 = 0$, studying kinematics and differential geometry. Kim and Shon [2] applied dual numbers to geometry and kinematics, particularly in dual quaternions, to simplify spatial transformations. Kim et al. [3] develop a rigorous Taylor-series calculus for functions taking values in the dual quaternion algebra with a central nilpotent unit. They specify differentiability compatible with the quaternionic structure and prove a Taylor expansion theorem: around any point where the function is (real/quaternionic) analytic, it admits a convergent series whose coefficients are uniquely determined by higher-order derivatives of its primal and dual parts. Dual numbers have extensive applications in robotics, rigid-body motion, and screw theory, representing rotations and translations compactly. They became fundamental in forward-mode Automatic Differentiation (AD): if $f : \mathbb{R} \to \mathbb{R}$, then automatic differentiation uses dual numbers via
$$f(a + b\varepsilon) = f(a) + b f'(a) \varepsilon,$$
directly extracting derivatives. Rall [4] analyzes time–memory complexity, checkpointing, and handling of loops, conditionals, and intrinsic functions, contrasting AD with finite differences and symbolic differentiation (avoiding expression swell). Rall also treats higher-order derivatives (Hessian-vector products, mixed modes, Taylor propagation) and practical implementation routes. Squire and Trapp [5] contrast this with forward/central finite differences (which suffer truncation/roundoff trade-offs) and demonstrate straightforward implementation by enabling complex arithmetic in existing codes. They discuss extensions (directional derivatives, higher-order formulas) alongside limitations, adopting an alternative to finite differences that is simple, stable, and highly accurate for derivative estimation. Walther [6] presents a unified framework for computing second- and higher-order derivatives of code-defined functions using automatic differentiation (AD), emphasizing techniques and trade-offs relevant to large-scale simulation. Walther systematizes how to obtain Hessians, third- and fourth-order tensors, and derivative actions (e.g., Hessian-vector or tensor-vector products).
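The dual-number rule above can be realized with a few lines of operator overloading. The sketch below is illustrative (the `Dual` class and `sin_d` helper are names introduced here, not from any particular AD library) and assumes only Python's standard library:

```python
# Minimal sketch of forward-mode AD with dual numbers (illustrative names).
# A dual number a + b*eps with eps**2 = 0 carries a value and a derivative.
import math

class Dual:
    def __init__(self, a, b=0.0):
        self.a = a  # real part: the function value
        self.b = b  # dual part: the derivative

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a1 + b1 eps)(a2 + b2 eps) = a1 a2 + (a1 b2 + b1 a2) eps
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __rmul__ = __mul__

def sin_d(z):
    # f(a + b eps) = f(a) + b f'(a) eps, applied with f = sin
    return Dual(math.sin(z.a), z.b * math.cos(z.a))

# Differentiate g(x) = x * sin(x) at x = 2 by seeding the dual part with 1.
x = Dual(2.0, 1.0)
y = x * sin_d(x)
# y.a is g(2); y.b is g'(2) = sin(2) + 2*cos(2), exact to machine precision
```

Seeding the dual part with 1 implements the evaluation $f(a + \varepsilon)$; the derivative appears as the coefficient of $\varepsilon$ with no truncation error.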
Quaternions extend complex numbers to four dimensions, providing a powerful, noncommutative algebra to represent rotations and orientations efficiently in three-dimensional space. A quaternion is a number of the form $q = a + bi + cj + dk$, where $a, b, c, d \in \mathbb{R}$, and the imaginary units $i, j, k$ follow these fundamental rules:
$$i^2 = j^2 = k^2 = ijk = -1.$$
The multiplication rules for quaternions are as follows:
$$ij = k, \quad jk = i, \quad ki = j.$$
Thus, quaternions form a noncommutative, associative algebra denoted by the following:
$$\mathbb{H} = \{\, q = a + bi + cj + dk \mid a, b, c, d \in \mathbb{R} \,\}.$$
Hamilton [7] introduced quaternions in 1843, aiming to generalize complex numbers to three dimensions; however, he found a consistent system only in four dimensions. Maxwell, Gibbs, and Heaviside initially used quaternions to describe rotations and electromagnetism; they were later supplanted by vector calculus (Gibbs–Heaviside vector analysis). Since the twentieth century, quaternions have seen renewed interest in robotics, computer graphics, and aerospace engineering owing to their compactness and efficiency in representing rotations in three-dimensional space, making them essential to modern technologies. Szirmay-Kalos [8] develops the algebraic framework for higher-order duals, proving closure under composition and giving recurrence formulas for propagating derivatives through addition, multiplication, division, and standard intrinsic functions. Szirmay-Kalos also presents seeding strategies for multivariate functions to recover mixed partials/Hessian tensors, including multidirectional seeds.
Hyper-dual numbers extend dual numbers by introducing a second nilpotent element, allowing for the efficient and precise calculation of both first- and second-order derivatives simultaneously. This algebraic structure is crucial to advanced computational techniques, especially in automatic differentiation and optimization. A hyper-dual number is defined as an extension of real numbers using two distinct nilpotent elements, $\varepsilon_1, \varepsilon_2$, each satisfying the following:
$$\varepsilon_1^2 = 0, \quad \varepsilon_2^2 = 0, \quad \text{but} \quad \varepsilon_1 \varepsilon_2 = \varepsilon_2 \varepsilon_1 \neq 0.$$
Thus, a hyper-dual number $z$ is expressed as follows:
$$z = a + b \varepsilon_1 + c \varepsilon_2 + d \varepsilon_1 \varepsilon_2, \qquad a, b, c, d \in \mathbb{R}.$$
Formally, the algebra of hyper-dual numbers $\mathbb{D}_2$ can be expressed as follows:
$$\mathbb{D}_2 = \{\, z = a + b \varepsilon_1 + c \varepsilon_2 + d \varepsilon_1 \varepsilon_2 \mid a, b, c, d \in \mathbb{R} \,\}.$$
Hyper-dual numbers can be seen as an extension of dual numbers designed explicitly to simplify the computation of second-order derivatives without truncation error or numerical approximation. In automatic differentiation (AD), derivatives—especially second-order—are computed accurately without numerical approximations or round-off errors. In optimization, Hessian and gradient calculations are utilized in optimization algorithms. In computational fluid dynamics (CFD), efficient sensitivity analysis is essential for aerodynamic design. The computation of derivatives lies at the heart of many branches of mathematics, including analysis, differential geometry, and numerical methods. In applied contexts, derivatives play a crucial role in optimization, sensitivity analysis, and the numerical solution of differential equations. Traditionally, derivatives are computed either symbolically, which can be algebraically complex and computationally intensive, or numerically, using finite difference approximations, which are susceptible to truncation and rounding errors. Fike and Alonso [9] show how standard operators (addition, multiplication, division, composition, exp/log/trig, etc.) should be overloaded to preserve second-order accuracy, and contrast the approach with finite differences and complex-step methods—highlighting the absence of truncation error and strong numerical stability. Griewank and Walther [10] proposed the standard, mathematically rigorous introduction to algorithmic differentiation (AD). They develop AD from first principles—the chain rule applied to a program’s computational graph—and show how to obtain exact (to working precision) derivatives of functions defined by code. Imoto et al. [11] develop a rigorous bridge between hyper-dual arithmetic and matrix calculus for automatic differentiation. 
They introduce a faithful matrix representation of hyper-dual numbers and prove a fundamental theorem: the representation is an algebra homomorphism that preserves addition, multiplication, and composition with elementary functions.
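As a concrete illustration of how hyper-dual arithmetic propagates second-order information, consider the following minimal sketch; the `HyperDual` class and `exp_hd` helper are hypothetical names introduced here, and every product term containing $\varepsilon_1^2$ or $\varepsilon_2^2$ is discarded:

```python
# Sketch of hyper-dual arithmetic (illustrative, not a specific library's API).
# z = a + b*eps1 + c*eps2 + d*eps1*eps2, with eps1**2 = eps2**2 = 0.
import math

class HyperDual:
    def __init__(self, a, b=0.0, c=0.0, d=0.0):
        self.a, self.b, self.c, self.d = a, b, c, d

    def __mul__(self, other):
        # Multiply and drop all terms containing eps1**2 or eps2**2.
        return HyperDual(
            self.a * other.a,
            self.a * other.b + self.b * other.a,
            self.a * other.c + self.c * other.a,
            self.a * other.d + self.b * other.c + self.c * other.b + self.d * other.a,
        )

def exp_hd(z):
    # f(z) = f(a) + (b eps1 + c eps2 + d eps1 eps2) f'(a) + b c eps1 eps2 f''(a),
    # applied with f = f' = f'' = exp.
    e = math.exp(z.a)
    return HyperDual(e, z.b * e, z.c * e, z.d * e + z.b * z.c * e)

# Seed x = a + eps1 + eps2: for g(x), the eps1 coefficient is g'(a)
# and the eps1*eps2 coefficient is g''(a), both exact.
a = 0.5
x = HyperDual(a, 1.0, 1.0, 0.0)
g = x * exp_hd(x)   # g(x) = x * e^x
# g.b == g'(a) = (1 + a) e^a;  g.d == g''(a) = (2 + a) e^a
```

Both derivatives emerge from a single function evaluation, with no step size and no cancellation.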
To address these challenges, alternative algebraic structures such as dual numbers have been introduced. Dual numbers, defined by the relation $\varepsilon^2 = 0$ for a nilpotent element $\varepsilon$, provide a first-order Taylor expansion mechanism that enables exact computation of first derivatives through arithmetic overloading. While powerful, classical dual numbers are inherently limited to first-order differentiation and do not extend naturally to higher-order or mixed partial derivatives in multivariable settings. To overcome these limitations, we consider the algebra of hyper-dual numbers, a higher-order generalization of dual numbers capable of encoding second- and higher-order derivative information. Hyper-dual numbers incorporate multiple nilpotent generators satisfying appropriate commutation and annihilation properties, thereby allowing for the propagation of both pure and mixed partial derivatives in a structured and exact manner. Building upon the well-known Complex-Step derivative technique (for first derivatives), Millwater et al. [12] construct a framework where second-order partial derivatives and Hessian components can be extracted accurately without subtractive cancellation errors. They achieve this by evaluating a target function on specially constructed dual-complex inputs and using the algebra to isolate higher-order derivative terms. Peón-Escalante et al. [13] derive compact closed-form higher-order kinematic formulas via repeated dual-number evaluations and mixed seeding strategies, show how to assemble Jacobians/Hessians needed for sensitivity and synthesis, and present efficient algorithms that slot into existing analysis pipelines with minimal code changes. Their paper develops a systematic framework for obtaining higher-order kinematic quantities using dual-number arithmetic.
In this paper, we develop a rigorous mathematical framework for computing higher-order derivatives using hyper-dual numbers. We begin by formally defining hyper-dual numbers and their algebraic properties, including their role in higher-order Taylor expansions. We then construct hyper-dual extensions of real-valued functions and show how standard calculus rules—such as the chain rule and product rule—naturally emerge within this algebraic setting. The main contributions of this paper are as follows: In Section 2, we provide a detailed algebraic construction of hyper-dual numbers suitable for encoding second- and higher-order derivative information. In Section 3, we establish the consistency of hyper-dual arithmetic with classical differential calculus, including partial and mixed derivative formulations. In Section 4, we demonstrate that this approach enables exact and symbol-free computation of derivatives for smooth functions, offering theoretical advantages over both finite difference methods and symbolic differentiation.
This paper situates hyper-dual number calculus within a broader mathematical context, bridging ideas from differential algebra, automatic differentiation, and multivariable calculus. The proposed framework not only clarifies the theoretical underpinnings of hyper-dual differentiation but also lays the foundation for further developments in algorithmic differentiation and functional analysis.

2. Complex-Step Approximation

Generalized complex numbers are algebraic extensions of real numbers that introduce an imaginary unit whose square can equal any real number $\alpha$. They unify various algebraic and geometric ideas, improving mathematical analysis across multiple science and engineering fields. A generalized complex number extends the classical complex number system. It is usually defined by introducing a new imaginary unit $j$ that satisfies $j^2 = \alpha$, where $\alpha \in \mathbb{R}$. Thus, generalized complex numbers (also called hypercomplex numbers or split-complex numbers, depending on $\alpha$) are represented in the following form:
$$z = a + bj, \qquad a, b \in \mathbb{R}, \quad j^2 = \alpha.$$
Depending on the choice of $\alpha$, generalized complex numbers yield different algebraic structures: complex numbers ($\alpha = -1$) have the standard imaginary unit $i^2 = -1$; dual numbers ($\alpha = 0$) contain a nilpotent unit $\varepsilon^2 = 0$; split-complex (hyperbolic) numbers ($\alpha = +1$) have a unit with hyperbolic geometry $j^2 = 1$. In 1848, Cockle [14] introduced the term Tessarines, generalized numbers with imaginary units whose squares may differ from $-1$. Clifford [1] introduced algebras now called Clifford algebras, which generalize complex numbers and quaternions and include dual and split-complex numbers. Gentili et al. [15] and others [16,17] explored various hypercomplex systems, including dual numbers and split-complex numbers. Generalized complex numbers are applied in geometry, relativity, quantum theory, optimization, and signal processing. Thus, generalized complex numbers originated from a desire to generalize algebraic structures and facilitate applications in mathematics and physics.
Calculating derivatives is an essential part of mathematical analysis and numerical methods, and various approaches exist to accomplish this, each with its own advantages and disadvantages. The finite-difference method is one of the most common techniques for numerical differentiation. It estimates the derivative of a function using values at discrete points. Specifically, it uses the difference between function values at these points divided by the change in the input variable. The finite-difference method can be further divided into forward, backward, and central differences, each offering different levels of accuracy. Derived from the Taylor Series:
$$f(x + h) = f(x) + h f'(x) + \frac{h^2}{2} f''(x) + \frac{h^3}{6} f'''(x) + \cdots.$$
While this method is relatively simple to implement and understand, it may encounter issues with truncation errors and requires careful selection of the step size to balance accuracy and numerical stability. Forward Difference—First-Order Approximation:
$$f'(x) = \frac{f(x+h) - f(x)}{h} + O(h).$$
Central Difference—Second-Order Approximation:
$$f'(x) = \frac{f(x+h) - f(x-h)}{2h} + O(h^2).$$
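The two difference quotients above can be compared numerically. This small sketch (ours, for illustration) checks their convergence orders on $f(x) = e^x$ at $x = 1$:

```python
# Illustrative check that forward difference is O(h) and central is O(h^2).
import math

f, x, exact = math.exp, 1.0, math.exp(1.0)

for h in (1e-2, 1e-3):
    forward = (f(x + h) - f(x)) / h
    central = (f(x + h) - f(x - h)) / (2 * h)
    # Shrinking h by 10x cuts the forward error ~10x but the central error ~100x.
    print(h, abs(forward - exact), abs(central - exact))
```

The printed errors shrink at first order for the forward quotient and at second order for the central quotient, matching the $O(h)$ and $O(h^2)$ terms above.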
Also, the Complex-Step Approximation technique provides a more precise alternative to traditional finite-difference methods by applying a small imaginary perturbation to the input variable. Taylor Series with an imaginary step:
$$f(x + hi) = f(x) + h f'(x) i - \frac{h^2}{2} f''(x) - \frac{h^3}{6} f'''(x) i + \cdots,$$
$$f(x + hi) = \left[ f(x) - \frac{h^2}{2} f''(x) + O(h^4) \right] + i \left[ h f'(x) - \frac{h^3}{6} f'''(x) + O(h^4) \right].$$
First-Derivative Approximation:
$$f'(x) = \frac{\mathrm{Im}[f(x + hi)]}{h} + O(h^2).$$
The first derivative is obtained from the imaginary part of the function evaluated at this complex point. A key advantage of the Complex-Step method is that it offers much higher accuracy without the subtraction errors commonly seen in finite differences. It is also computationally efficient for problems where function evaluations are not overly costly. However, its implementation can be limited by the nature of the function being differentiated, especially if it is not easily computed for complex numbers.
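A minimal sketch of the complex-step first derivative, relying on Python's `cmath` to evaluate functions at complex arguments (the function name `csd` is illustrative):

```python
# Complex-step first derivative: f'(x) ~= Im[f(x + i h)] / h.
import cmath

def csd(f, x, h=1e-20):
    # No subtraction of nearly equal quantities occurs, so h can be
    # taken extremely small without loss of significant digits.
    return (f(x + 1j * h)).imag / h

d = csd(cmath.sin, 0.7)
# d agrees with cos(0.7) essentially to machine precision
```

Note the contrast with finite differences: here $h = 10^{-20}$ is perfectly safe, because the derivative is read off the imaginary part rather than formed by a difference quotient.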
The Complex-Step Approximation is an effective numerical method that provides several key benefits, especially for evaluating derivatives. One major advantage of this approach is its high resistance to subtractive cancellation error, a common problem in traditional numerical differentiation methods. This makes Complex-Step particularly useful for achieving accurate derivative estimates. One of the main features of the Complex-Step Approximation is its ability to use an arbitrarily small step size while still accurately computing first derivatives. This removes the usual difficulties in choosing an appropriate step size, which often requires heuristic adjustments to balance precision and stability. By using the complex extension of the input variable, the approximation can stay accurate without the risk of losing significant digits through subtractive cancellation. Apart from its high numerical accuracy, implementing the Complex-Step Approximation is quite easy. The mathematical basis is simple enough for practitioners to quickly apply it in various situations, making it a useful tool for those working in numerical analysis. However, an important question arises when considering higher-order derivative calculations, such as second derivatives. The question of whether the Complex-Step Approximation maintains its advantageous properties when used for second-derivative evaluations is critical for users looking to expand its application. Further investigation into this will determine whether the approximation continues to provide the same level of accuracy and resistance to numerical errors or if additional factors need to be considered.
The Complex-Step method is a useful numerical technique for approximating derivatives of functions. When examining the second derivative of a function $f(x)$,
$$f''(x) = \frac{2\left( f(x) - \mathrm{Re}[f(x + hi)] \right)}{h^2} + O(h^2),$$
this approach can be used to create more precise formulas. This method estimates the second derivative using two different complex steps instead of just a standard finite difference approach. It involves adding a small imaginary part to the input variable, which results in more accurate derivative calculations. However, despite this improvement, it is important to recognize that the derived formulas can still be affected by subtractive cancellation errors. This type of error happens when two close numerical values are subtracted, leading to a significant loss of precision in the result. Therefore, while the Complex-Step method reduces some numerical issues, practitioners must remain aware of the inherent risks of numerical instability in their calculations.
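The cancellation risk described above can be observed directly. The following sketch (illustrative) evaluates the second-derivative formula for $f = \sin$ and prints the error as $h$ shrinks:

```python
# Illustrative check of f''(x) ~= 2 (f(x) - Re[f(x + i h)]) / h**2.
# Unlike the first-derivative formula, this involves subtracting nearly
# equal quantities, so rounding errors are amplified by the 1/h**2 factor.
import cmath, math

def cs_second(f_c, f_r, x, h):
    return 2.0 * (f_r(x) - (f_c(x + 1j * h)).real) / h**2

x, exact = 0.7, -math.sin(0.7)   # f'' = -sin for f = sin
for h in (1e-4, 1e-6, 1e-8):
    approx = cs_second(cmath.sin, math.sin, x, h)
    # The error first shrinks with h, then grows again as rounding
    # noise in the subtraction is magnified by the division by h**2.
    print(h, abs(approx - exact))
```

For moderate $h$ the formula is accurate, but for very small $h$ the subtraction $f(x) - \mathrm{Re}[f(x + hi)]$ falls below machine precision and the result degrades, exactly the behavior the text warns about.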
In numerical differentiation, the Complex Step Approximation effectively reduces the impact of subtractive cancellation errors. It does this by using the imaginary part of a complex number, where the first derivative becomes the dominant term. As a result, this method enables us to directly extract the derivative without relying on a difference quotient, which may be susceptible to significant errors due to cancellation. To elaborate on this method, it is suggested that in certain applications, the second derivative should be the primary term in the non-real part of a complex function. This means that by combining real and imaginary components in a structured way, we can achieve more accurate representations of derivatives. To thoroughly analyze and accurately model physical phenomena, it is often essential to obtain both the first and second derivatives of the relevant functions. These derivatives provide crucial insights into the behavior of a system. To effectively differentiate using the Complex Step Approximation, it is essential that each derivative—both first and second—serves as the leading term in its respective calculation. This guarantees that the primary contribution of interest is accurately reflected in the results. The suggested methodology emphasizes the use of a complex number framework, which enables the handling of multiple non-real components. This approach not only improves precision but also expands the scope of analysis in numerical methods by incorporating various aspects of the data involved.
In the context of multi-variable functions, calculating cross derivatives depends significantly on previously established results. This means that the accuracy of cross derivatives is directly tied to the precision of the initial calculations. As the complexity of the function increases, this can lead to compounded errors.
$$\frac{\partial^2 f(x, y)}{\partial x \partial y} \approx \frac{1}{2} \left[ \frac{2\left( f(x, y) - \mathrm{Re}[f(x + hi,\, y + hi)] \right)}{h^2} - \frac{\partial^2 f(x, y)}{\partial x^2} - \frac{\partial^2 f(x, y)}{\partial y^2} \right]$$
It is important to understand that errors in these calculations can accumulate. As each variable is modified and reevaluated, any inaccuracies may propagate through subsequent operations, negatively impacting the overall reliability of the results. To reduce the impact of cumulative errors in cross-derivative calculations, it is advisable to use perturbation techniques. Specifically, perturbations should be introduced for each variable independently, employing distinct non-real components. This method helps to isolate and minimize the influence of errors related to any specific variable. Ultimately, this method highlights the importance of incorporating various non-real components during the perturbation process. This approach can improve the robustness of our calculations and yield more accurate results when analyzing complex multi-variable functions.
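The dependence of the cross derivative on previously computed pure derivatives can be sketched on a simple test function; all names here are illustrative, and the pure second derivatives are computed first and then subtracted from the joint perturbation:

```python
# Cross derivative via joint complex perturbation (illustrative sketch).
# 2(f - Re[f(x+ih, y+ih)])/h^2 ~= f_xx + 2 f_xy + f_yy, so f_xy is
# recovered by subtracting the pure second derivatives.
import cmath, math

def f_c(x, y):                   # test function f(x, y) = sin(x) * y**2
    return cmath.sin(x) * y**2

def pure_xx(x, y, h):            # 2(f - Re[f(x+ih, y)])/h^2  ->  f_xx
    return 2 * (f_c(x, y).real - f_c(x + 1j * h, y).real) / h**2

def pure_yy(x, y, h):            # 2(f - Re[f(x, y+ih)])/h^2  ->  f_yy
    return 2 * (f_c(x, y).real - f_c(x, y + 1j * h).real) / h**2

def mixed_xy(x, y, h):
    both = 2 * (f_c(x, y).real - f_c(x + 1j * h, y + 1j * h).real) / h**2
    return 0.5 * (both - pure_xx(x, y, h) - pure_yy(x, y, h))

x, y, h = 0.3, 1.2, 1e-5
# Exact mixed partial of sin(x) * y**2 is 2 y cos(x).
print(mixed_xy(x, y, h), 2 * y * math.cos(x))
```

Any error in `pure_xx` or `pure_yy` propagates directly into `mixed_xy`, which is precisely the error-accumulation issue the text raises; perturbing each variable with a distinct non-real unit avoids this coupling.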
The adjoint method is commonly used in optimization and when derivatives of functionals are needed. The adjoint method takes advantage of the structure of specific mathematical problems, especially those in fluid dynamics, structural optimization, and related fields. It involves solving a system of equations backward in time or space, enabling efficient calculation of derivatives with respect to multiple parameters simultaneously. The main benefit of the adjoint method is its efficiency in scenarios with many outputs linked to various inputs, which greatly lowers computational costs. However, it also requires a deeper understanding of the underlying mathematical structures and can be complex to implement for different problem types. Each of these methods balances accuracy, ease of implementation, and computational efficiency, making the choice of method depend on the specific requirements and constraints of the problem. Careful consideration of these factors is essential for obtaining reliable and efficient derivative calculations across different application areas.

3. Second Derivative Approximation in Quaternions

3.1. Second Derivative Approximation of $f : \mathbb{R} \to \mathbb{H}$

A quaternion-valued function $f : \mathbb{R} \to \mathbb{H}$ can be expressed as follows:
$$f(t) = f_0(t) + f_1(t) i + f_2(t) j + f_3(t) k, \qquad f_r : \mathbb{R} \to \mathbb{R} \quad (r = 0, 1, 2, 3).$$
The first derivative with respect to $t$ is defined component-wise as follows:
$$f'(t) = f_0'(t) + f_1'(t) i + f_2'(t) j + f_3'(t) k.$$
Similarly, the second derivative is
$$f''(t) = f_0''(t) + f_1''(t) i + f_2''(t) j + f_3''(t) k.$$
For a smooth quaternion-valued function $f(t)$ expanded around a point $a$, the Taylor series expansion is defined analogously to real-valued functions as follows:
$$f(t) = f(a) + (t - a) f'(a) + \frac{(t-a)^2}{2!} f''(a) + \frac{(t-a)^3}{3!} f'''(a) + \cdots.$$
The operations involve scalar multiplication, and derivatives are taken component-wise. Quaternionic multiplication is associative but not commutative; thus, the order of factors is essential when multiplying quaternionic expressions.
To approximate the second derivative $f''(a)$ in terms of quaternionic values, start from the general Taylor expansion around $a$:
$$f(a+h) = f(a) + h f'(a) + \frac{h^2}{2!} f''(a) + O(h^3)$$
and similarly,
$$f(a-h) = f(a) - h f'(a) + \frac{h^2}{2!} f''(a) + O(h^3).$$
We add these two equations together to eliminate the odd terms (first derivative):
$$f(a+h) + f(a-h) = 2 f(a) + h^2 f''(a) + O(h^4).$$
Rearranging terms, we obtain an approximation formula for the second derivative:
$$f''(a) \approx \frac{f(a+h) + f(a-h) - 2 f(a)}{h^2}.$$
This is the central difference formula for second-order derivatives of quaternion-valued functions. Since quaternion multiplication is noncommutative, special care must be taken when handling quaternionic expressions. Derivatives are defined component-wise: each component is a real-valued function whose differentiation follows the standard rules of real analysis. However, when multiplying by quaternion-valued coefficients, it is important to pay close attention to the order of multiplication:
$$f''(t) = \frac{d^2 f(t)}{dt^2} = \lim_{h \to 0} \frac{f(t+h) + f(t-h) - 2 f(t)}{h^2}.$$
This definition remains valid and consistent because it involves differences and scalar division, both of which are commutative operations with respect to scalar factors. Numerical analysis focuses on the development of methods to approximate solutions for quaternionic differential equations. This encompasses the study of rotation curves and the evaluation of quaternion-valued trajectories, which are crucial in various applications involving three-dimensional rotations. In robotics, precise control over orientation is essential. This involves providing highly accurate second-order approximations for attitude control, ensuring that a robotic system maintains its desired orientation. Techniques like Spherical Linear Interpolation (SLERP) are used to smoothly interpolate between rotations, enabling fluid and natural motion in robotic applications. Physics and mechanics involve the study of phenomena related to rotational dynamics and kinematics. Here, researchers delve into the fundamental principles underlying the motion of objects in rotation, analyzing forces and torques to understand complex systems and predict their behavior under various conditions.
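Because the central difference formula uses only component-wise addition, subtraction, and scalar division, it is straightforward to sketch; representing quaternions as 4-tuples `(scalar, i, j, k)` below is an illustrative convention, not a fixed standard:

```python
# Component-wise central difference for a quaternion-valued curve (sketch).
# Only addition, subtraction, and scalar division are needed, all of
# which act independently on each of the four components.
import math

def q_second_derivative(f, t, h=1e-4):
    fp, fm, f0 = f(t + h), f(t - h), f(t)
    return tuple((p + m - 2 * c) / h**2 for p, m, c in zip(fp, fm, f0))

# Example curve: f(t) = (cos t, sin t, t^2, 0), so f''(t) = (-cos t, -sin t, 2, 0).
f = lambda t: (math.cos(t), math.sin(t), t * t, 0.0)
print(q_second_derivative(f, 0.5))
```

No quaternion products appear in the formula, so the noncommutativity of $\mathbb{H}$ causes no difficulty here; it matters only once quaternion-valued coefficients multiply the result.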

3.2. Subject to Subtractive Cancellation Error of Second Derivative Approximation in Quaternions

Subtractive cancellation is a numerical phenomenon that occurs when two nearly equal quantities are subtracted from one another, resulting in a substantial loss of precision due to rounding errors. This challenge is especially evident in the context of numerical differentiation, where the goal is to approximate derivatives of functions. The issue becomes particularly critical when approximating the second derivative of a function. In the case of quaternion-valued functions f : R H , the process of estimating the second derivative, denoted as
$$f''(a) \approx \frac{f(a+h) + f(a-h) - 2 f(a)}{h^2},$$
relies on the use of a small step size h. As this step size approaches zero, the susceptibility to subtractive cancellation errors increases significantly. This means that the slight differences between consecutive values can lead to large discrepancies in the computed second derivative, thereby compromising the accuracy of the approximation. This sensitivity to such numerical errors underlines the importance of careful consideration when performing calculations in this domain.
When the value of $h$ approaches zero, the differences $f(a+h) - f(a)$ and $f(a) - f(a-h)$ converge closely to one another. This subtle behavior becomes critical when analyzing quaternion components, as each component calculation relies on floating-point operations that inherently have limitations regarding arithmetic precision. When we subtract nearly identical numbers, the effect of rounding errors becomes significantly magnified for each component of the quaternion. This phenomenon, known as subtractive cancellation, introduces complications similar to those encountered in scalar mathematics, now impacting all four components of the quaternion simultaneously. Consequently, when we attempt to compute the quaternionic second derivative using the following formula:
$$f''(a) \approx \frac{\{ f_0(a+h) + f_0(a-h) - 2 f_0(a) \} + \{ f_1(a+h) + f_1(a-h) - 2 f_1(a) \} i + \cdots}{h^2},$$
we find that the accuracy of this approximation is heavily compromised due to these rounding errors, making it challenging to achieve precise results in our calculations.
Upon conducting the subtraction, we notice that the linear terms, represented as $h f'(a)$, cancel each other out completely, as do all odd-order terms in the symmetric sum. This simplification allows us to focus on the next tier of terms, the higher-order contributions $h^4$, $h^6$, and so on. These terms diminish rapidly in size as $h$ becomes smaller, ultimately approaching the limits of machine precision, or machine epsilon, in the context of finite-precision arithmetic. As a result, we aim to evaluate the expression numerically:
$$\frac{f(a+h) + f(a-h) - 2 f(a)}{h^2} \approx f''(a) + R(a).$$
In this equation, $R(a)$ represents the rounding errors that arise during computation. It is important to note that since we are dividing by $h^2$, which is a small quantity, these slight inaccuracies become significantly magnified. This amplification can lead to substantial impacts on the overall numerical accuracy of our results. This highlights the delicate balance that must be maintained when performing numerical computations, especially when dealing with small perturbations.

3.3. Second Derivative Approximation in Quaternions

The second derivative of the function f ( x ) can be expressed using the Taylor series expansion, a powerful mathematical tool for approximating functions. This expansion is based on the principles of the quaternion basis, which involves a set of numbers that extend the real numbers and complex numbers:
f ( x + h 1 i + h 2 j + 0 k ) = f ( x ) + h 1 f ( x ) i + h 2 f ( x ) j h 1 2 + h 2 2 2 f ( x ) + .
In deriving the second derivative, we utilize the complex step derivative method.
f″(x) = 2( f(x) − Re[ f(x + h₁i + h₂j + 0k) ] ) / (h₁² + h₂²) + O(h²).
This method is advantageous because it allows us to compute derivatives with enhanced numerical stability and accuracy by incorporating complex numbers into the differentiation process. By applying these techniques, we can obtain a more detailed and precise representation of the function’s behavior around a specific point. This approach not only simplifies the calculation of the second derivative but also provides deeper insights into the function’s characteristics and curvature.
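The complex special case of this formula (h₂ = 0, using only the basis element i) can be sketched with Python's built-in complex type; the function name `second_derivative_cs` and the default step size are our illustrative choices, not part of the paper:

```python
import cmath
import math

def second_derivative_cs(f, x, h=1e-4):
    # Complex-step style estimate: f''(x) ~ 2 (f(x) - Re f(x + i h)) / h^2.
    return 2.0 * (f(x).real - f(x + 1j * h).real) / (h * h)

approx = second_derivative_cs(cmath.sin, 0.5)
exact = -math.sin(0.5)  # (sin x)'' = -sin x
print(approx, exact)
```

Note that, unlike the first-derivative complex step, this second-derivative formula still subtracts two nearly equal real parts, so it does not fully escape cancellation; this motivates the discussion that follows.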
Second derivative approximations in quaternions can encounter significant numerical issues, particularly subtractive cancellation errors. This challenge is akin to what is observed with real-valued functions, but may be exacerbated in quaternion computations due to the multiple components involved in quaternions. To address these errors, it is essential to implement strategies that enhance the accuracy of the numerical methods used. One effective approach is to select an optimal step size for the differentiation process. The choice of step size has a direct impact on the stability and precision of derivative approximations, underscoring the importance of careful numerical analysis.
Additionally, employing higher-order numerical differentiation methods can provide improved accuracy by taking more information into account when approximating derivatives. These methods can reduce the error associated with finite differences, particularly in cases where lower-order methods exhibit significant loss of precision.
Furthermore, leveraging automatic differentiation techniques can be highly beneficial if the mathematical framework allows for it. Automatic differentiation systematically computes derivatives with respect to input variables, ensuring high precision without the pitfalls of numerical approximation.
By implementing these strategies, one can ensure that quaternionic derivative computations remain accurate and stable, which is crucial for their application in various fields such as robotics, simulation, animation, and other areas where precise mathematical modeling is essential. Careful numerical treatment is imperative for reliable results in these practical applications.

4. Implementation of Hyper-Dual Numbers and Their Calculations

We introduce a new number system by relating quaternions to the properties of the dual-number basis. Consider a hyper-dual number x ∈ H_d, defined by
H_d := { x = x₀ + x₁ε₁ + x₂ε₂ + x₃ε₁ε₂ | xᵣ ∈ ℝ, r = 0, 1, 2, 3 }
satisfying
ε₁² = ε₂² = (ε₁ε₂)² = 0, with ε₁ ≠ 0, ε₂ ≠ 0, ε₁ε₂ ≠ 0.
We define the properties of the basis that constitute the numbers. To eliminate the inconvenience caused by the non-commutability of the multiplication of quaternions, the multiplication of the basis for the new number system is intentionally set to be commutative:
ε₁ε₂ = ε₂ε₁, so that (ε₁ε₂)² = ε₁²ε₂² = 0.
This setting constrains the possible values of ε₁ and ε₂. We now define the arithmetic operations. Consider two hyper-dual numbers:
x = x₀ + x₁ε₁ + x₂ε₂ + x₃ε₁ε₂, y = y₀ + y₁ε₁ + y₂ε₂ + y₃ε₁ε₂.
Here, x₀ is the function value, x₁ and x₂ represent the first derivatives with respect to two independent variables, and x₃ represents the mixed second-order partial derivative.
Addition is defined as
x + y = (x₀ + y₀) + (x₁ + y₁)ε₁ + (x₂ + y₂)ε₂ + (x₃ + y₃)ε₁ε₂.
Multiplication is defined as
x y = x₀y₀ + (x₀y₁ + x₁y₀)ε₁ + (x₀y₂ + x₂y₀)ε₂ + (x₀y₃ + x₁y₂ + x₂y₁ + x₃y₀)ε₁ε₂.
The inverse is defined as
1/x = 1/x₀ − (x₁/x₀²)ε₁ − (x₂/x₀²)ε₂ − ((x₀x₃ − 2x₁x₂)/x₀³)ε₁ε₂,
which exists only for x₀ ≠ 0. This provides a definition for the norm, denoted N(x) = x₀². It indicates that comparisons between values should focus exclusively on the real component: the relationship x > y is equivalent to the comparison of the respective real parts, x₀ > y₀. By adhering to this convention, code that processes hyper-dual numbers follows the same execution path as code that processes real-valued data, maintaining predictable and consistent outcomes.
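The arithmetic above can be sketched in a few lines of Python. This is a minimal illustration of ours, not code from the paper; the class name `HyperDual` and its four fields mirror the components x₀, x₁, x₂, x₃:

```python
from dataclasses import dataclass

@dataclass
class HyperDual:
    # x = x0 + x1*eps1 + x2*eps2 + x3*eps1*eps2, with eps1^2 = eps2^2 = 0.
    x0: float
    x1: float = 0.0
    x2: float = 0.0
    x3: float = 0.0

    def __add__(self, o):
        return HyperDual(self.x0 + o.x0, self.x1 + o.x1,
                         self.x2 + o.x2, self.x3 + o.x3)

    def __mul__(self, o):
        return HyperDual(
            self.x0 * o.x0,
            self.x0 * o.x1 + self.x1 * o.x0,
            self.x0 * o.x2 + self.x2 * o.x0,
            self.x0 * o.x3 + self.x1 * o.x2 + self.x2 * o.x1 + self.x3 * o.x0)

    def inv(self):
        # Valid only when x0 != 0, matching the closed form in the text.
        x0, x1, x2, x3 = self.x0, self.x1, self.x2, self.x3
        return HyperDual(1.0 / x0, -x1 / x0**2, -x2 / x0**2,
                         -(x0 * x3 - 2.0 * x1 * x2) / x0**3)

x = HyperDual(2.0, 1.0, 1.0, 0.0)
print(x * x.inv())  # the multiplicative identity: all dual parts are zero
```

Multiplying x by its inverse recovers 1 exactly (all ε parts zero), confirming the inverse formula termwise.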

Derivative Calculations on Hyper-Dual Numbers

This approach allows for the representation of functions in terms of their derivatives at a specific point, effectively capturing the behavior of the function around that point. Hyper-dual numbers, which extend the concept of dual numbers by introducing two infinitesimal components, enable a comprehensive analysis of higher-order derivatives. As such, the Taylor Series not only facilitates the approximation of differentiable functions but also enhances our understanding of their intricacies by incorporating the properties of hyper-dual numbers into the expansion process.
Theorem 1
(Exactness of First and Mixed Derivatives via Hyper-Dual Numbers). Let f : Ω H d be a function of class C 2 , and let
x = x₀ + x₁ε₁ + x₂ε₂ + x₃ε₁ε₂ ∈ H_d,
where H_d denotes the set of second-order hyper-dual numbers and Ω ⊆ H_d. Then, evaluating f at a generic hyper-dual number x yields the exact characterization of f through the Taylor series expansion:
f(x) = f(x₀) + x₁ f′(x₀) ε₁ + x₂ f′(x₀) ε₂ + ( x₃ f′(x₀) + x₁x₂ f″(x₀) ) ε₁ε₂.
Proof. 
We use the second-order multivariate Taylor expansion:
f(x₁+h₁, x₂+h₂) = f(x₁,x₂) + h₁ ∂f/∂x₁ + h₂ ∂f/∂x₂ + (h₁²/2) ∂²f/∂x₁² + (h₂²/2) ∂²f/∂x₂² + h₁h₂ ∂²f/∂x₁∂x₂ + O(‖(h₁,h₂)‖³).
Let h₁ = ε₁ and h₂ = ε₂. Then ε₁² = ε₂² = 0, so the pure second-order terms and the entire remainder vanish. Substituting into the expansion gives:
f(x) = f(x₁,x₂) + ε₁ ∂f/∂x₁ + ε₂ ∂f/∂x₂ + ε₁ε₂ ∂²f/∂x₁∂x₂.
This completes the proof. □
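As an independent consistency check of the theorem (our addition, with illustrative helper names `hd_exp` and `hd_mul`), the hyper-dual extension of exp, built from the theorem's formula with f′ = f″ = exp, should satisfy the product identity exp(a)·exp(b) = exp(a+b) under the hyper-dual multiplication rule:

```python
import math

def hd_mul(a, b):
    # Hyper-dual product of 4-tuples (x0, x1, x2, x3).
    return (a[0] * b[0],
            a[0] * b[1] + a[1] * b[0],
            a[0] * b[2] + a[2] * b[0],
            a[0] * b[3] + a[1] * b[2] + a[2] * b[1] + a[3] * b[0])

def hd_exp(t):
    # exp extended per the theorem: f' = f'' = exp at the real part.
    t0, t1, t2, t3 = t
    e = math.exp(t0)
    return (e, t1 * e, t2 * e, (t3 + t1 * t2) * e)

a = (0.1, 1.0, 0.0, 0.0)
b = (0.2, 0.0, 1.0, 0.0)
lhs = hd_mul(hd_exp(a), hd_exp(b))
rhs = hd_exp(tuple(p + q for p, q in zip(a, b)))
print(max(abs(p - q) for p, q in zip(lhs, rhs)))  # ~0 up to roundoff
```

The agreement up to floating-point roundoff illustrates the closure and chain-rule consistency formalized next.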
Proposition 1
(Algebraic Closure of Hyper-Dual Differentiation). Let f , g : R n R be C 2 smooth functions. Then the class of hyper-dual extended functions is closed under:
(1) 
Scalar multiplication: a f ( x ) ;
(2) 
Addition: f ( x ) + g ( x ) ;
(3) 
Multiplication: f ( x ) · g ( x ) ;
(4) 
Composition (under certain conditions): f ( g ( x ) ) .
For instance,
Example 1.
Let f ( x ) = x 3 . Then,
x³ = x₀³ + 3x₁x₀² ε₁ + 3x₂x₀² ε₂ + ( 3x₃x₀² + 6x₁x₂x₀ ) ε₁ε₂.
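This expansion can be reproduced numerically by cubing a seeded hyper-dual number with the multiplication rule above; the tuple-based helper `hd_mul` is our illustrative implementation, not code from the paper:

```python
def hd_mul(a, b):
    # Hyper-dual product of 4-tuples (x0, x1, x2, x3).
    return (a[0] * b[0],
            a[0] * b[1] + a[1] * b[0],
            a[0] * b[2] + a[2] * b[0],
            a[0] * b[3] + a[1] * b[2] + a[2] * b[1] + a[3] * b[0])

# Seed x0 = 2 with unit eps1 and eps2 parts (x1 = x2 = 1, x3 = 0).
x = (2.0, 1.0, 1.0, 0.0)
cube = hd_mul(hd_mul(x, x), x)
# The formula predicts (x0^3, 3 x0^2, 3 x0^2, 6 x0) = (8, 12, 12, 12).
print(cube)
```

The four components recover the value 8, both first derivatives 12, and the mixed second derivative 12 of x³ at x₀ = 2, exactly as the formula predicts.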
We will explore the computation of derivatives for a function f ( x ) , where x is an n-dimensional vector in R n , expressed as x = ( x 1 , x 2 , , x n ) T . Our focus will be on calculating the mixed second partial derivative 2 f ( x ) x i x j using hyper-dual numbers, which allows us to obtain derivatives with high precision using a single evaluation of the function. To begin, we define a perturbed vector x i j as follows:
x_ij = x + h₁ε₁ e_i + h₂ε₂ e_j + 0·ε₁ε₂,
where ε₁ and ε₂ are the nilpotent hyper-dual units, h₁ and h₂ are finite perturbations, and e_i and e_j are the standard basis vectors in ℝⁿ. Seeding the ε₁ε₂ component with zero ensures that, after evaluation, the ε₁ε₂ part of the result carries only the mixed second-order derivative.
Utilizing this formulation, we can express the function evaluated at the perturbed vector f ( x i j ) as follows:
f(x_ij) = f(x) + h₁ (∂f(x)/∂x_i) ε₁ + h₂ (∂f(x)/∂x_j) ε₂ + h₁h₂ (∂²f(x)/∂x_i∂x_j) ε₁ε₂.
This equation provides us with a comprehensive means to gather essential derivative information through a single evaluation of the function.
Specifically, from this single evaluation, we can extract the first derivatives:
∂f(x)/∂x_i, ∂f(x)/∂x_j,
and the mixed second partial derivative:
∂²f(x)/∂x_i∂x_j.
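A sketch of this extraction in Python for the concrete choice f(x₁, x₂) = x₁²x₂ (an example of ours, with the illustrative helper `hd_mul` and unit steps h₁ = h₂ = 1):

```python
def hd_mul(a, b):
    # Hyper-dual product of 4-tuples (x0, x1, x2, x3).
    return (a[0] * b[0],
            a[0] * b[1] + a[1] * b[0],
            a[0] * b[2] + a[2] * b[0],
            a[0] * b[3] + a[1] * b[2] + a[2] * b[1] + a[3] * b[0])

def f(x1, x2):
    # f(x1, x2) = x1^2 * x2, evaluated with hyper-dual arithmetic.
    return hd_mul(hd_mul(x1, x1), x2)

# Seed eps1 on x1 and eps2 on x2 (h1 = h2 = 1) at the point (3, 5).
x1 = (3.0, 1.0, 0.0, 0.0)
x2 = (5.0, 0.0, 1.0, 0.0)
val = f(x1, x2)
# Components: (f, df/dx1, df/dx2, d2f/dx1 dx2) = (45, 30, 9, 6).
print(val)
```

One function evaluation yields the value 45, the first derivatives 2x₁x₂ = 30 and x₁² = 9, and the mixed derivative 2x₁ = 6, all exact.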
As an illustrative example, consider the function f ( x ) = sin x . The evaluation of this function at a point can be expressed in terms of its Taylor expansion around a point x 0 . Specifically, we can write the following:
f(x₀ + ε₁ + ε₂) = sin x₀ + cos x₀ ε₁ + cos x₀ ε₂ − sin x₀ ε₁ε₂,
where we take into account the values of the first and second derivatives of sin x evaluated at x₀. The first derivative cos x₀ is associated with each infinitesimal perturbation ε₁ and ε₂, while the second derivative −sin x₀ appears in the second-order term associated with ε₁ε₂.
Through this detailed analysis, we can see how hyper-dual numbers enable efficient calculations of derivatives, offering a powerful tool for both theoretical analysis and practical applications in various fields of mathematics and engineering.
In particular,
Example 2.
Let f ( x ) = sin 3 x . This function can be evaluated as follows:
t 0 = x + h 1 ϵ 1 + h 2 ϵ 2 + 0 ϵ 1 ϵ 2 ,
t₁ = sin t₀ = sin x + h₁ cos x ε₁ + h₂ cos x ε₂ − h₁h₂ sin x ε₁ε₂,
t₂ = t₁³ = sin³x + 3h₁ cos x sin²x ε₁ + 3h₂ cos x sin²x ε₂ − (3/4) h₁h₂ ( sin x − 3 sin 3x ) ε₁ε₂.
Since hyper-dual numbers generalize classical dual numbers by introducing multiple nilpotent elements that encode higher-order derivative information, the same result follows directly from the Taylor series. For d = h₁ε₁ + h₂ε₂ + 0·ε₁ε₂, the Taylor series becomes
f(x + d) = f(x) + h₁ f′(x) ε₁ + h₂ f′(x) ε₂ + h₁h₂ f″(x) ε₁ε₂.
This expression is exact, with no truncation error.
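Example 2 can be checked numerically by composing a hyper-dual sine with cubing; the helpers `hd_sin` and `hd_mul` below are our illustrative implementations of the rules stated above:

```python
import math

def hd_mul(a, b):
    # Hyper-dual product of 4-tuples (x0, x1, x2, x3).
    return (a[0] * b[0],
            a[0] * b[1] + a[1] * b[0],
            a[0] * b[2] + a[2] * b[0],
            a[0] * b[3] + a[1] * b[2] + a[2] * b[1] + a[3] * b[0])

def hd_sin(t):
    # sin extended to hyper-duals via its Taylor rule.
    t0, t1, t2, t3 = t
    s, c = math.sin(t0), math.cos(t0)
    return (s, t1 * c, t2 * c, t3 * c - t1 * t2 * s)

x = 0.7
t1 = hd_sin((x, 1.0, 1.0, 0.0))     # h1 = h2 = 1
t2 = hd_mul(hd_mul(t1, t1), t1)     # sin^3 x with derivative parts

s, c = math.sin(x), math.cos(x)
exact = (s**3, 3*s*s*c, 3*s*s*c, 6*s*c*c - 3*s**3)
print(max(abs(p - q) for p, q in zip(t2, exact)))  # ~0 up to roundoff
```

The ε₁ε₂ component agrees with 6 sin x cos²x − 3 sin³x, which is algebraically identical to the coefficient −(3/4)(sin x − 3 sin 3x) in t₂ above.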

5. Conclusions

In this paper, we have developed a rigorous and unified framework for computing higher-order derivatives using hyper-dual numbers, extending the classical dual number system. By formalizing the algebraic structure of hyper-dual numbers and demonstrating their compatibility with multivariate Taylor expansions, we established a method for exact evaluation of first and mixed second-order derivatives. The main contributions are: the definition and analysis of hyper-dual numbers as algebraic tools for automatic differentiation; exact derivative computation without symbolic manipulation or finite-difference approximations; theoretical validation of correctness through algebraic closure, Taylor expansions, and chain-rule propagation; and application potential in numerical optimization, sensitivity analysis, and machine learning. In contrast to traditional numerical differentiation techniques, the hyper-dual number approach provides machine-accurate derivative information in a stable and efficient manner, making it particularly suitable for algorithmic implementation in modern computational environments.
Building on the results presented here, several directions for further research and development are evident: extending the framework to third- or higher-order partial derivatives by introducing additional nilpotent components and formalizing the corresponding hyper-dual algebra; adapting the hyper-dual methodology to vector- and matrix-valued functions f : ℝⁿ → ℝᵐ, including Jacobian and Hessian tensors; embedding hyper-dual arithmetic into automatic differentiation libraries for broad accessibility and practical deployment in scientific computing software; investigating applications in differential geometry, Lie groups, and continuum mechanics, where higher-order derivatives play a critical role in curvature, stress, and deformation analysis; applying hyper-dual differentiation to discretized partial differential equations to enable derivative-aware solvers in finite element and spectral methods; and exploring deeper algebraic properties such as isomorphism classes, module structures, and potential links to Grassmann or exterior algebras. By continuing to develop and apply hyper-dual number methods, we expect to contribute to a broader class of exact, stable, and efficient derivative computation tools in both pure and applied mathematics.

Funding

This study was supported by Dongguk University Research Fund and the National Research Foundation of Korea (NRF) (2021R1F1A1063356).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflicts of interest.
