Article

Efficient Multiplicative Calculus-Based Iterative Scheme for Nonlinear Engineering Applications

by Mudassir Shams 1,2, Nasreen Kausar 3,* and Ioana Alexandra Șomîtcă 4,*
1
Faculty of Engineering, Free University of Bozen-Bolzano, 39100 Bolzano, Italy
2
Department of Mathematics and Statistics, Riphah International University I-14, Islamabad 44000, Pakistan
3
Department of Mathematics, Faculty of Arts and Science, Yildiz Technical University, Istanbul 34220, Türkiye
4
Department of Mathematics, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, Baritiu Street, no 26–28, 40027 Cluj-Napoca, Romania
*
Authors to whom correspondence should be addressed.
Mathematics 2024, 12(22), 3517; https://doi.org/10.3390/math12223517
Submission received: 2 October 2024 / Revised: 4 November 2024 / Accepted: 6 November 2024 / Published: 11 November 2024
(This article belongs to the Special Issue Mathematical and Numerical Methods in Biology and Engineering)

Abstract:
Solving nonlinear equations is essential in engineering, where accuracy and precision are critical. In this paper, a novel family of iterative methods for finding the simple roots of nonlinear equations, based on multiplicative calculus, is introduced and shown to have a convergence order of seven. The symmetry in the pie graph of the convergence–divergence regions demonstrates that the method is stable and consistent when dealing with nonlinear engineering problems. An extensive examination of the numerical results for the engineering applications is presented in order to assess the effectiveness, stability, and consistency of the newly established method in comparison with current methods. The analysis covers the total number of function and derivative evaluations per iteration, elapsed time, residual errors, local computational order of convergence, and error graphs, all of which demonstrate the better convergence behavior of our method compared with other approaches.

1. Introduction

The application of nonlinear equations in science and engineering dates back to the 18th and 19th centuries, when they were employed to study fluid dynamics and celestial mechanics [1,2,3,4]. Complex output phenomena of nonlinear systems, such as solitons, bifurcations, and chaos, are explained using equations of the form [5]
$$ f(x) = 0. \tag{1} $$
Fundamentally nonlinear differential equations [6,7,8] are a superior representation for many physical processes involving memory effects, anomalous diffusion, or hereditary aspects. When typical integer-order models fail, fractional-order models improve the flexibility and precision of modeling systems. While exact solutions to these equations are computationally enticing due to their precision and lack of approximation, the inherent difficulties of fractional-order systems typically prevent such responses. In these situations, analytical tools like series expansions and integral transformations are critical. These methods produce closed-form or semi-closed-form solutions that not only reveal useful information about system behavior but also serve as reference points for validating and guiding numerical solutions. They do this to ensure that numerical approaches remain accurate and dependable when exact solutions are not possible.
Multiplicative calculus [9], developed in the twentieth century, builds on classical calculus by focusing on growth rates in multiplicative terms rather than differences. It works especially well to explain scaling phenomena [10], geometric progressions [11,12,13], and exponential growth—all of which are less intuitive concepts in traditional calculus. Its main value is that it provides a natural framework for a variety of evolutionary processes, including population increase, financial returns, and fractal structures. Multiplicative derivatives and integrals are used in this calculus to more correctly and succinctly explain some real-world occurrences, where ratios and proportionality are more important than differences.
Definition 1.
A function $\varsigma : \varpi \subseteq \mathbb{R} \to \mathbb{R}$ is said to be multiplicative differentiable at $x$ if the limit
$$ \varsigma^{[*]}(x) = \frac{d^{[*]}\varsigma}{dx} = \lim_{h \to 0} \left( \frac{\varsigma(x+h)}{\varsigma(x)} \right)^{1/h} \tag{2} $$
exists. If $\varsigma > 0$ and the ordinary derivative of $\varsigma$ at $x$ exists, then the multiplicative derivative exists and is given by [14]
$$ \varsigma^{[*]}(x) = e^{(\ln \varsigma(x))'}, \tag{3} $$
where $(\ln \varsigma)(x) = \ln \varsigma(x)$. The higher-order derivative is defined similarly as
$$ \varsigma^{[**]}(x) = e^{(\ln \varsigma(x))''}, \tag{4} $$
and, in more general form,
$$ \varsigma^{[*](n)}(x) = e^{(\ln \varsigma(x))^{(n)}}, \quad n = 0, 1, \ldots, \tag{5} $$
where for $n = 0$ no multiplicative differentiation is applied and the expression represents the original function $\varsigma(x)$.
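As a quick numerical illustration of Definition 1 (our own sketch, not from the paper; the test function $\varsigma(x) = e^{x^2}$ and the step size are arbitrary choices), the limit definition and the formula $\varsigma^{[*]}(x) = e^{(\ln \varsigma(x))'}$ agree:

```python
import math

def mult_derivative_limit(f, x, h=1e-6):
    # (f(x+h)/f(x))**(1/h): the limit definition of the multiplicative derivative
    return (f(x + h) / f(x)) ** (1.0 / h)

# Example: f(x) = exp(x^2), so ln f = x^2 and (ln f)' = 2x,
# hence f^[*](x) = exp(2x) exactly.
f = lambda x: math.exp(x * x)
x = 1.3
numeric = mult_derivative_limit(f, x)
exact = math.exp(2 * x)
print(numeric, exact)  # both close to e^2.6 ≈ 13.46
```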
Definition 2.
Let $\varsigma : \varpi \subseteq \mathbb{R} \to \mathbb{R}^{+}$ be a positive nonlinear function; then, the multiplicative nonlinear equation [15] is defined as
$$ \varsigma(x) = 1. \tag{6} $$

Some Multiplicative Differentiation Results

Let $t$ and $\varsigma$ be multiplicative differentiable functions, let $\psi$ be an ordinary differentiable function, and let $c$ be a positive constant. Then
$$ (c)^{[*]} = 1, $$
$$ (c\,t)^{[*]}(x) = t^{[*]}(x), $$
$$ (t\,\varsigma)^{[*]}(x) = t^{[*]}(x)\,\varsigma^{[*]}(x), $$
$$ \left( \frac{t}{\varsigma} \right)^{[*]}(x) = \frac{t^{[*]}(x)}{\varsigma^{[*]}(x)}, $$
$$ \left( t^{\psi} \right)^{[*]}(x) = t^{[*]}(x)^{\psi(x)}\, t(x)^{\psi'(x)}, $$
$$ (t \circ \psi)^{[*]}(x) = t^{[*]}(\psi(x))^{\psi'(x)}. $$
The multiplicative Taylor theorem [16] is defined in the following theorems, which are used in the construction of the new numerical scheme for solving nonlinear problem (1).
Theorem 1.
Let $\varsigma : \varpi \subseteq \mathbb{R} \to \mathbb{R}^{+}$ be $(n+1)$-times multiplicative differentiable on an open interval $\varpi$; then, for any $x, x+a \in \varpi$, there exists a number $\eta \in (0,1)$ such that
$$ \varsigma(x+a) = \prod_{l=0}^{n} \left( \varsigma^{[*](l)}(x) \right)^{\frac{a^{l}}{l!}} \cdot \left( \varsigma^{[*](n+1)}(x+\eta a) \right)^{\frac{a^{n+1}}{(n+1)!}}. \tag{7} $$
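Because $\varsigma^{[*](l)}(x) = e^{(\ln \varsigma(x))^{(l)}}$, Theorem 1 amounts to an ordinary Taylor expansion of $\ln \varsigma$ carried in the exponent. A small numerical check (our own illustration; the choice $\varsigma(x) = e^{\sin x}$ is not from the paper):

```python
import math

# For f(x) = exp(sin x): ln f = sin x, and the l-th derivative of sin
# cycles through sin, cos, -sin, -cos.
def dsin(l, x):
    return [math.sin, math.cos,
            lambda t: -math.sin(t), lambda t: -math.cos(t)][l % 4](x)

def mult_taylor(x, a, n):
    # product over l of ( f^[*](l)(x) )^(a^l / l!), with f^[*](l)(x) = exp(dsin(l, x))
    prod = 1.0
    for l in range(n + 1):
        prod *= math.exp(dsin(l, x)) ** (a ** l / math.factorial(l))
    return prod

x, a = 0.5, 0.2
approx = mult_taylor(x, a, 5)
exact = math.exp(math.sin(x + a))
print(approx, exact)
```

The remainder factor of Theorem 1 bounds the gap between the two printed values.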
The rest of the paper is structured as follows: Section 2 discusses the development and convergence analysis of a multiplicative simple-root iterative method for solving Equation (1). Section 3 evaluates the effectiveness and stability of the proposed method through numerical examples and comparisons with other techniques. Section 4 summarizes our numerical findings, emphasizing the novelty and significance of the study in science and engineering. Finally, the conclusions are presented in Section 5.

2. Construction and Convergence Analysis of the Multiplicative Iterative Scheme

The Newton method [17,18,19,20] refines an initial guess for solving a nonlinear problem by using a sequence of linear approximations. This technique, known for its rapid, often quadratic, convergence, has been widely used since its development in the late 17th century to solve nonlinear equations. Using multiplicative analysis, the multiplicative Newton theorem [21] is written as
$$ \varsigma(x) = \varsigma(x^{[h]}) \int_{x^{[h]}}^{x} \varsigma^{[*]}(s)^{\,ds} = \varsigma(x^{[h]})\, e^{\int_{x^{[h]}}^{x} (\ln \varsigma(s))'\, ds}. \tag{8} $$
Applying the Newton–Cotes quadrature [22] of degree 0 to (8) gives
$$ \int_{x^{[h]}}^{x} \varsigma^{[*]}(s)^{\,ds} = e^{\int_{x^{[h]}}^{x} (\ln \varsigma(s))'\, ds} \approx e^{(x - x^{[h]}) (\ln \varsigma(x^{[h]}))'} = \varsigma^{[*]}(x^{[h]})^{\,x - x^{[h]}}. \tag{9} $$
The multiplicative Newton method, the extension of the classical Newton method to $\varsigma(x) = 1$, is then obtained as
$$ v^{[h]} = x^{[h]} - \frac{\ln \varsigma(x^{[h]})}{\ln \varsigma^{[*]}(x^{[h]})}. \tag{10} $$
The multiplicative Newton technique retains the same convergence order as the classical method.
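The iteration (10) can be sketched in a few lines (our own illustration; the test equation $\varsigma(x) = e^{x^2 - 2} = 1$, with root $\sqrt{2}$, is not from the paper). Since $\ln \varsigma^{[*]} = (\ln \varsigma)'$, each step only needs $\ln \varsigma$ and its ordinary derivative:

```python
import math

def mult_newton(ln_f, dln_f, x0, tol=1e-14, max_iter=50):
    """Multiplicative Newton: x_{h+1} = x_h - ln f(x_h) / (ln f)'(x_h)."""
    x = x0
    for _ in range(max_iter):
        step = ln_f(x) / dln_f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Solve f(x) = exp(x**2 - 2) = 1, i.e. ln f(x) = x**2 - 2 = 0.
root = mult_newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.5)
print(root)  # ≈ 1.4142135623730951
```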
In multiplicative calculus, numerical schemes make it easier to treat scale-invariant processes, whereas classical calculus sometimes struggles with complicated transformations. Further, multiplicative techniques avoid concerns such as sum divergence, which improves performance in circumstances with large-scale variations. They also improve the precision of error estimation in systems based on ratios rather than absolute values [23]. Multiplicative relationships are especially important in fields such as biology, economics, fractal analysis, and differential equations. Further, Singh et al. [24] proposed the multiplicative version of the Schröder method as
$$ v^{[h]} = x^{[h]} - \frac{\ln \varsigma(x^{[h]})\, \ln \varsigma^{[*]}(x^{[h]})}{\left( \ln \varsigma^{[*]}(x^{[h]}) \right)^{2} - \ln \varsigma^{[**]}(x^{[h]})\, \ln \varsigma(x^{[h]})}. \tag{11} $$
This method has a convergence order of 2. Similarly, Waseem et al. [25] presented the following iterative approach with quadratic convergence using multiplicative calculus:
$$ v^{[h]} = x^{[h]} - \frac{\ln \varsigma(x^{[h]})}{\ln \varsigma^{[*]}(x^{[h]}) - \alpha \ln \varsigma(x^{[h]})}. \tag{12} $$
Consider the following simple root-finding method ($SR_1$) [26] for solving (1):
$$ v^{[h]} = z^{[h]} - \frac{\varsigma(z^{[h]})}{\varsigma'(x^{[h]})} \left[ \left( \frac{\varsigma(x^{[h]}) - \varsigma(y^{[h]})}{\varsigma(x^{[h]}) - 2\varsigma(y^{[h]})} \right)^{2} + \frac{\varsigma(z^{[h]})}{\varsigma(y^{[h]}) - \alpha\, \varsigma(z^{[h]})} \right], \tag{13} $$
where $y^{[h]} = x^{[h]} - \frac{\varsigma(x^{[h]})}{\varsigma'(x^{[h]})}$ and $z^{[h]} = y^{[h]} - \frac{\varsigma(y^{[h]})}{\varsigma'(x^{[h]})} \cdot \frac{\varsigma(x^{[h]})}{\varsigma(x^{[h]}) - 2\varsigma(y^{[h]})}$. We propose the following multiplicative version of (13) for obtaining the simple roots of (6):
$$ v^{[h]} = z^{[h]} - \frac{\ln \varsigma(z^{[h]})}{\ln \varsigma^{[*]}(x^{[h]})} \left[ \left( \frac{\ln \varsigma(x^{[h]}) - \ln \varsigma(y^{[h]})}{\ln \varsigma(x^{[h]}) - 2\ln \varsigma(y^{[h]})} \right)^{2} + \frac{\ln \varsigma(z^{[h]})}{\ln \varsigma(y^{[h]}) - \alpha \ln \varsigma(z^{[h]})} \right], \tag{14} $$
where $y^{[h]} = x^{[h]} - \frac{\ln \varsigma(x^{[h]})}{\ln \varsigma^{[*]}(x^{[h]})}$ and $z^{[h]} = y^{[h]} - \frac{\ln \varsigma(y^{[h]})}{\ln \varsigma^{[*]}(x^{[h]})} \cdot \frac{\ln \varsigma(x^{[h]})}{\ln \varsigma(x^{[h]}) - 2\ln \varsigma(y^{[h]})}$. We abbreviate this method as $SR_1^{[*]}$.
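A minimal sketch of the three-step scheme $SR_1^{[*]}$ as reconstructed above (the sub-step grouping is our reading of Equation (14), and the test function, the value of $\alpha$, and the starting point are our own choices, not from the paper):

```python
import math

def sr1_star(ln_f, dln_f, x0, alpha=0.5, tol=1e-12, max_iter=10):
    """Three-step multiplicative scheme built from ln f and (ln f)'."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = ln_f(x), dln_f(x)
        if abs(fx) < tol:
            break
        y = x - fx / dfx                          # multiplicative Newton step
        fy = ln_f(y)
        z = y - (fy / dfx) * fx / (fx - 2 * fy)   # second sub-step
        fz = ln_f(z)
        w = ((fx - fy) / (fx - 2 * fy)) ** 2 + fz / (fy - alpha * fz)
        x = z - (fz / dfx) * w                    # corrector step
    return x

# f(x) = exp(cos x - x) = 1  <=>  cos x - x = 0 (root ≈ 0.7390851332)
root = sr1_star(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1, x0=1.0)
print(root)
```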

Convergence Analysis

For the iterative scheme given by (14), we establish the following theorem to determine its order of convergence.
Theorem 2.
Let $\varsigma : \mathbb{R} \to \mathbb{R}^{+}$ be a sufficiently smooth function whose multiplicative derivatives $\ln \varsigma^{[*](m)}(x^{[h]})$ exist up to order $m$ on an interval containing the exact root $\xi$ of $\varsigma(x^{[h]}) = 1$. Then, for a starting value $x^{[0]}$ sufficiently close to $\xi$, the convergence order of the scheme
$$ v^{[h]} = z^{[h]} - \frac{\ln \varsigma(z^{[h]})}{\ln \varsigma^{[*]}(x^{[h]})} \left[ \left( \frac{\ln \varsigma(x^{[h]}) - \ln \varsigma(y^{[h]})}{\ln \varsigma(x^{[h]}) - 2\ln \varsigma(y^{[h]})} \right)^{2} + \frac{\ln \varsigma(z^{[h]})}{\ln \varsigma(y^{[h]}) - \alpha \ln \varsigma(z^{[h]})} \right] $$
is at least seven, with the error equation
$$ \epsilon^{[h+1]} = \left( 4 A_2^{6} - 8 A_2^{4} A_3 + 4 A_2^{2} A_3^{2} \right) \left( \epsilon^{[h]} \right)^{7} + ¢_{57}^{[*]} \left( \epsilon^{[h]} \right)^{8}, \tag{17} $$
in which $A_j = \frac{1}{j!} \frac{(\ln \varsigma(\xi))^{(j)}}{(\ln \varsigma(\xi))'}$,
where ¢ 57 [ ] = α 2 7 + 3 α 2 5 3 + 70 2 7 3 2 3 α 3 2 299 2 5 3 + 2 α 3 3 + 79 2 4 4 + 401 2 3 3 2 4 2 3 5 196 2 2 3 4 152 2 3 3 13 2 2 6 + 44 2 3 5 + 12 2 4 2 + 63 3 2 4 10 3 6 .
Proof. 
Let $\xi$ be a root of $\varsigma(x) = 1$ and let $x^{[h]} = \xi + \epsilon^{[h]}$. Using the multiplicative Taylor series expansion of $\varsigma(x)$ around $x = \xi$, we obtain
$$ \varsigma(x^{[h]}) = \varsigma(\xi + \epsilon^{[h]}) = \varsigma(\xi)\, \varsigma^{[*]}(\xi)^{\epsilon^{[h]}}\, \varsigma^{[**]}(\xi)^{\frac{(\epsilon^{[h]})^{2}}{2!}}\, \varsigma^{[***]}(\xi)^{\frac{(\epsilon^{[h]})^{3}}{3!}} \cdots O\big( (\epsilon^{[h]})^{4} \big). \tag{18} $$
Taking natural logarithms on both sides of (18) and using $\ln \varsigma(\xi) = 0$, we obtain
$$ \ln \varsigma(x^{[h]}) = \ln \varsigma(\xi) + (\ln \varsigma(\xi))'\, \epsilon^{[h]} + (\ln \varsigma(\xi))''\, \frac{(\epsilon^{[h]})^{2}}{2!} + (\ln \varsigma(\xi))'''\, \frac{(\epsilon^{[h]})^{3}}{3!} + O\big( (\epsilon^{[h]})^{4} \big), \tag{19} $$
$$ \ln \varsigma(x^{[h]}) = (\ln \varsigma(\xi))' \left[ \epsilon^{[h]} + \frac{1}{2!} \frac{(\ln \varsigma(\xi))''}{(\ln \varsigma(\xi))'} (\epsilon^{[h]})^{2} + \frac{1}{3!} \frac{(\ln \varsigma(\xi))'''}{(\ln \varsigma(\xi))'} (\epsilon^{[h]})^{3} + O\big( (\epsilon^{[h]})^{4} \big) \right], \tag{20} $$
$$ \ln \varsigma(x^{[h]}) = (\ln \varsigma(\xi))' \left[ \epsilon^{[h]} + A_2 (\epsilon^{[h]})^{2} + A_3 (\epsilon^{[h]})^{3} + O\big( (\epsilon^{[h]})^{4} \big) \right], \tag{21} $$
where $A_j = \frac{1}{j!} \frac{(\ln \varsigma(\xi))^{(j)}}{(\ln \varsigma(\xi))'}$, $j \geq 2$. Differentiating (21), we have
$$ \ln \varsigma^{[*]}(x^{[h]}) = (\ln \varsigma(\xi))' + (\ln \varsigma(\xi))''\, \epsilon^{[h]} + \frac{(\ln \varsigma(\xi))'''}{2} (\epsilon^{[h]})^{2} + O\big( (\epsilon^{[h]})^{3} \big) = (\ln \varsigma(\xi))' \left[ 1 + 2 A_2 \epsilon^{[h]} + 3 A_3 (\epsilon^{[h]})^{2} + O\big( (\epsilon^{[h]})^{3} \big) \right]. \tag{22–24} $$
Extending the expansions to higher order,
$$ \ln \varsigma(x^{[h]}) = (\ln \varsigma(\xi))' \left[ \epsilon^{[h]} + A_2 (\epsilon^{[h]})^{2} + A_3 (\epsilon^{[h]})^{3} + A_4 (\epsilon^{[h]})^{4} + A_5 (\epsilon^{[h]})^{5} + A_6 (\epsilon^{[h]})^{6} + \cdots \right], \tag{25} $$
$$ \ln \varsigma^{[*]}(x^{[h]}) = (\ln \varsigma(\xi))' \left[ 1 + 2 A_2 \epsilon^{[h]} + 3 A_3 (\epsilon^{[h]})^{2} + 4 A_4 (\epsilon^{[h]})^{3} + 5 A_5 (\epsilon^{[h]})^{4} + 6 A_6 (\epsilon^{[h]})^{5} + \cdots \right]. \tag{26} $$
Taking the reciprocal of (26), we obtain
$$ \frac{1}{\ln \varsigma^{[*]}(x^{[h]})} = \frac{1}{(\ln \varsigma(\xi))'} \left[ 1 - 2 A_2 \epsilon^{[h]} + \left( 4 A_2^{2} - 3 A_3 \right) (\epsilon^{[h]})^{2} + \left( 6 A_2 A_3 - 4 A_4 + 2 A_2 \left( -4 A_2^{2} + 3 A_3 \right) \right) (\epsilon^{[h]})^{3} - ¢_{01}^{[*]} (\epsilon^{[h]})^{4} + ¢_{02}^{[*]} (\epsilon^{[h]})^{5} + \cdots \right]. \tag{27} $$
Multiplying (25) by (27), we have
$$ \frac{\ln \varsigma(x^{[h]})}{\ln \varsigma^{[*]}(x^{[h]})} = \epsilon^{[h]} - A_2 (\epsilon^{[h]})^{2} + \left( 2 A_2^{2} - 2 A_3 \right) (\epsilon^{[h]})^{3} + ¢_{03}^{[*]} (\epsilon^{[h]})^{4} + ¢_{04}^{[*]} (\epsilon^{[h]})^{5} + ¢_{05}^{[*]} (\epsilon^{[h]})^{6} + \cdots, \tag{28} $$
so that
$$ y^{[h]} - \xi = A_2 (\epsilon^{[h]})^{2} + \left( -2 A_2^{2} + 2 A_3 \right) (\epsilon^{[h]})^{3} + \left( 4 A_2^{3} - 7 A_2 A_3 + 3 A_4 \right) (\epsilon^{[h]})^{4} + ¢_{06}^{[*]} (\epsilon^{[h]})^{5} + ¢_{07}^{[*]} (\epsilon^{[h]})^{6} + ¢_{08}^{[*]} (\epsilon^{[h]})^{7} + \cdots, \tag{29} $$
where ¢ 01 [ ] = 8 2 4 5 5 + 3 4 2 2 + 3 3 3 + 2 8 2 3 12 2 3 + 4 4 2 ,
¢ 02 [ ] = 10 2 5 6 6 + 4 4 2 2 + 3 3 4 + 3 8 2 3 12 2 3 + 4 4 3 + 2 16 2 4 + 36 2 2 3 16 2 4 9 3 2 + 5 5 2 ,
¢ 03 [ ] = 4 2 3 + 7 2 3 3 4 ,
¢ 04 [ ] = 8 2 4 20 2 2 3 + 10 2 4 + 6 3 2 4 5 ,
¢ 05 [ ] = 16 2 5 + 52 2 3 3 28 2 2 4 33 2 3 2 + 13 2 5 + 17 3 4 5 6 ,
¢ 06 [ ] = 8 2 4 + 20 2 2 3 10 2 4 6 3 2 + 4 5 ,
¢ 07 [ ] = 16 2 5 52 2 3 3 + 28 2 2 4 + 33 2 3 2 13 2 5 17 3 4 + 5 6 ,
¢ 08 [ ] = 32 2 6 112 2 4 3 + 56 2 3 4 + 90 2 2 3 2 52 2 3 4 9 3 3 + 8 2 6 + 8 3 5 + 4 4 2 ,
ln ς y [ h ] = + 2 ϵ [ h ] 2 + ¢ 09 [ ] ϵ [ h ] 3 + ¢ 10 [ ] ϵ [ h ] 4 + ¢ 11 [ ] ϵ [ h ] 5 + . . .
ln ς x [ h ] 2 ς ln y [ h ] = ϵ [ h ] 2 ϵ [ h ] 2 + 4 2 2 3 3 ϵ [ h ] 3 + ¢ 12 [ ] ϵ [ h ] 4 + ¢ 13 [ ] ϵ [ h ] 5 + 6 ϵ [ h ] 6 + . . .
1 ln ς x [ h ] 2 ln ς y [ h ] = ϵ [ h ] 2 ϵ [ h ] 2 + 4 2 2 3 3 ϵ [ h ] 3 + ¢ 14 [ ] ϵ [ h ] 4 + ¢ 15 [ ] ϵ [ h ] 5 + 6 ϵ [ h ] 6 + . . .
ln ς x n ln ς y n = ϵ [ h ] + 2 2 2 3 ϵ [ h ] 3 + ¢ 16 [ ] ϵ [ h ] 4 + ¢ 17 [ ] ϵ [ h ] 5 + 6 ϵ [ h ] 6 ,
ln ς x [ h ] ln ς y [ h ] ln ς x [ h ] 2 ln ς y [ h ] : = 1 + 2 ϵ [ h ] + 2 2 + 2 3 ϵ [ h ] 2 + ¢ 18 [ ] ϵ [ h ] 4
+ ¢ 19 [ ] ϵ [ h ] 5 ¢ 20 [ ] ϵ [ h ] 6 + ¢ 21 [ ] ϵ [ h ] 7 + . . .
where ¢ 09 [ ] = 2 2 2 + 2 3 ,
¢ 10 [ ] = 5 2 3 7 2 3 + 3 4 ,
¢ 11 [ ] = 12 2 4 + 24 2 2 3 10 2 4 6 3 2 + 4 5 ,
¢ 12 [ ] = 10 2 3 + 14 2 3 5 4 ,
¢ 13 [ ] = 24 2 4 48 2 2 3 + 20 2 4 + 12 3 2 7 5 ,
¢ 14 [ ] = 10 2 3 + 14 2 3 5 4 ,
¢ 15 [ ] = 24 2 4 48 2 2 3 + 20 2 4 + 12 3 2 7 5 ,
¢ 16 [ ] = 5 2 3 + 7 2 3 2 4 ,
¢ 17 [ ] = 12 2 4 24 2 2 3 + 10 2 4 + 6 3 2 3 5 ,
¢ 18 [ ] = 2 2 4 3 2 2 3 2 2 4 + 4 5 ,
¢ 19 [ ] = 32 2 5 + 87 2 3 3 39 2 2 4 46 2 3 2 + 11 2 5 + 19 3 4 ,
¢ 20 [ ] = 16 2 6 11 2 4 3 + 19 2 3 4 + 81 2 2 3 2 12 2 2 5 79 2 3 4 24 3 3 2 6 + 26 3 5 + 15 4 2 ,
¢ 21 [ ] = 132 2 7 465 2 5 3 + 93 2 4 4 + 472 2 3 3 2 16 2 3 5 75 2 2 3 4 138 2 3 3 11 2 3 5 55 2 4 2 + 21 3 2 4 2 3 6 + 41 4 5 ,
ln ς x [ h ] ln ς x [ h ] 2 ln ς y [ h ] = 1 + 2 2 ϵ [ h ] + 2 2 2 + 4 3 ϵ [ h ] 2 + + ¢ 22 [ ] ϵ [ h ] 3 + ¢ 23 [ ] ϵ [ h ] 4 + ¢ 24 [ ] ϵ [ h ] 5 + ¢ 24 [ ] ϵ [ h ] 6 + ¢ 25 [ ] ϵ [ h ] 7 + . . .
¢ 22 [ ] = 4 2 3 + 6 4 ,
¢ 23 [ ] = 4 2 4 6 2 2 3 4 2 4 + 8 5 ,
¢ 24 [ ] = 64 2 5 + 174 2 3 3 78 2 2 4 92 2 3 2 + 22 2 5 + 38 3 4 ,
¢ 24 [ ] = 32 2 6 22 2 4 3 + 38 2 3 4 + 162 2 2 3 2 24 2 2 5 158 2 3 4 48 3 3 2 2 6 + 52 3 5 + 30 4 2
¢ 25 [ ] = 264 2 7 930 2 5 3 + 186 2 4 4 + 944 2 3 3 2 32 2 3 5 150 2 2 3 4 276 2 3 3 22 2 3 5 110 2 4 2 + 42 3 2 4 4 3 6 + 82 4 5 ,
¢ 26 [ ] = 13 2 3 14 2 3 + 3 4 ,
¢ 27 [ ] = 38 2 4 + 64 2 2 3 20 2 4 12 3 2 + 4 5 ,
¢ 28 [ ] = 76 2 5 167 2 3 3 + 56 2 2 4 + 66 2 3 2 13 2 5 17 3 4 ,
¢ 29 [ ] = 152 2 6 + 448 2 4 3 164 2 3 4 324 2 2 3 2 + 46 2 2 5 + 150 2 3 4 + 36 3 3 6 2 6 22 3 5 12 4 2 . Thus,
ln ς y [ h ] ln ς [ ] x [ h ] = 2 ϵ [ h ] 2 + 4 2 2 + 2 3 ϵ [ h ] 3 + + ¢ 26 [ ] ϵ [ h ] 4 + ¢ 27 [ ] ϵ [ h ] 5 + ¢ 28 [ ] ϵ [ h ] 6 + ¢ 29 [ ] ϵ [ h ] 7 + . . .
ln ς y [ h ] ln ς [ ] x [ h ] ln ς x [ h ] ln ς x [ h ] 2 ln ς y [ h ] = 2 ϵ [ h ] 2 + 2 2 2 + 2 3 ϵ [ h ] 3
+ ¢ 30 [ ] ϵ [ h ] 4 + ¢ 31 [ ] ϵ [ h ] 5 + ¢ 32 [ ] ϵ [ h ] 6 + ¢ 33 [ ] ϵ [ h ] 7 + . . .
z [ h ] = 2 3 2 3 ϵ [ h ] 4 + ¢ 34 [ ] ϵ [ h ] 5 + ¢ 35 [ ] ϵ [ h ] 6 + ¢ 36 [ ] ϵ [ h ] 7 + . . .
where ¢ 30 [ ] = 3 2 3 6 2 3 + 3 4 ,
¢ 31 [ ] = 4 2 4 + 12 2 2 3 8 2 4 4 3 2 + 4 5 ,
¢ 32 [ ] = 22 2 5 + 51 2 3 3 18 2 2 4 22 2 3 2 + 3 2 5 + 7 3 4 ,
¢ 33 [ ] = 4 2 6 12 2 4 3 + 4 2 3 4 + 40 2 2 3 2 + 2 2 2 5 30 2 3 4 12 3 3 6 2 6 + 10 3 5 + 6 4 2 ,
¢ 34 [ ] = 4 2 4 + 8 2 2 3 2 2 4 2 3 2 ,
¢ 35 [ ] = 38 2 5 103 2 3 3 + 46 2 2 4 + 55 2 3 2 16 2 5 24 3 4 + 5 6 ,
¢ 36 [ ] = 36 2 6 100 2 4 3 + 52 2 3 4 + 50 2 2 3 2 26 2 2 5 22 2 3 4 + 3 3 3 + 14 2 6 2 3 5 2 4 2 ,
ln ς z [ h ] = + 2 2 2 3 ϵ [ h ] 4 + ¢ 37 [ ] ϵ [ h ] 5 + ¢ 38 [ ] ϵ [ h ] 6 + ¢ 39 [ ] ϵ [ h ] 7 ,
ln ς y [ h ] α ln ς z [ h ] = 2 ϵ [ h ] 2 + 2 2 2 + 2 3 ϵ [ h ] 3 + ¢ 40 [ ] ϵ [ h ] 4 + ¢ 41 [ ] ϵ [ h ] 5 + ¢ 42 [ ] ϵ [ h ] 6 + ¢ 43 [ ] ϵ [ h ] 7 + . . .
1 ln ς y [ h ] α ln ς z [ h ] = 1 2 ϵ [ h ] 2 2 2 2 + 2 3 2 2 ϵ [ h ] + ¢ 44 [ ] 2 + ϵ [ h ] ¢ 45 [ ] 2 + ϵ [ h ] 2 ¢ 46 [ ] 2 + . . .
where ¢ 37 [ ] = 4 2 4 + 8 2 2 3 2 2 4 2 3 2 ,
¢ 38 [ ] = 38 2 5 103 2 3 3 + 46 2 2 4 + 55 2 3 2 16 2 5 24 3 4 + 5 6 ,
¢ 39 [ ] = 36 2 6 100 2 4 3 + 52 2 3 4 + 50 2 2 3 2 26 2 2 5 22 2 3 4 + 3 3 3 + 14 2 6 2 3 5 2 4 2 ,
¢ 40 [ ] = α 2 3 + α 2 3 + 5 2 3 7 2 3 + 3 4 ,
¢ 41 [ ] = 4 α 2 4 8 α 2 2 3 12 2 4 + 2 α 2 4 + 2 α 3 2 + 24 2 2 3 10 2 4 6 3 2 + 4 5 ,
¢ 42 [ ] = 38 α 2 5 + 103 α 2 3 3 46 α 2 2 4 55 α 2 3 2 + 16 α 2 5 + 24 α 3 4 5 α 6 ,
¢ 43 [ ] = 36 α 2 6 + 100 α 2 4 3 52 α 2 3 4 50 α 2 2 3 2 + 26 α 2 2 5 + 22 α 2 3 4 3 α 3 3 14 α 2 6 + 2 α 3 5 + 2 α 4 2 ,
¢ 44 [ ] = α 2 3 + α 2 3 + 5 2 3 7 2 3 + 3 4 2 2 2 2 3 2 2 2 + 2 3 2 2 ,
¢ 45 [ ] = 4 α 2 4 8 α 2 2 3 12 2 4 + 2 α 2 4 + 2 α 3 2 + 24 2 2 3 10 2 4 6 3 2 + 4 5 2 2 2 2 3 α 2 3 + α 2 3 + 5 2 3 7 2 3 + 3 4 2 2 α 2 4 α 2 2 3 2 4 2 2 3 3 2 4 + 4 3 2 2 2 2 + 2 3 2 3 ,
¢ 46 [ ] = 38 α 2 5 + 103 α 2 3 3 46 α 2 2 4 55 α 2 3 2 + 16 α 2 5 + 24 α 3 4 5 α 6 2 2 2 2 3 4 α 2 4 8 α 2 2 3 12 2 4 + 2 α 2 4 + 2 α 3 2 + 24 2 2 3 10 2 4 6 3 2 + 4 5 2 2 α 2 4 α 2 2 3 2 4 2 2 3 3 2 4 + 4 3 2 α 2 3 + α 2 3 + 5 2 3 7 2 3 + 3 4 2 3 + 2 α 2 3 4 α 2 2 3 2 + 2 3 4 2 2 3 2 + 2 2 2 5 6 2 3 4 + 4 3 3 2 2 2 + 2 3 2 4 ,
ln ς z [ h ] ln ς y [ h ] α ln ς z [ h ] = ¢ 47 [ ] ϵ [ h ] 3 + 2 2 3 ϵ [ h ] 2
¢ 48 [ ] ϵ [ h ] 4 + ¢ 49 [ ] ϵ [ h ] 5 + ¢ 50 [ ] ϵ [ h ] 6 + . . .
ln ς x [ h ] ln ς y [ h ] ln ς x [ h ] 2 ln ς y [ h ] 2 + ln ς z [ h ] ln ς y [ h ] α ln ς z [ h ] =
1 + 2 2 ϵ [ h ] + 3 3 ϵ [ h ] 2 + 4 2 3 + 4 2 3 + 4 4 ϵ [ h ] 3 + ¢ 51 [ ] ϵ [ h ] 4 + ¢ 52 [ ] ϵ [ h ] 5 + ¢ 53 [ ] ϵ [ h ] 6 + . . .
ln ς z [ h ] ln ς [ ] x [ h ] = 2 3 2 3 ϵ [ h ] 4 + ¢ 54 [ ] ϵ [ h ] 5 + ¢ 55 [ ] ϵ [ h ] 6 + ¢ 56 [ ] ϵ [ h ] 7 + . . .
where ¢ 47 [ ] = 2 2 3 + 4 2 3 2 4 ,
¢ 48 [ ] = 2 α 2 2 3 + α 3 2 17 3 4 2 + α 2 4 79 2 2 3 + 39 2 4 + 40 3 2 16 5 + 5 6 2 + 29 2 4 ,
¢ 49 [ ] = 12 α 2 3 3 4 α 2 2 4 8 α 2 3 2 170 3 4 + 34 3 5 2 + 34 3 2 4 2 2 10 3 6 2 2 + 4 α 3 4 4 α 2 5 386 2 3 3 + 156 2 2 4 + 346 2 3 2 62 2 5 83 3 3 2 + 24 6 + 4 4 2 2 + 116 2 5 ,
¢ 50 [ ] = 128 α 2 3 4 + 34 α 3 2 4 2 10 3 α 6 2 α 2 3 3 + 166 3 4 2 2 + α 2 2 6 + 62 α 2 6 80 α 3 3 + 4 α 4 2 + 56 4 5 2 68 3 2 5 2 2 68 3 3 4 2 3 33 3 6 2 3 2 4 α 2 3 + 157 3 2 4 2 + 43 3 4 2 2 2 + 20 3 2 6 2 3 15 4 6 2 2 + 3 2 2 α 2 3 2 232 α 2 4 3 + 254 α 2 2 3 2 + 32 α 3 5 + 86 α 2 3 4 32 α 2 2 5 + 10 α 2 6 + 63 2 6 554 3 3 129 4 2 310 2 4 3 14 2 3 4 + 603 2 2 3 2 36 2 2 5 + 23 2 6 + 64 3 5 + 31 2 3 4 ,
¢ 51 [ ] = 2 α 2 2 3 17 3 4 2 + α 3 2 + α 2 4 + 5 6 2 + 34 2 4 8 5 + 44 3 2 93 2 2 3 + 41 2 4 ,
¢ 52 [ ] = 12 α 2 3 3 4 α 2 2 4 8 α 2 3 2 + 34 3 5 2 + 34 3 2 4 2 2 10 3 6 2 2 + 4 α 3 4 4 α 2 5 83 3 3 2 + 4 4 2 2 + 56 2 5 + 24 6 214 2 3 3 + 68 2 2 4 + 246 2 3 2 32 2 5 120 3 4 ,
¢ 53 [ ] = 128 α 2 3 4 + 34 α 3 2 4 2 10 3 α 6 2 α 2 3 3 + 166 3 4 2 2 + α 2 2 6 + 62 α 2 6 80 α 3 3 + 4 α 4 2 + 56 4 5 2 68 3 2 5 2 2 68 3 3 4 2 3 33 3 6 2 3 2 4 α 2 3 + 157 3 2 4 2 + 43 3 4 2 2 2 + 20 3 2 6 2 3 15 4 6 2 2 + 3 2 2 α 2 3 2 232 α 2 4 3 + 254 α 2 2 3 2 + 32 α 3 5 + 86 α 2 3 4 32 α 2 2 5 + 10 α 2 6 37 2 6 602 3 3 90 4 2 144 2 4 3 50 2 3 4 + 665 2 2 3 2 46 2 2 5 + 21 2 6 + 132 3 5 109 2 3 4 ,
¢ 54 [ ] = 6 2 4 + 10 2 2 3 2 2 4 2 3 2 ,
¢ 55 [ ] = 50 2 5 126 2 3 3 + 50 2 2 4 + 62 2 3 2 16 2 5 24 3 4 + 5 6 ,
¢ 56 [ ] = 64 2 6 + 170 2 4 3 52 2 3 4 104 2 2 3 2 + 6 2 2 5 + 36 2 3 4 + 9 3 3 + 4 2 6 2 3 5 2 4 2 ,
$$ v^{[h]} - \xi = z^{[h]} - \frac{\ln \varsigma(z^{[h]})}{\ln \varsigma^{[*]}(x^{[h]})} \left[ \left( \frac{\ln \varsigma(x^{[h]}) - \ln \varsigma(y^{[h]})}{\ln \varsigma(x^{[h]}) - 2\ln \varsigma(y^{[h]})} \right)^{2} + \frac{\ln \varsigma(z^{[h]})}{\ln \varsigma(y^{[h]}) - \alpha \ln \varsigma(z^{[h]})} \right] - \xi = \left( 4 A_2^{6} - 8 A_2^{4} A_3 + 4 A_2^{2} A_3^{2} \right) (\epsilon^{[h]})^{7} + ¢_{57}^{[*]} (\epsilon^{[h]})^{8}, $$
$$ \epsilon^{[h+1]} = \left( 4 A_2^{6} - 8 A_2^{4} A_3 + 4 A_2^{2} A_3^{2} \right) (\epsilon^{[h]})^{7} + ¢_{57}^{[*]} (\epsilon^{[h]})^{8}, $$
where ¢ 57 [ ] = α 2 7 + 3 α 2 5 3 + 70 2 7 3 2 3 α 3 2 299 2 5 3 + 2 α 3 3 + 79 2 4 4 + 401 2 3 3 2 4 2 3 5 196 2 2 3 4 152 2 3 3 13 2 2 6 + 44 2 3 5 + 12 2 4 2 + 63 3 2 4 10 3 6 .
Thus, the theorem is proven.  □

3. Numerical Results

The numerical results of the multiplicative calculus-based scheme indicate the practical applicability of the suggested approach as well as its performance in solving (1). They provide quantitative evidence to support the theoretical claims, validate the algorithm's correctness, and allow comparison with existing methods. The numerical results emphasize significant findings such as convergence rates, computational efficiency, and error behavior, making them critical for evaluating the overall performance of the scheme. Thus, in this section, we demonstrate the effectiveness and stability of the method by examining several engineering applications using the following termination criteria, implemented in Maple 18:
$$ \text{TCR-I:} \quad \epsilon^{[h]} = \left| x^{[h+1]} - x^{[h]} \right| \leq 10^{-16}, $$
$$ \text{TCR-II:} \quad \epsilon^{[h]} = \left| \varsigma(x^{[h]}) \right| \leq 10^{-16}, $$
where ϵ [ h ] represents the residual error using the stopping criteria (TCR-I and TCR-II). Algorithm 1 is used to solve nonlinear Equation (1) using a multiplicative-based numerical scheme SR 1 [ ] .
Algorithm 1: For Numerical Scheme SR 1 [ ]
To show the efficiency and consistency of the proposed multiplicative scheme, we consider the following well-known numerical schemes for comparison. Fang et al. [27] proposed the following two seventh-order schemes:
v [ h ] = z [ h ] 5 ς ( x [ h ] ) 2 + 3 ς ( y [ h ] ) 2 ς ( x [ h ] ) 2 + 7 ς ( y [ h ] ) 2 ς ( z [ h ] ) ς ( x [ h ] ) ,
where y [ h ] = x [ h ] ς ( x [ h ] ) ς ( x [ h ] ) , z [ h ] = y [ h ] 3 ς ( x [ h ] ) 2 + ς ( y [ h ] ) 2 2 ς ( x [ h ] ) ς ( y [ h ] ) + 2 ς ( y [ h ] ) 2 ς ( y [ h ] ) ς ( x [ h ] ) . The iterative scheme ( SR 2 [ ] ) has a convergence order of seven and satisfies the following error equations:
v [ h ] ζ x [ h ] ζ 7 = ϵ [ h ] ϵ [ h ] 7 = 2 4 3 2 2 3
and the second scheme ( SR 3 [ ] ) is
v [ h ] = z [ h ] 3 ς ( x [ h ] ) 2 + ς ( y [ h ] ) 2 2 ς ( x [ h ] ) ς ( y [ h ] ) + 2 ς ( y [ h ] ) 2 ς ( z [ h ] ) ς ( x [ h ] ) ,
where y [ h ] = x [ h ] ς ( x [ h ] ) ς ( x [ h ] ) , z [ h ] = y [ h ] ς ( x [ h ] ) + ς ( y [ h ] ) 2 ς ( x [ h ] ) 5 ς ( y [ h ] ) 2 ς ( y [ h ] ) ς ( x [ h ] ) . The iterative scheme (51) has a convergence order of seven and satisfies the following error equations:
v [ h ] ζ x [ h ] ζ 7 = ϵ [ h ] ϵ [ h ] 7 = 1 2 2 4 3 2 2 2 3 .
Hu et al. [28] proposed the following iterative scheme ( SR 4 [ ] ) for approximating one root of (1):
v [ h ] = z [ h ] 2 ς ( x [ h ] ) 2 ς ( x [ h ] ) 2 4 ς ( x [ h ] ) ς ( y [ h ] ) + ς ( y [ h ] ) 2 ς ( z [ h ] ) ς ( x [ h ] ) ,
where y [ h ] = x [ h ] ς ( x [ h ] ) ς ( x [ h ] ) , z [ h ] = y [ h ] ς ( y [ h ] ) ς ( x [ h ] ) + ς ( y [ h ] ) ς ( y [ h ] ) ς ( x [ h ] ) 2 . The iterative scheme (53) has a convergence order of seven and satisfies the following error equations:
v [ h ] ζ x [ h ] ζ 7 = ϵ [ h ] ϵ [ h ] 7 = 10 2 4 2 2 2 3 .
Janngam et al. [29] developed the following seventh-order scheme ( SR 5 [ ] ) as follows:
v [ h ] = z [ h ] ς ( z [ h ] ) 2 ς ( y [ h ] ) ς ( x [ h ] ) 2 ς ( x [ h ] ) ς ( y [ h ] ) ς ( x [ h ] ) ς ( y [ h ] ) + ς ( y [ h ] ) ς ( y [ h ] ) ,
where y [ h ] = x [ h ] ς ( x [ h ] ) ς ( x [ h ] ) , z [ h ] = y [ h ] ς ( y [ h ] ) ς ( y [ h ] ) ς ( x [ h ] ) ς ( y [ h ] ) 2 ς ( y [ h ] ) ς ( x [ h ] ) . The iterative scheme (55) has a convergence order of seven and satisfies the following error equations:
v [ h ] ζ x [ h ] ζ 7 = ϵ [ h ] ϵ [ h ] 7 = 3 2 4 3 3 2 2 3 2 .
Srisarakham et al. [30] proposed the following iterative scheme ( SR 6 [ ] ) for approximating the simple root of (1):
v [ h ] = z [ h ] x [ h ] z [ h ] ς ( z [ h ] ) ς ( x [ h ] ) 2 2 ς ( z [ h ] )
where y [ h ] = x [ h ] ς ( x [ h ] ) ς ( x [ h ] ) , z [ h ] = y [ h ] ς ( y [ h ] ) ς ( y [ h ] ) ς ( y [ h ] ) 2 ς ( y [ h ] ) ς ( x [ h ] ) 2 , ς ( y [ h ] ) = 2 y [ h ] x [ h ] 2 ς ( y [ h ] ) + ς ( x [ h ] ) 3 ς ( y [ h ] ) ς ( x [ h ] ) y [ h ] x [ h ] . The iterative scheme (57) has a convergence order of seven and satisfies the following error equations:
v [ h ] ζ x [ h ] ζ 7 = ϵ [ h ] ϵ [ h ] 7 = 2 2 6 2 5 9 2 2 3 + 3 2 2 4 2 2 3 2 3 2 2 + 3 .

3.1. Example 1: Force Acting Between Particles—Mechanical Engineering Application

Understanding material behavior, structural integrity, and system dynamics in mechanical engineering [31,32] is primarily dependent on the force that exists between particles. Depending on the materials and environment, forces between particles can be gravitational, electromagnetic, or contact. Inter-atomic forces in solids (e.g., van der Waals forces or covalent bonds) influence the strength and deformation properties of stressed materials. Particle interactions in fluids are critical for understanding viscosity, flow dynamics, and turbulence. In mechanical systems, particle force governs friction, wear, and fatigue, all of which affect machine performance and lifespan. Understanding particle interaction is critical for improving production processes in fields such as powder metallurgy and fluidized bed reactors. In order to predict and control system behavior, mechanical engineers must accurately model these forces in order to assure efficacy and safety in design and manufacture. Small-scale manipulation of particle forces can also result in new developments in engineering applications in the fields of materials science and nanotechnology. We present the following nonlinear equation as [33]
$$ \varsigma(x) = \frac{g_1^{[*]} g_2^{[*]}\, x}{2 g_3^{[*]}} \left( 1 - \frac{x}{\sqrt{x^{2} + g_4^{[*]}}} \right) - g_5^{[*]}, \tag{59} $$
where $g_1^{[*]} = 9.4 \times 10^{-6}\,\mathrm{C}$, $g_2^{[*]} = 2.4 \times 10^{-5}\,\mathrm{C}$, $g_3^{[*]} = 0.885 \times 10^{-12}\,\mathrm{C^2/(N\,m^2)}$, $g_4^{[*]} = 0.1\,\mathrm{m}$, and $g_5^{[*]} = 0.3\,\mathrm{N}$. Substituting the values of $g_1^{[*]}$–$g_5^{[*]}$ into (59) gives
$$ \varsigma(x) = 127.4576271\, x \left( 1 - \frac{x}{\sqrt{x^{2} + 0.1}} \right) - 0.3. \tag{60} $$
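The two real solutions of (60), $\zeta_1 \approx 0.002371507708$ and $\zeta_2 \approx 21.23940674$, can be checked directly against the reconstructed form of the equation (a verification sketch, not part of the paper; the residual tolerance is our choice):

```python
import math

# Reconstructed form of Equation (60)
f = lambda x: 127.4576271 * x * (1 - x / math.sqrt(x * x + 0.1)) - 0.3

for root in (0.002371507708, 21.23940674):
    print(root, f(root))  # residuals are small at both reported roots
```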
Equation (60) has the following real solutions: $\zeta_1 = 0.002371507708$ and $\zeta_2 = 21.23940674$. The numerical results of the simple root-finding methods for (60) include measurements of elapsed time, percentage convergence (Per-Convergence), percentage divergence (Per-Divergence), and the number of iterations for the iterative schemes $SR_1^{[*]}$ and $SR_1$–$SR_6$, as shown in Table 1 and Figure 1. In Figure 1, the convergence–divergence region is represented by a pie chart using an autumn color map: the method with the larger convergence region is marked in yellow, divergence is marked in red, and the intermediate region is determined by the rate of convergence or divergence. Figure 1 clearly indicates that, compared with $SR_1$–$SR_6$, $SR_1^{[*]}$ has a larger convergence region.
Table 1 shows that, in terms of elapsed time, number of iterations, percentage convergence, and percentage divergence, our newly developed method performs better than existing methods of the same order. The initial starting values were selected within $10^{-1}$ of the exact solution.
Table 2 presents a numerical analysis of the results to demonstrate the stability of the multiplicative simple root-finding approach when compared to other methods. Table 2 displays A-Iterations, which denotes the average number of iterations; N-Functions, which indicates the total number of functions evaluated; N-Derivative, which indicates the number of derivatives per iteration; and Local-COC, which indicates the local computational order of convergence. The bar graph in Figure 2a depicts the total number of function and derivative evaluations in each iteration. In terms of stability, the multiplicative simple root-finding method $SR_1^{[*]}$ is more stable compared with $SR_1$–$SR_6$.
For the residual error of the numerical schemes under the different stopping criteria TCR-I and TCR-II, method $SR_1^{[*]}$ requires fewer total iterations (It-n) and less CPU time (Figure 2b) than schemes $SR_1$–$SR_6$, as presented in Table 2. The residual error graph for $SR_1^{[*]}$ and $SR_1$–$SR_6$ is displayed in Figure 2c. The data in Table 3 and Figure 2a–c suggest that method $SR_1^{[*]}$ is more consistent and stable than $SR_1$–$SR_6$ for solving engineering application 1.
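The local computational order of convergence (Local-COC) reported in the tables can be estimated from consecutive iterates; a sketch (the standard three-ratio estimator, exercised here on classical Newton for $x^2 - 2 = 0$, which is our own example):

```python
import math

def local_coc(x0, x1, x2, x3):
    # rho ≈ ln(|x3-x2|/|x2-x1|) / ln(|x2-x1|/|x1-x0|)
    e0, e1, e2 = abs(x1 - x0), abs(x2 - x1), abs(x3 - x2)
    return math.log(e2 / e1) / math.log(e1 / e0)

# Generate iterates with classical Newton on f(x) = x^2 - 2 (order 2)
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))

rho = local_coc(*xs)
print(rho)  # ≈ 2 for a quadratically convergent method
```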

3.2. Example 2: Casson Nanofluids—Mechanical Engineering Application

The Casson equation describes a class of non-Newtonian fluids known as Casson nanofluids, which exhibit yield stress behavior—that is, the fluid only flows when it is subjected to a particular stress threshold. The dispersion of nanoparticles in such fluids improves their thermal conductivity and other physical characteristics. These fluids are actively explored in engineering because of their uncommon combination of solid and fluid properties, which provide significant advantages in heat transfer applications and give rise to the following nonlinear equation as [34]
$$ \varsigma(x) = \frac{x^{8}}{441} - \frac{8}{63} x^{5} - 0.05714285714\, x^{4} + \frac{16}{9} x^{2} - 3.624489796\, x + 0.36. \tag{61} $$
Microelectronics cooling, lubrication systems, and biomedical engineering for personalized drug administration are among the most important uses. High-efficiency heat transmission is critical in thermal management systems such as solar collectors and nuclear reactors, which also use Casson nanofluids. They are an excellent option for simplifying industrial activities that require precise temperature control due to their ability to control heat in complex systems. They are also essential in complex production processes like material synthesis and 3D printing due to their improved rheological properties. The exact roots are given as follows:
$$ \zeta_1 = 3.822391, \quad \zeta_2 = 0.104694, \quad \zeta_{3,4} = 2.2 \pm 1.8i, \quad \zeta_{5,6} = 1.2 \pm 3.4i, \quad \zeta_{7,8} = 1.5 \pm 0.9i. $$
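The two real roots can be checked against the reconstructed polynomial (61) (a verification sketch, not part of the paper; the tolerances reflect the six-digit rounding of the printed roots):

```python
# Reconstructed form of Equation (61)
f = lambda x: (x**8 / 441 - (8 / 63) * x**5 - 0.05714285714 * x**4
               + (16 / 9) * x**2 - 3.624489796 * x + 0.36)

for root in (3.822391, 0.104694):
    print(root, f(root))  # residuals are small at both real roots
```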
The color map of autumn is used in Figure 3 to represent the convergence–divergence region using a pie chart. If a method has a larger convergence region than the other schemes, it is marked in yellow, and if there is divergence, it is marked in red; the intermediate region depends on the rate of convergence or divergence. Figure 3 clearly indicates that, compared with $SR_1$–$SR_6$, $SR_1^{[*]}$ has a larger convergence region.
Table 4 shows that, in terms of elapsed time, number of iterations, percentage convergence, and percentage divergence, our newly developed method performs better than existing methods of the same order. The initial starting values were selected within $10^{-1}$ of the exact solution.
Table 5 presents a numerical analysis of the results to demonstrate the stability of the multiplicative simple root-finding approach when compared to other methods. Table 5 shows the average number of iterations (A-Iterations), the total number of functions evaluated (N-Functions), the number of derivatives evaluated per iteration (N-Derivative), and the local computational order of convergence (Local-COC). The bar graph in Figure 4a depicts the total number of function and derivative evaluations in each iteration. In terms of stability, the multiplicative simple root-finding method $SR_1^{[*]}$ is more stable compared with $SR_1$–$SR_6$.
For the residual error of the numerical schemes under the different stopping criteria TCR-I and TCR-II, method $SR_1^{[*]}$ requires fewer total iterations (It-n) and less CPU time (Figure 4b) than schemes $SR_1$–$SR_6$, as shown in Table 6. The residual error graph for $SR_1^{[*]}$ and $SR_1$–$SR_6$ is displayed in Figure 4c. The data in Table 6 and Figure 4a–c suggest that method $SR_1^{[*]}$ is more consistent and stable than $SR_1$–$SR_6$ for solving engineering application 2.

3.3. Example 3: Ocean Engineering Problem

Building and maintaining structures that can withstand harsh maritime environments is a common task for ocean engineering issues. The issues of managing wave, tide, and current forces arise on offshore platforms, pipelines, and coastal defenses. To make matters more complicated are biofouling, corrosion, and material fatigue resulting from exposure to seawater. The needs of technology requirements must be balanced against environmental issues, such as minimizing the impact on marine habitat. The key areas of innovation driving research and solutions in this discipline include deep sea exploration, ocean energy, and sustainable resource extraction. The height of the standing wave is determined by the following nonlinear equations [35]:
h = h₀ [ sin(2πx/λ₁) cos(2πλ₂t/λ₁) ] + e^(−x),
where x is the distance from the wave source and t is the amount of time that has passed since the wave was produced. The wave velocity, the height of the wave at the source, and the wavelength are represented by λ₂, h₀, and λ₁, respectively. Using λ₁ = 16, t = 12, λ₂ = 48, and h = 0.3h₀ in (62), the following multiplicative nonlinear function is obtained:
ς(x) = e^(−x) + sin(πx/8) cos(72π) − 0.3.
Equation (63) has the following solutions: ζ₁ = 0.34, ζ₂ = 1.6609, and ζ₃,₄ = 2.88 ± 3.05i.
The reference root of Equation (63) is ζ₁ = 0.34. An autumn color map is used in Figure 5 to represent the convergence–divergence region as a pie chart. If a method has a larger convergence region than another, it is marked in yellow, and divergence is marked in red. The shading between these extremes depends on the rate of convergence or divergence. Figure 5 clearly indicates that SR 1 [ ] has a larger convergence region than SR 1 [ ] – SR 6 [ ] .
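The percentage convergence reported in these pie charts can, in principle, be reproduced by sampling a grid of initial guesses and counting the runs that reach a root within a fixed iteration budget. A minimal sketch, using a simple illustrative function and classical Newton iteration rather than the paper's schemes:

```python
def percent_convergence(step, f, x_grid, tol=1e-10, max_iter=20):
    """Percentage of initial guesses from which the iteration `step`
    reaches a root, i.e. |f| drops below `tol` within `max_iter` steps."""
    converged = 0
    for x0 in x_grid:
        x = x0
        for _ in range(max_iter):
            x = step(x)
            if abs(f(x)) < tol:
                converged += 1
                break
    return 100.0 * converged / len(x_grid)

# Illustrative run: classical Newton on f(x) = x^2 - 2 over positive starts.
f = lambda x: x * x - 2.0
newton = lambda x: x - (x * x - 2.0) / (2.0 * x)
grid = [0.5 + 0.05 * k for k in range(51)]  # initial guesses in [0.5, 3.0]
pct = percent_convergence(newton, f, grid)
```

For this well-behaved test function, every positive starting value converges, so the computed percentage is 100%; for harder problems the same experiment yields the mixed convergence–divergence splits shown in the pie charts.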
Table 7 shows that, in terms of elapsed time, number of iterations, and percentage convergence and divergence, our newly developed method performs better than existing methods of the same order. The initial starting values were selected as close to the exact solution as possible, up to ϵ = 10⁻¹.
Table 8 presents a numerical analysis of the results to demonstrate the stability of the multiplicative simple root-finding approach compared to other methods. Table 8 displays A-Iterations, the average number of iterations; N-Functions, the total number of function evaluations; N-Derivatives, the number of derivative evaluations per iteration; and Local-COC, the local computational order of convergence. The bar graph in Figure 6a depicts the total number of function and derivative evaluations in each iteration. In terms of stability, the multiplicative family of simple root-finding methods SR 1 [ ] is more stable than SR 1 [ ] – SR 6 [ ] .
For the residual error of the numerical scheme under different stopping criteria, i.e., TCR-I and TCR-II, method SR 1 [ ] requires fewer total iterations (It-n) and less CPU time (Figure 6b) than schemes SR 1 [ ] – SR 6 [ ] . The residual error graph for SR 1 [ ] – SR 6 [ ] is displayed in Figure 6c. The data in Table 9 and Figure 6a–c suggest that method SR 1 [ ] is more consistent and stable than SR 1 [ ] – SR 6 [ ] for solving engineering application 3.
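TCR-I and TCR-II above denote two termination criteria; in studies of this kind they are typically an increment test |x_{n+1} − x_n| < ϵ and a residual test |ς(x_{n+1})| < ϵ. The exact tolerances used in the tables are not restated here, so the values below are illustrative. A sketch with a classical Newton step:

```python
def iterate_until(step, f, x0, eps_inc=1e-12, eps_res=1e-12, max_iter=100):
    """Run an iteration until both termination criteria hold:
    an increment test |x_{n+1} - x_n| < eps_inc (TCR-I style) and
    a residual test |f(x_{n+1})| < eps_res (TCR-II style).
    Returns (approximate root, iterations used)."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = step(x)
        if abs(x_new - x) < eps_inc and abs(f(x_new)) < eps_res:
            return x_new, n
        x = x_new
    return x, max_iter

f = lambda x: x * x * x - 2.0              # illustrative cubic, root 2^(1/3)
step = lambda x: x - f(x) / (3.0 * x * x)  # classical Newton step
root, its = iterate_until(step, f, x0=1.0)
```

Counting the iterations consumed until both criteria hold is exactly how the It-n columns of the residual-error tables are populated.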

3.4. Example 4: Mechanical Engineering Applications

Particularly in complex, nonlinear systems, numerical methods play a crucial role in solving ordinary differential equations when analytical solutions prove unattainable or difficult to find. In domains like engineering, physics, and finance, they are helpful because they provide approximations that can accommodate a wider variety of boundary and initial conditions. Unlike analytical methods, numerical procedures can handle real-world, data-driven problems, including those with irregularities. These scalable and computationally efficient methods are critical for solving complex systems of equations. They also enable iterative refinement of solutions, which improves accuracy with controlled error margins. This approach also allows a more flexible representation of anomalous diffusion and wave propagation, which improves predictions in fields such as biology and materials science, and leads to the following fractional initial value problem [36]:
d²ς(x)/dx² + x (dς(x)/dx)² + 13x + 1 = 0,  0 ≤ x ≤ 1,  ς(0) = 0,  ς′(0) = (e² − 1)/(e² + 1).
Using the method from [37], we simulate (64) with the following nonlinear approximation:
ς₃(x) ≈ 0.0259011180x⁴ − 0.1066166681x³ − 0.2099871708x² + 0.7615941560x.
Equation (65) has the following solutions: 0.0 , 1.6609 , and 2.88 ± 3.05 i .
The exact root of Equation (65) is zero. An autumn color map is used in Figure 7 to represent the convergence–divergence region as a pie chart. If a method has a larger convergence region than another, it is marked in yellow, and divergence is marked in red. The shading between these extremes depends on the rate of convergence or divergence. Figure 7 clearly indicates that SR 1 [ ] has a larger convergence region than SR 1 [ ] – SR 6 [ ] .
Table 10 shows that, in terms of elapsed time, number of iterations, and percentage convergence and divergence, our newly developed method performs better than existing methods of the same order. The initial guess values were selected as close to the exact solution as possible, up to ϵ = 10⁻¹.
Table 11 presents a numerical analysis of the results to demonstrate the stability of the multiplicative simple root-finding approach compared to other methods. Table 11 displays A-Iterations, the average number of iterations; N-Functions, the total number of function evaluations; N-Derivatives, the number of derivative evaluations per iteration; and Local-COC, the local computational order of convergence. The bar graph in Figure 8a depicts the total number of function and derivative evaluations in each iteration. In terms of stability, the multiplicative family of simple root-finding methods SR 1 [ ] is more stable than SR 1 [ ] – SR 6 [ ] .
For the residual error of the numerical scheme under different stopping criteria, i.e., TCR-I and TCR-II, method SR 1 [ ] requires fewer total iterations (It-n) and less CPU time (Figure 8b) than schemes SR 1 [ ] – SR 6 [ ] . The residual error graph for SR 1 [ ] and SR 1 [ ] – SR 6 [ ] is displayed in Figure 8c. The data in Table 12 and Figure 8a–c suggest that method SR 1 [ ] is more consistent and stable than SR 1 [ ] – SR 6 [ ] for solving engineering application 4.

3.5. Example 5: Chemical Engineering Applications

Multiplicative calculus is introduced to extend classical differential equations [38,39] and to allow more precise descriptions of physical systems such as electrical circuits, fluid flow, and viscoelastic materials. This approach also allows a more flexible description of anomalous diffusion and wave propagation, which improves predictions in domains such as biology and materials science, and leads to the following fractional initial value problem [40]:
d²ς(x)/dx² + ς(x)³ = 0,  0 ≤ x ≤ 1,  ς(0) = 0,  ς′(0) = 1.
Using the method from [41], we simulate (66) with the following nonlinear approximation:
ς₃(x) ≈ 0.5x³ − 0.5x² + x.
Equation (67) has the following solutions: 0.0, −1.414213562, and 1.414213562.
The exact root of Equation (67) is zero. Figure 9 uses an autumn color map to show the convergence–divergence region as a pie chart. If one method has a larger convergence region than another, it is marked in yellow; divergence is marked in red. The shading between these extremes depends on the rate of convergence or divergence. Figure 9 demonstrates that SR 1 [ ] has a larger convergence region than SR 1 [ ] – SR 6 [ ] .
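All of these schemes rest on the multiplicative derivative f*(x) = lim_{h→0} (f(x + h)/f(x))^{1/h} = exp(f′(x)/f(x)) [9]. A small numerical check with an illustrative function (for f(x) = e^{x²}, the multiplicative derivative is e^{2x}, so f*(1) = e²):

```python
import math

def mult_derivative_numeric(f, x, h=1e-6):
    """Finite-difference approximation of the multiplicative derivative,
    (f(x + h) / f(x)) ** (1 / h), valid for positive-valued f."""
    return (f(x + h) / f(x)) ** (1.0 / h)

# For f(x) = exp(x^2), the multiplicative derivative is exp(2x).
f = lambda x: math.exp(x * x)
approx = mult_derivative_numeric(f, 1.0)
exact = math.exp(2.0)
```

The finite-difference value agrees with exp(f′/f) to roughly O(h), which is enough to sanity-check closed-form multiplicative derivatives used in such schemes.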
Using the information from Table 13 and the optimal parameter values from the dynamical analysis, these values are applied in the parallel schemes to find all solutions of the nonlinear problems. With initial guesses within ϵ = 10⁻¹ of the exact solution, the convergence rate of the simultaneous methods increases, leading to faster convergence to the exact roots in fewer iterations.
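The parallel (simultaneous) character of such schemes can be illustrated with the classical Weierstrass/Durand–Kerner iteration, which refines all root approximations at once. This is a generic sketch, not the seventh-order scheme of this paper, applied to the monic form of the Example 5 cubic under the assumed sign pattern 0.5x³ − 0.5x² + x:

```python
def durand_kerner(coeffs, n_iter=100):
    """Weierstrass/Durand-Kerner simultaneous iteration for all roots of
    the monic polynomial x^n + coeffs[0]*x^(n-1) + ... + coeffs[-1]."""
    def p(z):  # Horner evaluation of the monic polynomial
        val = 1.0 + 0.0j
        for a in coeffs:
            val = val * z + a
        return val

    n = len(coeffs)
    roots = [(0.4 + 0.9j) ** k for k in range(n)]  # standard distinct starts
    for _ in range(n_iter):
        for i in range(n):
            denom = 1.0 + 0.0j
            for j in range(n):
                if j != i:
                    denom *= roots[i] - roots[j]
            roots[i] = roots[i] - p(roots[i]) / denom
    return roots

# Monic form of the assumed cubic 0.5x^3 - 0.5x^2 + x: x^3 - x^2 + 2x.
roots = durand_kerner([-1.0, 2.0, 0.0])
```

Each sweep updates every approximation using the current positions of all the others, which is the sense in which simultaneous methods return all roots in parallel.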
Table 14 presents a numerical analysis of the results to demonstrate the stability of the multiplicative simple root-finding approach compared to other methods. Table 14 displays A-Iterations, the average number of iterations; N-Functions, the total number of function evaluations; N-Derivatives, the number of derivative evaluations per iteration; and Local-COC, the local computational order of convergence. The bar graph in Figure 10a depicts the total number of function and derivative evaluations in each iteration. In terms of stability, the multiplicative family of simple root-finding methods SR 1 [ ] is more stable than SR 1 [ ] – SR 6 [ ] .
For the residual error of the numerical scheme under different stopping criteria, i.e., TCR-I and TCR-II, method SR 1 [ ] requires fewer total iterations (It-n) and less CPU time (Figure 10b) than schemes SR 1 [ ] – SR 6 [ ] . The residual error graph for SR 1 [ ] and SR 1 [ ] – SR 6 [ ] is displayed in Figure 10c. The data in Table 15 and Figure 10a–c suggest that method SR 1 [ ] is more consistent and stable than SR 1 [ ] – SR 6 [ ] for solving engineering application 5.

4. Summary

The numerical results for the multiplicative calculus-based root-finding techniques SR 1 [ ] and SR 1 [ ] – SR 6 [ ] across five engineering applications highlight how well the newly developed technique SR 1 [ ] performs on a number of important criteria. The findings, compiled in Tables 1–15, offer a thorough comparison of the approaches in terms of convergence rate, CPU time, function evaluations, percentage convergence–divergence, local computational order of convergence, and elapsed time, and demonstrate the superiority of SR 1 [ ] over SR 1 [ ] – SR 6 [ ] .
Rates of convergence and stability: In all engineering applications, SR 1 [ ] consistently attains a higher rate of convergence, exhibiting stability and robustness in approximating solutions. The reliability of SR 1 [ ] , even in complicated engineering contexts, is seen in Tables 1, 4, 7 and 10, where its percentage convergence outperforms that of the alternative approaches SR 1 [ ] – SR 6 [ ] . Such results imply that SR 1 [ ] can be used with confidence in real-world engineering situations where reliable iterative techniques are needed.
CPU Utilization and Efficiency: Based on Table 2, SR 1 [ ] requires significantly less CPU time to reach a solution than SR 1 [ ] – SR 6 [ ] . This efficiency not only lowers computational expense but also establishes SR 1 [ ] as a time-efficient solution, which is particularly advantageous for large-scale engineering applications where computational resources are limited. The fact that SR 1 [ ] utilizes CPU time effectively without losing accuracy highlights its suitability for real-time applications where speed is crucial.
Function Evaluations and Computational Cost: Function evaluations are essential in assessing the computational cost of iterative techniques. According to Table 2, SR 1 [ ] needs fewer function evaluations per iteration than SR 1 [ ] – SR 6 [ ] . Fewer function calls mean less computation overall, making SR 1 [ ] the more computationally efficient iterative technique. This economy of function evaluations is a major advantage for engineering problems that involve resource-intensive calculations.
Local Computational Order of Convergence: Table 3 shows that SR 1 [ ] attains a higher Local-COC than SR 1 [ ] – SR 6 [ ] , indicating a quicker approach to the exact solution. This attribute is especially helpful when high-accuracy solutions must be found in few iterations. Faster convergence and reduced sensitivity to initial guesses are two benefits of SR 1 [ ] 's high Local-COC, which widens the range of initial settings over which it can be applied in engineering applications.
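The Local-COC values reported throughout are commonly estimated from successive increments via ρ ≈ ln(d₃/d₂)/ln(d₂/d₁), where dₖ are consecutive differences of the iterates. A sketch, using classical Newton (order two) as the illustrative iteration:

```python
import math

def local_coc(xs):
    """Estimate the computational order of convergence from the last
    four iterates of a convergent sequence."""
    d1 = abs(xs[-3] - xs[-4])
    d2 = abs(xs[-2] - xs[-3])
    d3 = abs(xs[-1] - xs[-2])
    return math.log(d3 / d2) / math.log(d2 / d1)

# Newton's method for sqrt(2): x -> x/2 + 1/x, known quadratic convergence.
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x / 2.0 + 1.0 / x)
coc = local_coc(xs)
```

For the seventh-order schemes discussed in the paper, the same estimator applied to their iterates yields the Local-COC values near 7 reported in the tables.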
Thus, the comparative study makes it evident that SR 1 [ ] is a novel and effective tool for engineering computations and that it is superior to the existing iterative techniques SR 1 [ ] – SR 6 [ ] . The method's consistency across many metrics (convergence rate, CPU time, function evaluations, convergence–divergence percentage, Local-COC, and elapsed time) demonstrates its novelty and practicality. According to the results presented in Tables 1–15, SR 1 [ ] is not only more effective but also flexible enough to handle complicated engineering problems, which makes it a valuable development for applications where precision and processing power are critical.

5. Conclusions

To find simple roots of multiplicative nonlinear equations, we constructed a new class of multiplicative-type techniques. According to our convergence analysis, the multiplicative calculus-based parallel schemes have a convergence order of seven. To analyze the convergence behavior of SR 1 [ ] , pie charts of the percentage convergence were generated (see Tables 1, 4, 7, 10 and 13 and Figures 1, 3, 5, 7 and 9). The total number of function and derivative evaluations was computed to assess the efficiency of the proposed approach against existing methods (Figures 2a, 4a, 6a, 8a and 10a). Several nonlinear problems were solved to check the stability and consistency of SR 1 [ ] in comparison with SR 1 [ ] – SR 6 [ ] . The numerical results in Tables 1–8 demonstrate that the SR 1 [ ] family of parallel schemes is more stable and consistent in terms of CPU time (Figure 6), consistency (Tables 2, 5, 8, 11 and 14), stability (Tables 3, 6, 9, 12 and 15), elapsed time (Figures 2b, 4b, 6b, 8b and 10b), and error behavior (Figures 2c, 4c, 6c, 8c and 10c) for different α ∈ ℝ, outperforming the SR 1 [ ] – SR 6 [ ] methods.
In future work, a new, efficient hybrid multiplicative calculus-based inverse parallel scheme having global convergence behavior [42,43,44] will be developed to address complex engineering and vectorial problems.

Author Contributions

Conceptualization, M.S.; methodology, M.S.; software, M.S. and N.K.; validation, M.S. and N.K.; formal analysis, M.S. and I.A.Ș.; investigation, M.S.; resources, M.S., N.K. and I.A.Ș.; writing—original draft preparation, M.S. and N.K.; writing—review and editing, I.A.Ș.; visualization, M.S.; supervision, M.S.; project administration, I.A.Ș.; funding acquisition, I.A.Ș. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding and first authors.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

References

  1. Blair, P.M.; Weinaug, C.F. Solution of two-phase flow problems using implicit difference equations. Soc. Pet. Eng. J. 1969, 9, 417–424. [Google Scholar] [CrossRef]
  2. Feng, J.; Hu, H.H.; Joseph, D.D. Direct simulation of initial value problems for the motion of solid bodies in a Newtonian fluid Part 1. Sedimentation. J. Fluid Mech. 1994, 261, 5–134. [Google Scholar] [CrossRef]
  3. Levy, D. Chaos theory and strategy: Theory, application, and managerial implications. Strateg. Manag. J. 1994, 15, 167–178. [Google Scholar] [CrossRef]
  4. Shen, S.; Yang, Z.; Li, X.; Zhang, S. Periodic propagation of complex-valued hyperbolic-cosine-Gaussian solitons and breathers with complicated light field structure in strongly nonlocal nonlinear media. Commun. Nonlinear Sci. Numer. Simul. 2021, 103, 106005. [Google Scholar] [CrossRef]
  5. Jia, T.; Liu, Y.Y.; Csóka, E.; Pósfai, M.; Slotine, J.J.; Barabási, A.L. Emergence of bimodality in controlling complex networks. Nature Commun. 2013, 4, 1–6. [Google Scholar] [CrossRef]
  6. Govindaraj, M.; Vetriventhan, M.; Srinivasan, M. Importance of genetic diversity assessment in crop plants and its recent advances: An overview of its analytical perspectives. Genetics Res. Intern. 2015, 1, 431487. [Google Scholar] [CrossRef] [PubMed]
  7. Zou, Z.; Guo, R. The Riemann–Hilbert approach for the higher-order Gerdjikov–Ivanov equation, soliton interactions and position shift. Commun. Nonlinear Sci. Numer. Simul. 2023, 124, 107316. [Google Scholar] [CrossRef]
  8. Shen, S.; Yang, Z.J.; Pang, Z.G.; Ge, Y.R. The complex-valued astigmatic cosine-Gaussian soliton solution of the nonlocal nonlinear Schrödinger equation and its transmission characteristics. Appl. Math. Lett. 2022, 125, 107755. [Google Scholar] [CrossRef]
  9. Bashirov, A.E.; Kurpınar, E.M.; Özyapıcı, A. Multiplicative calculus and its applications. J. Math. Anal. Appl. 2008, 337, 36–48. [Google Scholar] [CrossRef]
  10. Willinger, W.; Govindan, R.; Jamin, S.; Paxson, V.; Shenker, S. Scaling phenomena in the Internet: Critically examining criticality. Proc. Natl. Acad. Sci. USA 2002, 99, 2573–2580. [Google Scholar] [CrossRef]
  11. Harima, Y.; Sakamoto, Y.; Tanaka, S.I.; Kawai, M. Validity of the geometric-progression formula in approximating gamma-ray buildup factors. Nuclear Sci. Eng. 1986, 94, 24–35. [Google Scholar] [CrossRef]
  12. Li, J.; Yang, Z.J.; Zhang, S.M. Periodic collision theory of multiple cosine-Hermite-Gaussian solitons in Schrödinger equation with nonlocal nonlinearity. Appl. Math. Lett. 2023, 140, 108588. [Google Scholar] [CrossRef]
  13. Zhang, J.G.; Song, Q.R.; Zhang, J.Q.; Wang, F. Application of He's frequency formula to nonlinear oscillators with generalized initial conditions. Facta Univ. Ser. Mech. Eng. 2023, 21, 701–712. [Google Scholar] [CrossRef]
  14. Goktas, S.; Yilmaz, E.; Yar, A.C. Multiplicative derivative and its basic properties on time scales. Math. Meth. Appl. Sci. 2022, 45, 2097–2109. [Google Scholar] [CrossRef]
  15. Córdova-Lepe, F. The multiplicative derivative as a measure of elasticity in economics. TEMAT-Theaeteto Atheniensi Mathem. 2006, 2, 1–8. [Google Scholar]
  16. Özyapıcı, A.; Sensoy, Z.B.; Karanfiller, T. Effective Root-Finding Methods for Nonlinear Equations Based on Multiplicative Calculi. J. Math. 2016, 2016, 8174610. [Google Scholar] [CrossRef]
  17. Solaiman, O.S.; Hashim, I. Optimal Eighth-Order Solver for Nonlinear Equations with Applications in Chemical Engineering. Intell. Autom. Soft. Comp. 2021, 27, 1–10. [Google Scholar] [CrossRef]
  18. Liu, L.; Wang, X. Eighth-order methods with high efficiency index for solving nonlinear equations. Appl. Math. Comp. 2010, 215, 3449–3454. [Google Scholar] [CrossRef]
  19. Chun, C. Some fourth-order iterative methods for solving nonlinear equations. Appl. Math. Comput. 2008, 195, 454–459. [Google Scholar] [CrossRef]
  20. Gasimov, Y.S. Some shape optimization problems for eigenvalues. J. Phys. A Math. Theor. 2008, 41, 055202. [Google Scholar] [CrossRef]
  21. Unal, E.; Cumhur, I.; Gokdogan, A. Multiplicative Newton’s Methods with Cubic Convergence. New Trends Math. Sci. 2017, 5, 299–307. [Google Scholar] [CrossRef]
  22. Stanley, D. A multiplicative calculus. Probl. Resour. Issues Math. Undergrad. Stud. 1999, 9, 310–326. [Google Scholar] [CrossRef]
  23. Misirli, E.; Ozyapici, A. Exponential approximations on multiplicative calculus. Proc. Jangjeon Math. Soc. 2009, 12, 227–236. [Google Scholar]
  24. Singh, G.; Bhalla, S.; Behl, R. Higher-order multiplicative derivative iterative scheme to solve the nonlinear problems. Math. Comp. Appl. 2023, 28, 23. [Google Scholar] [CrossRef]
  25. Waseem, M.; Noor, M.A.; Shah, F.A.; Noor, K.I. An efficient technique to solve nonlinear equations using multiplicative calculus. Turkish J. Math. 2018, 42, 679–691. [Google Scholar] [CrossRef]
  26. Kou, J.; Li, Y.; Wang, X. Some variants of Ostrowski’s method with seventh-order convergence. J. Comput. Appl. Math. 2007, 209, 153–159. [Google Scholar] [CrossRef]
  27. Fang, L.; Guo, L.; Hu, Y.; Pang, L. Seventh-order Convergent Iterative Methods for Solving Nonlinear Equations. Inter. J. Appl. Sci. Math. 2016, 3, 195–197. [Google Scholar]
  28. Hu, Y.; Fang, L. A seventh-order convergent Newton-type method for solving nonlinear equations. In Proceedings of the 2010 Second International Conference on Computational Intelligence and Natural Computing, Wuhan, China, 13–14 September 2010; Volume 2, pp. 13–15. [Google Scholar]
  29. Janngam, P.; Tongsan, W.; Comemuang, C. The Seventh-Order Iterative Methods for Solving Nonlinear Equations. Burapha Sci. J. 2023, 28, 1910–1918. [Google Scholar]
  30. Srisarakham, N.; Thongmoon, M. A note on three-step iterative method with seventh order of convergence for solving nonlinear equations. Thai J. Math. 2016, 14, 565–573. [Google Scholar]
  31. Jafari, H.; Ganji, R.M.; Ganji, D.D.; Hammouch, Z.; Gasimov, Y.S. A novel numerical method for solving fuzzy variable-order differential equations with Mittag-Leffler kernels. Fractals 2023, 31, 2340063. [Google Scholar] [CrossRef]
  32. He, J.H.; Yang, Q.; He, C.H.; Abdulrahman, A.A. Pull-down instability of the quadratic nonlinear oscillators. Facta Univ. Ser. Mech. Eng. 2023, 21, 191–200. [Google Scholar] [CrossRef]
  33. Gilat, A.; Subramaniam, V. Numerical methods for engineers and scientists. In An Introduction with Applications Using MATLAB; Wiley: Hoboken, NJ, USA, 2013; p. 20014. [Google Scholar]
  34. Billo, E.J. Excel for Scientists and Engineers: Numerical Methods; John Wiley & Sons: Hoboken, NJ, USA, 2007. [Google Scholar]
  35. Shams, M.; Carpentieri, B. On highly efficient fractional numerical method for solving nonlinear engineering models. Mathematics 2023, 11, 4914. [Google Scholar] [CrossRef]
  36. Shams, M.; Kausar, N.; Samaniego, C.; Agarwal, P.; Ahmed, S.F.; Momani, S. On efficient fractional Caputo-type simultaneous scheme for finding all roots of polynomial equations with biomedical engineering applications. Fractals 2023, 31, 2340075. [Google Scholar] [CrossRef]
  37. Ziada, E.A.A. Solution of Nonlinear Fractional Differential Equations Using Adomian Decomposition Method. J. Syst. Sci. Appl. Math. 2021, 6, 111–119. [Google Scholar]
  38. Juraev, D.A.; Gasimov, Y.S. On the regularization Cauchy problem for matrix factorizations of the Helmholtz equation in a multidimensional bounded domain. Azerb. J. Math. 2022, 12, 142–161. [Google Scholar]
  39. He, J.H.; Moatimid, G.M.; Zekry, M.H. Forced nonlinear oscillator in a fractal space. Facta Univ. Ser. Mech. Eng. 2022, 20, 001–020. [Google Scholar] [CrossRef]
  40. Shams, M.; Rafiq, N.; Kausar, N.; Agarwal, P.; Park, C.; Mir, N.A. On highly efficient derivative-free family of numerical methods for solving polynomial equation simultaneously. Adv. Differ. Equ. 2021, 2021, 1–10. [Google Scholar] [CrossRef]
  41. Ray, S.S.; Bera, R.K. An approximate solution of a nonlinear fractional differential equation by Adomian decomposition method. Appl. Math. Comp. 2005, 167, 561–571. [Google Scholar]
  42. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374. [Google Scholar] [CrossRef]
  43. Can, N.H.; Nikan, O.; Rasoulizadeh, M.N.; Jafari, H.; Gasimov, Y.S. Numerical computation of the time non-linear fractional generalized equal width model arising in shallow water channel. Thermal Sci. 2020, 24, 49–58. [Google Scholar] [CrossRef]
  44. Cordero, A.; Reyes, J.A.; Torregrosa, J.R.; Vassileva, M.P. Stability Analysis of a New Fourth-Order Optimal Iterative Scheme for Nonlinear Equations. Axioms 2023, 13, 34. [Google Scholar] [CrossRef]
Figure 1. The percentage convergence and divergence of the numerical schemes SR 1 [ ] – SR 6 [ ] are shown, from top left onward, for solving the engineering application in example 1.
Figure 2. (a–c) The number of function and derivative evaluations, elapsed time, and residual error of the iterative techniques SR 1 [ ] – SR 6 [ ] for solving the engineering application in example 1.
Figure 3. The percentage convergence and divergence of the numerical schemes SR 1 [ ] – SR 6 [ ] are shown, from top left onward, for solving the engineering application in example 2.
Figure 4. (a–c) The number of function and derivative evaluations, elapsed time, and residual error of the iterative techniques SR 1 [ ] – SR 6 [ ] for solving the engineering application in example 2.
Figure 5. The percentage convergence and divergence of the numerical schemes SR 1 [ ] – SR 6 [ ] are shown, from top left onward, for solving the engineering application in example 3.
Figure 6. (a–c) The number of function and derivative evaluations, elapsed time, and residual error of the iterative techniques SR 1 [ ] – SR 6 [ ] for solving the engineering application in example 3.
Figure 7. The percentage convergence and divergence of the numerical schemes SR 1 [ ] – SR 6 [ ] are shown, from top left onward, for solving the engineering application in example 4.
Figure 8. (a–c) The number of function and derivative evaluations, elapsed time, and residual error of the iterative techniques SR 1 [ ] – SR 6 [ ] for solving the engineering application in example 4.
Figure 9. The percentage convergence and divergence of the numerical schemes SR 1 [ ] – SR 6 [ ] are shown, from top left onward, for solving the engineering application in example 5.
Figure 10. (a–c) The number of function and derivative evaluations, elapsed time, and residual error of the iterative techniques SR 1 [ ] – SR 6 [ ] for solving the engineering application in example 5.
Table 1. Percentage convergence of the schemes for (60).
Schemes | Iterations | Per-Convergence | Per-Divergence | Elapsed Time
SR 1 [ ] | 2.0 | 55% | 0.0% | 1.342 × 10⁻⁵
SR 1 [ ] | 3.0 | 40% | 0.0% | 4.657 × 10⁻⁴
SR 2 [ ] | 4.0 | 25% | 0.0% | 8.125 × 10⁻⁴
SR 3 [ ] | 5.0 | 0.0% | 100% | 6.762 × 10⁻⁴
SR 4 [ ] | 5.0 | 30% | 0.0% | 1.227 × 10⁻⁴
SR 5 [ ] | 20.0 | 20% | 0.0% | 1.561 × 10⁻⁴
SR 6 [ ] | 20.0 | 20% | 0.0% | 9.961 × 10⁻⁴
Table 2. Efficiency analysis of the numerical schemes for (60).
Err | SR 1 [ ] | SR 1 [ ] | SR 2 [ ] | SR 3 [ ] | SR 4 [ ] | SR 5 [ ] | SR 6 [ ]
A-Iterations | 2.0 | 3.0 | 4.0 | 20.0 | 5.0 | 7.0 | 7.0
N-Functions | 4.0 | 4.0 | 5.0 | 20.0 | 6.0 | 4.0 | 4.0
N-Derivatives | 4.0 | 4.0 | 5.0 | 20.0 | 6.0 | 4.0 | 4.0
Total Functions | 8.0 | 8.0 | 10.0 | 40.0 | 12.0 | 8.0 | 8.0
Local-COC | 7.0457 | 6.913 | 6.253 | 1.879 | 6.765 | 6.913 | 6.098
Table 3. Residual error of the multiplicative and classical root-finding methods for solving engineering application (60).
Err | SR 1 [ ] | SR 1 [ ] | SR 2 [ ] | SR 3 [ ] | SR 4 [ ] | SR 5 [ ] | SR 6 [ ]
TCR-I | 1.1 × 10⁻²⁵ | 0.1 × 10⁻⁹ | 5.1 × 10⁻¹² | 2.0 × 10⁻⁸ | 0.7 × 10⁻¹⁶ | 0.2 × 10⁻¹⁰ | 0.2 × 10⁻¹⁰
TCR-II | 1.9 × 10⁻⁴¹ | 1.2 × 10⁻¹⁵ | 1.5 × 10⁻¹⁵ | 5.1 × 10⁻¹³ | 3.5 × 10⁻¹⁹ | 1.0 × 10⁻¹⁴ | 1.0 × 10⁻¹⁴
It-n | 3.0 | 4.0 | 5.0 | 5.0 | 6.0 | 20.0 | 6.0
CPU-T | 0.00045 | 0.00063 | 0.00143 | 0.001154 | 0.00223 | 0.014114 | 0.01411
Table 4. Percentage convergence of the schemes for (61).
Schemes | Iterations | Per-Convergence | Per-Divergence | Elapsed Time
SR 1 [ ] | 5.0 | 75.9876% | 0.0% | 9.192 × 10⁻⁵
SR 1 [ ] | 5.0 | 65.564% | 0.0% | 3.100 × 10⁻⁴
SR 2 [ ] | 6.0 | 37.87% | 0.0% | 1.960 × 10⁻³
SR 3 [ ] | 20.0 | 0.0% | 100% | 9.092 × 10⁻²
SR 4 [ ] | 5.0 | 54.7865% | 0.0% | 0.016 × 10⁻⁴
SR 5 [ ] | 7.0 | 31.8987% | 0.0% | 6.201 × 10⁻³
SR 6 [ ] | 6.0 | 0.0% | 100% | 0.161 × 10⁻³
Table 5. Efficiency analysis of the numerical schemes for (61).
Err | SR 1 [ ] | SR 1 [ ] | SR 2 [ ] | SR 3 [ ] | SR 4 [ ] | SR 5 [ ] | SR 6 [ ]
A-Iterations | 2.0 | 3.0 | 4.0 | 20.0 | 5.0 | 7.0 | 7.0
N-Functions | 4.0 | 4.0 | 5.0 | 20.0 | 6.0 | 4.0 | 4.0
N-Derivatives | 4.0 | 4.0 | 5.0 | 20.0 | 6.0 | 4.0 | 4.0
Total Functions | 8.0 | 8.0 | 10.0 | 40.0 | 12.0 | 8.0 | 8.0
Local-COC | 7.845 | 6.01 | 6.23 | 1.67 | 6.99 | 6.87 | 6.87
Table 6. Residual error of the multiplicative and classical root-finding methods for solving engineering application (61).
Err | SR 1 [ ] | SR 1 [ ] | SR 2 [ ] | SR 3 [ ] | SR 4 [ ] | SR 5 [ ] | SR 6 [ ]
TCR-I | 0.1 × 10⁻²⁵ | 0.1 × 10⁻²³ | 7.7 × 10⁻¹⁷ | 0.9 × 10⁻⁷ | 6.7 × 10⁻¹¹ | 5.5 × 10⁻¹¹ | 9.2 × 10⁻⁹
TCR-II | 2.8 × 10⁻³⁸ | 6.0 × 10⁻³⁴ | 9.0 × 10⁻¹⁵ | 6.6 × 10⁻¹⁷ | 0.5 × 10⁻¹⁹ | 1.7 × 10⁻¹⁹ | 0.6 × 10⁻¹³
It-n | 3.0 | 3.0 | 5.0 | 7.0 | 6.0 | 20.0 | 20.0
CPU-T | 0.0004543 | 0.0006354 | 0.0001323 | 0.008754 | 0.0092365 | 0.0007618 | 0.0007615
Table 7. Percentage convergence of the schemes for (63).
Schemes | Iterations | Per-Convergence | Per-Divergence | Elapsed Time
SR 1 [ ] | 2.0 | 85.234635% | 0.0% | 0.990 × 10⁻⁶
SR 1 [ ] | 3.0 | 79.753664% | 0.0% | 8.190 × 10⁻⁴
SR 2 [ ] | 4.0 | 25.456525% | 0.0% | 9.860 × 10⁻³
SR 3 [ ] | 20.0 | 20.9765% | 30.430% | 4.792 × 10⁻³
SR 4 [ ] | 5.0 | 30.764564% | 0.0% | 7.087 × 10⁻²
SR 5 [ ] | 7.0 | 23.0542645% | 0.0% | 1.109 × 10⁻³
SR 6 [ ] | 20.0 | 17.87684% | 82.543% | 0.101 × 10⁻³
Table 8. Efficiency analysis of the numerical schemes for (63).
Err | SR 1 [ ] | SR 1 [ ] | SR 2 [ ] | SR 3 [ ] | SR 4 [ ] | SR 5 [ ] | SR 6 [ ]
A-Iterations | 3.0 | 3.0 | 4.0 | 20.0 | 5.0 | 7.0 | 9.0
N-Functions | 2.0 | 4.0 | 5.0 | 20.0 | 8.0 | 11.0 | 4.0
N-Derivatives | 4.0 | 8.0 | 7.0 | 20.0 | 6.0 | 6.0 | 7.0
Total Functions | 8.0 | 6.0 | 10.0 | 40.0 | 12.0 | 8.0 | 8.0
Local-COC | 7.013 | 7.004 | 6.576 | 1.906 | 6.657 | 6.918 | 6.056
Table 9. Residual error of the multiplicative and classical root-finding methods for solving engineering application (63).
Err SR 1 [ ] SR 1 [ ] SR 2 [ ] SR 3 [ ] SR 4 [ ] SR 5 [ ] SR 6 [ ]
TCR-I 8.7 × 10 25 4.1 × 10 19 9.1 × 10 12 1.1 × 10 2 8.6 × 10 11 7.7 × 10 10 1.9 × 10 15
TCR-II 0.9 × 10 39 0.2 × 10 24 0.5 × 10 19 0.8 × 10 3 3.0 × 10 14 0.7 × 10 13 1.5 × 10 20
It-n 3.0 4.0 5.0 20.0 6.0 20.0 6.0
CPU-T 0.0004584 0.001976 0.0009023 0.0087711 0.00049873 0.0091943 0.0079143
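The CPU-T rows record elapsed time for the iteration loop alone. A minimal sketch of that measurement with `time.perf_counter` (my assumption: classical Newton on an illustrative cubic; absolute timings vary by machine and are only comparable within one table):

```python
import time

def f(x):
    # Illustrative cubic; not one of the paper's engineering applications.
    return x**3 - 2.0 * x - 5.0

def fp(x):
    return 3.0 * x**2 - 2.0

# Time only the root-finding loop, excluding setup, as in the CPU-T rows.
t0 = time.perf_counter()
x = 2.5
for _ in range(5):
    x = x - f(x) / fp(x)
elapsed = time.perf_counter() - t0
print(f"CPU-T: {elapsed:.7f} s, residual {abs(f(x)):.3e}")
```

`time.perf_counter` is monotonic and has the highest resolution of the standard-library clocks, which matters when a handful of iterations completes in well under a millisecond.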
Table 10. Percentage convergence of the schemes for (65).

| Schemes | Iterations | Per-Convergence | Per-Divergence | Elapsed Time |
|---|---|---|---|---|
| SR1[ ] | 4.0 | 75.8686835% | 0.0% | 1.1766526 |
| SR1[ ] | 5.0 | 20.5784798% | 0.0% | 1.1978673 |
| SR2[ ] | 6.0 | 25.9756697% | 0.0% | 4.0655675 |
| SR3[ ] | 6.0 | 37.0168978% | 0.0% | 2.1244534 |
| SR4[ ] | 7.0 | 30.979757% | 0.0% | 4.7686774 |
| SR5[ ] | 8.0 | 47.9806678% | 0.0% | 2.9888467 |
| SR6[ ] | 20.0 | 0.0% | 100% | 5.5674745 |
Table 11. Efficiency analysis of the numerical schemes for (65).

| Err | SR1[ ] | SR1[ ] | SR2[ ] | SR3[ ] | SR4[ ] | SR5[ ] | SR6[ ] |
|---|---|---|---|---|---|---|---|
| A-Iterations | 3.0 | 3.0 | 6.0 | 5.0 | 5.0 | 7.0 | 20.0 |
| N-Functions | 5.0 | 4.0 | 5.0 | 3.0 | 6.0 | 4.0 | 20.0 |
| N-Derivatives | 6.0 | 5.0 | 5.0 | 4.0 | 6.0 | 4.0 | 23.0 |
| Total Functions | 8.0 | 7.0 | 10.0 | 9.0 | 11.0 | 8.0 | 40.0 |
| Local-COC | 7.309 | 6.987 | 6.076 | 6.875 | 5.237 | 6.017 | 2.879 |
Table 12. Residual error of the multiplicative and classical root-finding methods for solving engineering application (65).

| Err | SR1[ ] | SR1[ ] | SR2[ ] | SR3[ ] | SR4[ ] | SR5[ ] | SR6[ ] |
|---|---|---|---|---|---|---|---|
| TCR-I | 0.3 × 10^-11 | 7.1 × 10^-5 | 8.1 × 10^-3 | 0.6 × 10^-10 | 0.1 × 10^-11 | 1.7 × 10^-7 | 9.2 × 10^-2 |
| TCR-II | 8.9 × 10^-31 | 3.0 × 10^-14 | 7.7 × 10^-15 | 5.4 × 10^-13 | 1.5 × 10^-19 | 0.1 × 10^-15 | 8.9 × 10^-5 |
| It-n | 3.0 | 4.0 | 5.0 | 5.0 | 6.0 | 7.0 | 20.0 |
| CPU-T | 0.0002346 | 0.0036345 | 0.008753 | 0.097911 | 0.0047788 | 0.0046446 | 0.003985 |
Table 13. Percentage convergence of the schemes for (67).

| Schemes | Iterations | Per-Convergence | Per-Divergence | Elapsed Time |
|---|---|---|---|---|
| SR1[ ] | 2.0 | 85.9868903456% | 0.0% | 1.1528677 |
| SR1[ ] | 3.0 | 84.0894567845% | 0.0% | 0.1678757 |
| SR2[ ] | 4.0 | 37.08783457% | 0.0% | 4.8907655 |
| SR3[ ] | 4.0 | 0.0% | 100% | 5.7567926 |
| SR4[ ] | 7.0 | 11.98907% | 89.546% | 4.2567587 |
| SR5[ ] | 8.0 | 39.97857846% | 0.0% | 6.5755566 |
| SR6[ ] | 20.0 | 25.04524786% | 0.0% | 7.5645741 |
Table 14. Efficiency analysis of the numerical schemes for (67).

| Err | SR1[ ] | SR1[ ] | SR2[ ] | SR3[ ] | SR4[ ] | SR5[ ] | SR6[ ] |
|---|---|---|---|---|---|---|---|
| A-Iterations | 2.0 | 3.0 | 4.0 | 20.0 | 5.0 | 3.0 | 20.0 |
| N-Functions | 5.0 | 6.0 | 9.0 | 25.0 | 6.0 | 8.0 | 21.0 |
| N-Derivatives | 3.0 | 5.0 | 7.0 | 23.0 | 8.0 | 6.0 | 27.0 |
| Total Functions | 7.0 | 6.0 | 9.0 | 40.0 | 9.0 | 4.0 | 44.0 |
| Local-COC | 7.045 | 6.91 | 6.23 | 1.11 | 6.23 | 6.11 | 5.897 |
Table 15. Residual error of the multiplicative and classical root-finding methods for solving engineering application (67).

| Err | SR1[ ] | SR1[ ] | SR2[ ] | SR3[ ] | SR4[ ] | SR5[ ] | SR6[ ] |
|---|---|---|---|---|---|---|---|
| TCR-I | 0.5 × 10^-21 | 0.1 × 10^-19 | 0.1 × 10^-10 | 1.5 × 10^-2 | 6.7 × 10^-16 | 1.1 × 10^-10 | 9.9 × 10^-1 |
| TCR-II | 4.9 × 10^-39 | 3.2 × 10^-24 | 1.5 × 10^-17 | 0.7 × 10^-3 | 6.6 × 10^-19 | 2.7 × 10^-15 | 1.9 × 10^-3 |
| It-n | 3.0 | 4.0 | 5.0 | 20.0 | 6.0 | 8.0 | 20.0 |
| CPU-T | 0.0125453 | 0.0256475 | 0.0453635 | 0.0436646 | 0.0498976 | 0.0246866 | 0.0576865 |