Article

Time-Series Modeling Based on a Modified Volterra Neural Network

Department of Computer and Communication, Shu-Te University, Kaohsiung 824, Taiwan
Electronics 2026, 15(10), 2086; https://doi.org/10.3390/electronics15102086
Submission received: 1 April 2026 / Revised: 10 May 2026 / Accepted: 12 May 2026 / Published: 13 May 2026
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications, 4th Edition)

Abstract

This paper proposes a novel neural network model that integrates a modified Volterra digital filter with a feedforward neural network for time-series modeling. In the proposed architecture, all input signals in the conventional Volterra filter are replaced by corresponding output signals, since time-series problems typically consist of observable output sequences over time without explicit external inputs. These output signals, together with their cross-product terms, are constructed as input vectors for the feedforward neural network. To optimize the network parameters, including weights and thresholds, the well-known particle swarm optimization (PSO) algorithm is employed. Based on the proposed PSO-trained neural network model, two types of time series are investigated: chaotic time series and financial time series involving exchange rates. For each case, multiple independent runs with different initial conditions are conducted to ensure the robustness of the proposed method. Furthermore, the effects of varying filter orders and population sizes on modeling performance are also examined.

1. Introduction

To successfully solve most system-modeling problems, three steps are typically completed: (i) collection of input–output pairs of the system, (ii) construction of a suitable mathematical model, and (iii) development of a parameter adjustment mechanism. First, an appropriate mathematical model is established. The input signal, identical to the collected system input, is then applied to the model to generate the corresponding output. For system modeling, it is necessary to compute the error signals between the model outputs and the actual system outputs. Subsequently, a parameter adjustment mechanism is employed to update the model parameters so as to minimize these errors. In other words, the model output is expected to approximate the actual system output as closely as possible through iterative optimization. Once the mathematical model has been successfully trained, it can produce outputs identical or very close to those of the original system when the same input signal is applied.
Neural network models are among the most commonly used mathematical tools for system modeling due to their strong learning capability and the large number of weights connecting neurons across different layers. Recently, extensive studies on and applications of neural network models have been widely reported in the literature [1,2,3,4,5,6,7,8,9,10,11]. In [1], for example, the author proposed a novel robust model predictive control scheme using neural networks. In the proposed structure, the neural network model is employed both to construct the nominal plant model and to capture plant uncertainties. A pneumatic servomechanism example was successfully tested and validated using the proposed neural-network-based predictive control scheme. In [3], an artificial neural network model was developed based on data recorded by wave-monitoring instrumentation for assessing wave energy potential at a given site. The simulation results showed that the neural network can effectively utilize the measured data and is useful for characterizing the wave energy resources of coastal regions. Furthermore, a hybrid model combining an artificial neural network and a Bayesian network was developed for liquidity risk assessment in banking, and a real-world case study was conducted to demonstrate its applicability and effectiveness [5]. In our previous work [7], a modeling scheme based on a Volterra system combined with a neural network was proposed for nonlinear discrete-time dynamic systems. To evaluate modeling performance, a discrete bilinear system and a nonlinear time-varying discrete system were considered. All numerical results demonstrated that the proposed method outperformed the standard back-propagation algorithm.
To address system-modeling or time-series-modeling problems, another key factor that must be considered is the selection of an appropriate updating mechanism for tuning the adjustable parameters within the developed model. In this study, the well-known particle swarm optimization (PSO) algorithm is adopted [12]. There are two advantages related to the PSO algorithm: real-valued representation and simple updating rules. It has been demonstrated to be an effective and efficient method for solving various engineering optimization problems, such as optimization of the stacking sequence of laminated composite materials [13], wireless network optimization in vertical handover scenarios [14], geophysical inverse problems [15], capacitated vehicle-routing problems [16], and heliostat field layout optimization [17], among others [18,19,20,21,22]. The effectiveness of the PSO algorithm has been validated through both numerical simulations and practical implementation.
This paper mainly focuses on time-series modeling. It is well known that a time series consists solely of a sequence of observable output data indexed according to time. Consequently, the structure of the corresponding mathematical model differs from that used in general system modeling, as no explicit input signals are involved. In the classical Volterra digital structure, the output is determined by the current input, past inputs, and their cross-product terms [23,24,25,26,27]. However, for time-series modeling, a modified Volterra digital filter is developed in which all input signals are replaced by the corresponding output signals. In addition, to enhance modeling capability, the proposed structure is further combined with a feedforward neural network, where the input vector consists of past output signals and their cross-product terms. Detailed descriptions are provided in the subsequent section.
The remainder of this paper is organized as follows. Section 2 presents the three model structures, including the classical Volterra digital filter, its modified version, and their integration with a feedforward neural network. Section 3 describes the PSO algorithm in detail and presents the complete design procedure for time-series modeling based on the proposed PSO-tuned neural network model. Section 4 considers two time-series modeling problems: chaotic time series and financial exchange rate series. Various experiments are conducted, including analyses of initial conditions, system orders, and population sizes. Finally, Section 5 presents conclusions and discusses possible future research directions.

2. The Proposed Neural Network Model

2.1. Classical Volterra Digital Filter

Digital filters generally fall into two main categories: finite impulse response (FIR) and infinite impulse response (IIR) filters. The output of an FIR filter depends only on the present and past input signals; as a result, such a filter is always stable regardless of how its coefficients are assigned. Conversely, in the IIR structure, the output signal is determined not only by the present and past inputs but also by past output signals, so the stability of an IIR filter must be guaranteed by carefully choosing a suitable set of filter coefficients. The classical Volterra digital filter is a nonlinear extension of the FIR filter, and it can be expressed via the following difference equation [23,26]:
y[n] = h[0] + \sum_{k=1}^{N} h[k] x[n-k+1] + \sum_{k_1=0}^{N-1} \sum_{k_2=k_1}^{N-1} h[k_1,k_2] x[n-k_1] x[n-k_2]
     = h[0] + h[1] x[n] + h[2] x[n-1] + \cdots + h[N] x[n-N+1] + h[0,0] x^2[n] + h[0,1] x[n] x[n-1] + \cdots + h[0,N-1] x[n] x[n-N+1] + \cdots + h[N-1,N-1] x^2[n-N+1],   (1)
where x is the input signal; y is the output signal; N is the number of past inputs required, i.e., the system's memory depth; h[k] is the filter coefficient associated with the present and past inputs; and h[k_1, k_2] is the quadratic coefficient associated with the cross-product terms of the present and past inputs. It is clear from Equation (1) that the Volterra digital filter is a nonlinear model and that its output is influenced only by the input signals, not by past output signals.
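To make Equation (1) concrete, the following minimal sketch (the helper name `volterra_output` and the coefficient layout are ours, not from the paper) evaluates one Volterra filter output from the current and past inputs:

```python
import numpy as np

def volterra_output(h0, h_lin, h_quad, x_recent):
    """Evaluate Equation (1) for a single time step.

    h0       -- constant coefficient h[0]
    h_lin    -- linear coefficients h[1], ..., h[N]
    h_quad   -- dict mapping (k1, k2) with k1 <= k2 to h[k1, k2]
    x_recent -- inputs ordered x[n], x[n-1], ..., x[n-N+1] (length N)
    """
    y = h0 + np.dot(h_lin, x_recent)            # constant + linear terms
    for (k1, k2), h in h_quad.items():          # quadratic cross-products
        y += h * x_recent[k1] * x_recent[k2]
    return float(y)
```

Note that the output depends only on the stored inputs, which is why the classical structure is always stable for any bounded coefficient choice.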

2.2. Modified Volterra Digital Filter

It is well known that a time series is a collection of output data indexed by time and involves no input signals. The classical Volterra digital model described above is therefore unsuitable for time-series modeling. Hence, based on the structure of Equation (1), a modified Volterra digital filter is proposed in this paper, expressed as follows:
y[n] = h[0] + \sum_{k=1}^{N} h[k] y[n-k] + \sum_{k_1=1}^{N} \sum_{k_2=k_1}^{N} h[k_1,k_2] y[n-k_1] y[n-k_2]
     = h[0] \cdot 1 + h[1] y[n-1] + h[2] y[n-2] + \cdots + h[N] y[n-N] + h[1,1] y^2[n-1] + h[1,2] y[n-1] y[n-2] + \cdots + h[1,N] y[n-1] y[n-N] + \cdots + h[N,N] y^2[n-N],   (2)
where the past output signals completely substitute for the input signals of Equation (1). Such a modified Volterra version can also be regarded as a nonlinear autoregressive model in which the cross-product terms of past output signals are involved. In Equation (2), after a simple calculation, the number of filter coefficients can be obtained as follows:
L = (N + 1)(N + 2) / 2.   (3)
Moreover, in order to combine this model with the feedforward neural network, we collect all past outputs—and their cross-product terms—from Equation (2) as follows:
X = [x_1, x_2, \ldots, x_L] = [1, y[n-1], \ldots, y[n-N], y^2[n-1], y[n-1] y[n-2], \ldots, y^2[n-N]],   (4)
where X represents the input vector for the neural network model.
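As a concrete sketch (the helper name `volterra_input` is ours, not from the paper), the vector of Equation (4) can be assembled from the N most recent outputs:

```python
import numpy as np

def volterra_input(y_past):
    """Build the input vector of Equation (4).

    y_past -- the N most recent outputs, ordered y[n-1], ..., y[n-N].
    Returns [1, linear terms, all quadratic cross-products], a vector of
    length L = (N+1)(N+2)/2, matching Equation (3).
    """
    N = len(y_past)
    X = [1.0]                       # constant term multiplying h[0]
    X.extend(y_past)                # linear terms y[n-1], ..., y[n-N]
    for k1 in range(N):             # quadratic terms, k1 <= k2 only
        for k2 in range(k1, N):
            X.append(y_past[k1] * y_past[k2])
    return np.array(X)
```

For N = 5 this yields a vector of length L = 21, consistent with the simulations in Section 4.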

2.3. Modified Volterra Neural Network Model

To enhance the modeling capacity for the time-series problem, a modified Volterra neural network model is developed in this study, where the vector X of Equation (4) is used as the input vector of a feedforward neural network. The proposed model, denoted FNN_{L-M-1}, is schematically shown in Figure 1, where L is the number of input neurons, as given by Equation (3); M is the number of hidden neurons, assigned by the designer, with a single output neuron; x_i is the network input signal given by Equation (4); w_xh_{ij} is the weight connecting the i-th input neuron to the j-th hidden neuron; w_hy_j is the weight from the j-th hidden neuron to the single output neuron, for i = 1, 2, ..., L and j = 1, 2, ..., M; and y_m is the final model output of the feedforward neural network.
In the hidden layer, the mathematical operation for each neuron can be described as follows:
net_h_j = \sum_{i=1}^{L} w_xh_{ij} x_i - \theta_h_j,   (5)
h_j = f(net_h_j) = (e^{net_h_j} - e^{-net_h_j}) / (e^{net_h_j} + e^{-net_h_j}),   (6)
where net_h_j, θ_h_j, and h_j for j = 1, 2, ..., M represent the internal state, threshold, and output signal of the j-th hidden neuron, respectively, and f(·) is the so-called activation function; here, the hyperbolic tangent function is employed. In addition, only one output neuron is utilized in the proposed model, and its mathematical operations are described by Equations (7) and (8):
net_y = \sum_{j=1}^{M} w_hy_j h_j - \theta_y,   (7)
y_m = f(net_y) = net_y,   (8)
where net_y and θ_y represent the internal state and threshold of the output neuron, respectively, and y_m is the final output of the proposed model FNN_{L-M-1}, obtained via the linear activation function of Equation (8). For time-series modeling, the model output y_m is expected to approximate the actual series output y_series at every instant by adjusting all the weights and thresholds within the neural network model. Under the FNN_{L-M-1} architecture, the total number of adjustable parameters can easily be calculated:
T = L × M + M + M + 1 = M(L + 2) + 1.   (9)
Clearly, the total T is much larger than the L of Equation (3) for the modified Volterra digital filter alone. In general, adding adjustable parameters to a model increases its modeling capacity, and this is the main advantage of combining the modified Volterra digital filter with the feedforward neural network. Furthermore, the well-established PSO algorithm is adopted to design the model parameters in this study. Thus, we set
Θ = [θ_1, θ_2, \ldots, θ_T] = [w_xh_{11}, \ldots, w_xh_{1M}, w_xh_{21}, \ldots, w_xh_{2M}, \ldots, w_xh_{L1}, \ldots, w_xh_{LM}, θ_h_1, \ldots, θ_h_M, w_hy_1, \ldots, w_hy_M, θ_y],   (10)
where Θ is the so-called particle of the algorithm, consisting of all the model parameters of FNN_{L-M-1}. A collection of such particles forms a population, and evolving mechanisms are employed to achieve optimization.
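Putting Equations (5)-(10) together, the following minimal sketch shows how a particle Θ can be unpacked and propagated through the L-M-1 network (the helper names and the row-wise packing order are our reading of Equation (10)):

```python
import numpy as np

def unpack_particle(theta, L, M):
    """Split a particle (Equation (10)) into the network parameters."""
    W_xh = theta[:L * M].reshape(L, M)        # input-to-hidden weights
    th_h = theta[L * M:L * M + M]             # hidden thresholds
    w_hy = theta[L * M + M:L * M + 2 * M]     # hidden-to-output weights
    th_y = theta[-1]                          # output threshold
    return W_xh, th_h, w_hy, th_y

def fnn_forward(X, theta, L, M):
    """Model output y_m for input vector X, Equations (5)-(8)."""
    W_xh, th_h, w_hy, th_y = unpack_particle(theta, L, M)
    h = np.tanh(X @ W_xh - th_h)              # Equations (5) and (6)
    return float(h @ w_hy - th_y)             # Equations (7) and (8), linear output
```

The particle length is exactly T = M(L + 2) + 1 of Equation (9): L·M input weights, M hidden thresholds, M output weights, and one output threshold.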

3. Design Steps for Time-Series Modeling

The PSO algorithm was proposed by Kennedy and Eberhart in 1995 [12], and it is a popular and effective algorithm for solving many kinds of optimization problems. In the PSO algorithm, two important factors must be considered: the individual best, pbest, of each particle and the global best, gbest, of the whole population. pbest is the best position found by each particle from the first iteration to the present, and gbest is the best particle of the whole population up to the present iteration. These two factors guide the movement of all the particles toward the optimal or a near-optimal solution. Only two updating formulas are used to achieve optimization: the velocity-updating formula and the position-updating formula, given by Equations (11) and (12), respectively, for i = 1, 2, ..., PS and j = 1, 2, ..., T, where PS represents the population size:
v_{ij}(k+1) = w v_{ij}(k) + c_1 r_1 (pbest_{ij}(k) - θ_{ij}(k)) + c_2 r_2 (gbest_j(k) - θ_{ij}(k)),   (11)
θ_{ij}(k+1) = θ_{ij}(k) + v_{ij}(k+1),   (12)
where θ_{ij}(k), pbest_{ij}(k), and gbest_j(k) are the j-th position components of the i-th particle, the i-th individual best, and the global best, respectively, at the k-th iteration; v_{ij}(k+1) and θ_{ij}(k+1) are the new j-th velocity and position components of the i-th particle; w is the inertia weight, used to balance global and local search; c_1 and c_2 are two positive constants set by the designer; and r_1 and r_2 are two random numbers drawn uniformly from the interval [0, 1].
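A vectorized sketch of Equations (11) and (12) follows; the array shapes and the convention of drawing fresh r_1, r_2 per component are our assumptions, and the default w, c_1, c_2 match the values used later in Section 4:

```python
import numpy as np

def pso_step(pos, vel, pbest, gbest, w=0.8, c1=1.2, c2=1.2, rng=None):
    """One PSO velocity/position update, Equations (11) and (12).

    pos, vel, pbest -- (PS, T) arrays; gbest -- (T,) array.
    """
    rng = rng if rng is not None else np.random.default_rng()
    r1 = rng.random(pos.shape)      # uniform random numbers in [0, 1)
    r2 = rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)  # (11)
    return pos + vel, vel           # (12)
```

When a particle sits exactly at both pbest and gbest, the attraction terms vanish and only the inertia term w·v remains, which is why w < 1 gradually damps the swarm's motion.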
On the other hand, for time-series modeling, an objective (cost) function must be defined in advance to evaluate the performance of each particle. Here, the simple mean square error (MSE) of Equation (13) is chosen:
MSE = (1/NS) \sum_{n=1}^{NS} e^2[n] = (1/NS) \sum_{n=1}^{NS} (y_series[n] - y_m[n])^2,   (13)
where NS is the number of time-series data points used for modeling, y_series[n] is the n-th actual sample of the time series, and y_m[n] is the n-th output sample of the modified Volterra neural network model described in Equation (8), for n = 1, 2, ..., NS. The MSE is to be minimized via PSO tuning; that is, the model output y_m[n] approaches the actual output y_series[n] at every instant n, thereby accomplishing time-series modeling. As a summary, Figure 2 displays a design flowchart for time-series modeling based on the modified neural network model with PSO tuning, and the corresponding design steps are listed below.
  • Required Data: a modified Volterra neural network model (as described previously), a sequence of known time-series data y_series[n], the inertia weight w and positive constants c_1 and c_2 of Equation (11), the total number of time-series data NS of Equation (13), the number of particles (population size) PS, and the allowable number of iterations G.
  • Goal: The proposed modified Volterra neural network must successfully model a sequence of time-series data using PSO algorithm tuning.
  • Step 1. Generate an initial population of PS particles whose components are drawn randomly from the interval [-1, 1].
  • Step 2. Check the number of algorithm iterations. If the number is greater than G, then the algorithm must stop; otherwise, Step 3 is performed.
  • Step 3. Evaluate the MSE of each particle using Equation (13), and based on the derived value, record the individual best, pbest, for each particle and the global best, gbest, for the whole population.
  • Step 4. For each particle, employ the velocity- and position-updating formulas using Equations (11) and (12), respectively, to obtain new particle positions.
  • Step 5. Go back to Step 2.
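The steps above can be sketched end-to-end as follows. This is a simplified, self-contained illustration under our own helper names; the paper does not state whether fitness is evaluated in one-step-ahead or free-run mode, so one-step-ahead prediction (each y_m[n] computed from the actual past samples y[n-1], ..., y[n-N]) is assumed here:

```python
import numpy as np

def train_volterra_fnn(series, N=5, M=10, PS=20, G=2000,
                       w=0.8, c1=1.2, c2=1.2, seed=0):
    """PSO-tuned modified Volterra neural network, Steps 1-5."""
    rng = np.random.default_rng(seed)
    series = np.asarray(series, dtype=float)
    L = (N + 1) * (N + 2) // 2            # Equation (3)
    T = M * (L + 2) + 1                   # Equation (9)

    # Build the input matrix once: row n holds Equation (4) for sample n.
    # The first N samples serve only as initial history.
    rows = []
    for n in range(N, len(series)):
        past = series[n - N:n][::-1]      # y[n-1], ..., y[n-N]
        quad = [past[i] * past[j] for i in range(N) for j in range(i, N)]
        rows.append(np.concatenate(([1.0], past, quad)))
    X, target = np.array(rows), series[N:]

    def fitness(theta):
        # Unpack the particle (Equation (10)), apply Equations (5)-(8),
        # and score it with the MSE of Equation (13).
        W_xh = theta[:L * M].reshape(L, M)
        th_h = theta[L * M:L * M + M]
        w_hy = theta[L * M + M:L * M + 2 * M]
        th_y = theta[-1]
        y_m = np.tanh(X @ W_xh - th_h) @ w_hy - th_y
        return np.mean((target - y_m) ** 2)

    pos = rng.uniform(-1.0, 1.0, (PS, T))         # Step 1
    vel = np.zeros((PS, T))
    pbest, pbest_mse = pos.copy(), np.full(PS, np.inf)
    for _ in range(G):                            # Steps 2 and 5
        mse = np.array([fitness(p) for p in pos]) # Step 3
        better = mse < pbest_mse
        pbest[better], pbest_mse[better] = pos[better], mse[better]
        gbest = pbest[np.argmin(pbest_mse)]
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel                           # Step 4, Eqs. (11)-(12)
    return pbest[np.argmin(pbest_mse)], float(pbest_mse.min())
```

The default arguments mirror the fixed settings reported in Section 4 (M = 10, w = 0.8, c_1 = c_2 = 1.2, G = 2000); smaller values are advisable for a quick sanity check.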

4. Some Simulations

To demonstrate the applicability and effectiveness of the developed neural network model with PSO tuning, two different kinds of time series were simulated: a chaotic series and a financial exchange rate series. In the following, the fixed settings of the algorithm are: the number of hidden neurons M = 10, the inertia weight w = 0.8, the positive constants c_1 = 1.2 and c_2 = 1.2 in Equation (11), and the allowable number of iterations G = 2000. The first experiment focuses on the influence of the system order N on modeling performance, and the second examines the influence of the population size of the algorithm. For each case, 20 independent runs with different sets of initial conditions were performed to verify the robustness of the proposed scheme.

4.1. Chaotic Series Modeling

In this experiment, a simple chaotic system named the logistic map was employed to generate a sequence of chaotic data. The dynamic equation can be expressed using Equation (14) [28]:
y[n] = μ y[n-1] (1 - y[n-1]),   (14)
where 0 ≤ μ ≤ 4 is the control parameter of the chaotic system. When μ = 4 and the initial value y[0] ∉ {0, 0.25, 0.5, 0.75, 1}, the equation is deterministic and displays chaotic dynamics. Furthermore, the output signal y[n] remains in the range (0, 1) provided that y[0] ∈ (0, 1). For the simulations, y[0] = 0.8, and a sequence of 100 chaotic data points, i.e., NS = 100, was generated for modeling. Two cases were simulated: N = 5 and N = 7. From Equations (3) and (9), L = 21 and L = 36, and further T = 231 and T = 381, respectively; the case N = 7 therefore has many more adjustable model parameters than the case N = 5, and modeling performance generally improves with more model parameters. The population size was set to PS = 20. After the proposed design method was applied, the simulation results were plotted in Figure 3 and Figure 4 and listed in Table 1. Table 1 provides a numerical comparison of 20 different runs for the two cases N = 5 and N = 7: the best modeling solution occurs in Run 19 for N = 5 and in Run 17 for N = 7, with MSE values of 2.733 × 10−3 and 9.978 × 10−4, respectively. Furthermore, the means over the 20 runs are 3.697 × 10−3 for N = 5 and 2.407 × 10−3 for N = 7. These results clearly show that modeling performance improves when more adjustable parameters are available. Figure 3a and Figure 4a compare the real chaotic series output (dotted line) and the model series output (solid line) with respect to time n for the best runs for N = 5 and N = 7, respectively. In addition, the R-squared (R2) performance index was determined for the two cases: R2 = 0.978860556 and R2 = 0.992283987, respectively.
The corresponding error signals with respect to time step n are plotted in Figure 3b and Figure 4b. As Figure 3a and Figure 4a show, the real and model output curves are almost identical. These results confirm the feasibility and effectiveness of the proposed design scheme.
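The chaotic training data can be reproduced directly from Equation (14); a minimal sketch (the function name is ours):

```python
def logistic_series(y0=0.8, mu=4.0, NS=100):
    """Generate NS samples of the logistic map, Equation (14)."""
    y = [y0]
    for _ in range(NS - 1):
        y.append(mu * y[-1] * (1.0 - y[-1]))
    return y
```

With y[0] = 0.8 and μ = 4, the first iterates are 0.64, 0.9216, and so on, and the samples stay within (0, 1) as noted above.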

4.2. Financial Exchange Rate Series Modeling

The second experiment involved modeling a sequence of real financial exchange rate data between the Japanese and Taiwanese currencies. The exchange rate series for the entire year of 2016 was collected, comprising 247 daily data points (taken from the Bank of Taiwan website: http://rate.bot.com.tw/). In this experiment, the influence of population size on modeling performance was examined by setting PS = 20 and PS = 30, with the system order fixed at N = 5. As in the previous experiment, 20 independent runs with different initial conditions were performed for each case to verify robustness. The numerical results obtained with the proposed method are listed in Table 2, where the mean MSE values are 8.808 × 10−6 for PS = 20 and 6.581 × 10−6 for PS = 30. These results show that the modeling performance for PS = 30 is slightly superior to that for PS = 20. The best solution among the 20 runs occurs in Run 12 for PS = 20 and in Run 16 for PS = 30, with MSE values of 5.12 × 10−6 and 4.92 × 10−6, respectively. Figure 5a compares the real exchange rate output and the model output over the whole of 2016 for Run 12 with PS = 20; the R2 index in this case is 0.964350819, and the two curves are clearly very close. The best modeling result for Run 16 with PS = 30 is shown in Figure 6a, with R2 = 0.965712804. Figure 5b and Figure 6b show the corresponding error signal curves. It can be concluded that the proposed scheme achieves a satisfactory outcome in modeling the exchange rate between the two currencies.

5. Conclusions

In this study, a novel modeling architecture for time-series data was developed. The proposed scheme modifies the classical Volterra digital filter to meet the requirements of time-series modeling. Based on this modified structure, a feedforward neural network model was constructed, where the input vector consists of past output values and their cross-product terms. This formulation introduces additional adjustable parameters, thereby enhancing modeling capability. An effective particle swarm optimization (PSO) algorithm was then employed to estimate the model parameters so that the network output could approximate the actual time-series output as closely as possible. In addition, the complete design procedure and a schematic flowchart were provided. To verify the applicability of the proposed method, two types of time-series modeling problems were considered: chaotic time series and financial exchange rate series. For each case, multiple independent runs were conducted to evaluate the robustness of the algorithm. Moreover, the effects of system order and population size on modeling performance were investigated. It can be concluded from the simulation results that the proposed neural network model combined with PSO tuning is effective for time-series modeling problems.

Funding

This work was not funded by any grants or external funding sources.

Institutional Review Board Statement

This work did not involve any studies involving human participants or animals.

Data Availability Statement

All numerical data in this paper can be generated directly from the considered system equations using the proposed methods and the cited works, implemented in the C programming language.

Conflicts of Interest

The author declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Patan, K. Two stage neural network modeling for robust model predictive control. ISA Trans. 2018, 72, 56–65. [Google Scholar] [CrossRef] [PubMed]
  2. Kim, K.K.; Patron, E.R.; Braatz, R.D. Standard representation and unified stability analysis for dynamic artificial neural network models. Neural Netw. 2018, 98, 251–262. [Google Scholar] [CrossRef] [PubMed]
  3. Sanchez, A.S.; Rodrigues, D.A.; Fontes, R.M.; Martins, M.F.; Kalid, R.d.A.; Torres, E.A. Wave resource characterization through in-situ measurement followed by artificial neural networks’ modeling. Renew. Energy 2018, 115, 1055–1066. [Google Scholar] [CrossRef]
  4. Oliveira, J.J. Global exponential stability of nonautonomous neural network models with unbounded delays. Neural Netw. 2017, 96, 71–79. [Google Scholar] [CrossRef]
  5. Tavana, M.; Abtahi, A.R.; Debora, D.C.; Poortarigh, M. An artificial neural network and Bayesian network model for liquidity risk assessment in banking. Neurocomputing 2018, 275, 2525–2554. [Google Scholar] [CrossRef]
  6. Wu, Y.C.; Yin, F.; Liu, C.L. Improving handwritten Chinese text recognition using neural network language models and convolutional neural network shape models. Pattern Recognit. 2017, 65, 251–264. [Google Scholar] [CrossRef]
  7. Yang, Y.S.; Chang, W.D.; Liao, T.L. Volterra system-based neural network modeling by particle swarm optimization approach. Neurocomputing 2012, 82, 179–185. [Google Scholar] [CrossRef]
  8. Sorrosal, G.; Irigoyen, E.; Borges, C.E.; Martin, C.; Macarulla, A.M.; Alonso-Vicario, A. Artificial neural network modeling of the bioethanol-to-olefins process on a HZSM-5 catalyst treated with alkali. Appl. Soft Comput. 2017, 58, 648–656. [Google Scholar] [CrossRef]
  9. Tian, C.; Ji, Y. Filtering-based three-stage Levenberg–Marquardt iterative identification of dual-rate Volterra-Wiener nonlinear systems with colored noise. ISA Trans. 2026, 171, 17–28. [Google Scholar] [CrossRef]
  10. Deng, T.; Lu, L.; Lei, T.; Chen, B. Fixed-point fully adaptive interpolated Volterra filter under recursive maximum correntropy. Signal Process. 2025, 236, 110055. [Google Scholar] [CrossRef]
  11. Chen, Z.; Zhang, Y.; Zhang, H. Second-order Volterra Adaptive Filtering Based on Natural Gradient Descent. In 2025 International Conference on New Trends in Computational Intelligence (NTCI); IEEE: Jinan, China, 2025; pp. 385–389. [Google Scholar] [CrossRef]
  12. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of ICNN’95—International Conference on Neural Networks; IEEE: Perth, Australia, 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  13. Javidrad, F.; Nazari, M.; Javidrad, H.R. Optimum stacking sequence design of laminates using a hybrid PSO-SA method. Compos. Struct. 2018, 185, 607–618. [Google Scholar] [CrossRef]
  14. Goudarzi, S.; Hassan, W.H.; Anisi, M.H.; Soleymani, A.; Sookhak, M.; Khan, M.K.; Hashim, A.A.; Zareei, M. ABC-PSO for vertical handover in heterogeneous wireless networks. Neurocomputing 2017, 256, 63–81. [Google Scholar] [CrossRef]
  15. Godio, A.; Santilano, A. On the optimization of electromagnetic geophysical data: Application of the PSO algorithm. J. Appl. Geophys. 2018, 148, 163–174. [Google Scholar] [CrossRef]
  16. Hannan, M.A.; Akhtar, M.; Begum, R.A.; Basri, H.; Hussain, A.; Scavino, E. Capacitated vehicle-routing problem model for scheduled solid waste collection and route optimization using PSO algorithm. Waste Manag. 2018, 71, 31–41. [Google Scholar] [CrossRef]
  17. Li, C.; Zhai, R.; Liu, H.; Yang, Y.; Wu, H. Optimization of a heliostat field layout using hybrid PSO-GA algorithm. Appl. Therm. Eng. 2018, 128, 33–41. [Google Scholar] [CrossRef]
  18. Atashpendar, A.; Dorronsoro, B.; Danoy, G.; Bouvry, P. A scalable parallel cooperative coevolutionary PSO algorithm for multi-objective optimization. J. Parallel Distrib. Comput. 2018, 112, 111–125. [Google Scholar] [CrossRef]
  19. Xu, J.; Guo, C.; Zhang, H. Joint channel allocation and power control based on PSO for cellular networks with D2D communications. Comput. Netw. 2018, 133, 104–119. [Google Scholar] [CrossRef]
  20. Zhang, J.; Xia, P. An improved PSO algorithm for parameter identification of nonlinear dynamic hysteretic models. J. Sound Vib. 2017, 389, 153–167. [Google Scholar] [CrossRef]
  21. Ghodousian, A.; Parvari, M.R. A modified PSO algorithm for linear optimization problem subject to the generalized fuzzy relational inequalities with fuzzy constraints (FRI-FC). Inf. Sci. 2017, 418–419, 317–345. [Google Scholar] [CrossRef]
  22. Pal, D.; Chatterjee, A.; Rakshit, A. Robust-stable quadratic-optimal fuzzy-PDC controllers for systems with parametric uncertainties: A PSO based approach. Eng. Appl. Artif. Intell. 2018, 70, 38–51. [Google Scholar] [CrossRef]
  23. Zhang, J.; Zhao, H. A novel adaptive bilinear filter based on pipelined architecture. Digit. Signal Process. 2010, 20, 23–38. [Google Scholar] [CrossRef]
  24. Contan, C.; Kirei, B.S.; Topa, M.D. Modified NLMF adaptation of Volterra filters used for nonlinear acoustic echo cancellation. Signal Process. 2013, 93, 1152–1161. [Google Scholar] [CrossRef]
  25. Ji, W.; Gan, W.S. Identification of a parametric loudspeaker system using an adaptive Volterra filter. Appl. Acoust. 2012, 73, 1251–1262. [Google Scholar] [CrossRef]
  26. Chang, W.D. Volterra filter modeling of nonlinear discrete-time system using improved particle swarm optimization. Digit. Signal Process. 2012, 22, 1056–1062. [Google Scholar] [CrossRef]
  27. Zhang, J.; Pang, Y. Pipelined robust M-estimate adaptive second-order Volterra filter against impulsive noise. Digit. Signal Process. 2014, 26, 71–80. [Google Scholar] [CrossRef]
  28. Yuan, X.; Cao, B.; Yang, B.; Yuan, Y. Hydrothermal scheduling using chaotic hybrid differential evolution. Energy Convers. Manag. 2008, 49, 3627–3633. [Google Scholar] [CrossRef]
Figure 1. The proposed model FNN_{L-M-1}.
Figure 2. A complete design flowchart for modeling.
Figure 3. (a) Best modeling result for Run 19 for N = 5. (b) Error signal for Run 19 for N = 5.
Figure 4. (a) Best modeling result for Run 17 for N = 7. (b) Error signal for Run 17 for N = 7.
Figure 5. (a) Best modeling result for Run 12 for PS = 20. (b) Error signal for Run 12 for PS = 20.
Figure 6. (a) Best modeling result for Run 16 for PS = 30. (b) Error signal for Run 16 for PS = 30.
Table 1. MSE results for two different cases, N = 5 and N = 7 (* indicates the best solution among 20 different runs).

| Run | N = 5 (L = 21, T = 231) | N = 7 (L = 36, T = 381) |
|---|---|---|
| 1 | 3.204 × 10−3 | 1.972 × 10−3 |
| 2 | 3.742 × 10−3 | 3.432 × 10−3 |
| 3 | 3.967 × 10−3 | 2.167 × 10−3 |
| 4 | 3.489 × 10−3 | 2.788 × 10−3 |
| 5 | 4.011 × 10−3 | 1.701 × 10−3 |
| 6 | 3.923 × 10−3 | 1.573 × 10−3 |
| 7 | 3.181 × 10−3 | 4.883 × 10−3 |
| 8 | 4.503 × 10−3 | 3.706 × 10−3 |
| 9 | 3.18 × 10−3 | 2.456 × 10−3 |
| 10 | 2.815 × 10−3 | 2.32 × 10−3 |
| 11 | 3.414 × 10−3 | 1.151 × 10−3 |
| 12 | 3.326 × 10−3 | 1.537 × 10−3 |
| 13 | 5.015 × 10−3 | 2.662 × 10−3 |
| 14 | 4.802 × 10−3 | 1.736 × 10−3 |
| 15 | 3.787 × 10−3 | 1.677 × 10−3 |
| 16 | 3.669 × 10−3 | 1.541 × 10−3 |
| 17 | 2.961 × 10−3 | 9.978 × 10−4 * |
| 18 | 4.049 × 10−3 | 4.782 × 10−3 |
| 19 | 2.733 × 10−3 * | 2.77 × 10−3 |
| 20 | 4.177 × 10−3 | 2.284 × 10−3 |
| Mean | 3.697 × 10−3 | 2.407 × 10−3 |
| Variance | 3.73981 × 10−7 | 1.11854 × 10−6 |
| Standard deviation | 6.115 × 10−4 | 1.057 × 10−3 |
| 95% confidence interval | [3.429 × 10−3, 3.965 × 10−3] | [1.943 × 10−3, 2.87 × 10−3] |
Table 2. MSE results for two different cases, PS = 20 and PS = 30 (* indicates the best solution among 20 different runs).

| Run | PS = 20 | PS = 30 |
|---|---|---|
| 1 | 1.279 × 10−5 | 7.26 × 10−6 |
| 2 | 1.706 × 10−5 | 5.14 × 10−6 |
| 3 | 6.67 × 10−6 | 6.07 × 10−6 |
| 4 | 5.41 × 10−6 | 6.15 × 10−6 |
| 5 | 7.15 × 10−6 | 6.44 × 10−6 |
| 6 | 7.96 × 10−6 | 6.24 × 10−6 |
| 7 | 5.51 × 10−6 | 5.42 × 10−6 |
| 8 | 9.08 × 10−6 | 6.77 × 10−6 |
| 9 | 8.46 × 10−6 | 6.73 × 10−6 |
| 10 | 6.32 × 10−6 | 7.7 × 10−6 |
| 11 | 1.395 × 10−5 | 9.51 × 10−6 |
| 12 | 5.12 × 10−6 * | 1.034 × 10−5 |
| 13 | 9.13 × 10−6 | 5.72 × 10−6 |
| 14 | 6.56 × 10−6 | 6.15 × 10−6 |
| 15 | 8.31 × 10−6 | 9.24 × 10−6 |
| 16 | 5.34 × 10−6 | 4.92 × 10−6 * |
| 17 | 1.974 × 10−5 | 5.93 × 10−6 |
| 18 | 6.76 × 10−6 | 5.07 × 10−6 |
| 19 | 7.3 × 10−6 | 5.22 × 10−6 |
| 20 | 7.54 × 10−6 | 5.61 × 10−6 |
| Mean | 8.808 × 10−6 | 6.581 × 10−6 |
| Variance | 1.529 × 10−11 | 2.237 × 10−12 |
| Standard deviation | 3.911 × 10−6 | 1.495 × 10−6 |
| 95% confidence interval | [7.094 × 10−6, 1.052 × 10−5] | [5.925 × 10−6, 7.237 × 10−6] |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Chang, W.-D. Time-Series Modeling Based on a Modified Volterra Neural Network. Electronics 2026, 15, 2086. https://doi.org/10.3390/electronics15102086
