Article

Analysis, Forecasting, and System Identification of a Floating Offshore Wind Turbine Using Dynamic Mode Decomposition

CNR-INM, National Research Council-Institute of Marine Engineering, Via di Vallerano, 139, 00128 Rome, Italy
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2025, 13(4), 656; https://doi.org/10.3390/jmse13040656
Submission received: 14 February 2025 / Revised: 16 March 2025 / Accepted: 19 March 2025 / Published: 25 March 2025
(This article belongs to the Special Issue Advances in Offshore Wind and Wave Energies—2nd Edition)

Abstract

This article presents the data-driven, equation-free modeling of the dynamics of a hexafloat floating offshore wind turbine based on the application of dynamic mode decomposition (DMD). DMD is used here (i) to extract knowledge from the dynamic system through its modal analysis, (ii) for short-term forecasting (nowcasting) from the knowledge of the immediate past of the system state, and (iii) for system identification and reduced-order modeling. All the analyses are performed on experimental data collected from an operating prototype. The nowcasting method for the motions, accelerations, and forces acting on the floating system applies Hankel-DMD, a methodological extension that includes time-delayed copies of the states in an augmented state vector. The system identification task is performed using Hankel-DMD with control (Hankel-DMDc), which models the system as externally forced. The influence of the main hyperparameters of the methods is investigated with a full factorial analysis using error metrics that assess complementary aspects of the prediction. A Bayesian extension of the Hankel-DMD and Hankel-DMDc is introduced by considering the hyperparameters as stochastic variables, enriching the predictions with uncertainty quantification. The results show the capability of the approaches for data-lean nowcasting and system identification, with computational costs compatible with real-time applications. Accurate predictions are obtained up to 4 wave encounter periods for nowcasting and 20 wave encounter periods for system identification, suggesting the potential of the methods for real-time, continuous-learning digital twinning and surrogate data-driven reduced-order modeling.

1. Introduction

In the effort to contain the global temperature increase to well below 2 °C above pre-industrial levels, as set by the Paris Agreement [1], most countries have committed to reaching the goal of net zero emissions by 2050, meaning that all their greenhouse gas emissions must be counterbalanced by an equal amount of removals from the atmosphere. Achieving this critical and ambitious goal for sustainable growth requires the decarbonization of our society, which in turn hinges on the decarbonization of energy production [2,3]. The shift from fossil fuels to renewable sources for power production is considered the fundamental step in this process, as power generation is currently responsible for about 30% of global carbon dioxide emissions. In 2018, the European Union (EU) set intermediate targets of 20% of energy obtained from renewable resources by 2020 and 32% by 2030, and the latter has since been raised to 42.5% (with the aspiration of reaching 45%) by amending the Renewable Energy Directive [4]. Reaching these targets means almost doubling the existing share of renewable energy in the EU.
Wind energy technology has been identified, along with photovoltaics, as one of the most promising options for power production from renewable sources. Several growth scenarios predict a prominent role for wind power, exceeding a 35% share of the total electricity demand by 2050 [5], which represents a nearly nine-fold rise in the wind power share of the total generation mix compared to 2016 levels. Offshore wind energy production has a greater growth potential than its onshore counterpart, owing to fewer technical, logistic, and social restrictions. Offshore turbines can exploit abundant and more consistent winds, favored by the reduced friction of the sea surface and the absence of surrounding hills and buildings [6]. In addition, offshore wind farms benefit from greater social acceptance, the lower value of the space they occupy, and the possibility of installing larger turbines with fewer transportation issues than onshore [7,8,9]. The 2023 Global Offshore Wind Report predicts the installation of more than 380 GW of offshore wind capacity worldwide in the next ten years [10].
The exponential growth of the sector depends on the possibility of realizing floating offshore plants, which enable the exploitation of sea areas with water depths (indicatively beyond 60 m) at which fixed-foundation turbines are neither feasible nor affordable. The main advantage of exploiting deep offshore sea areas lies in the abundant and steady winds that characterize them. One of the main factors limiting the reduction of the levelized cost of energy (LCOE) of advanced floating offshore wind turbines (FOWTs) is the current size and cost of their platforms. Their reduction, alongside the development of advanced moorings, improved control systems, and maintenance procedures, is among the most impactful technical goals on which research activities are focusing.
Power production by FOWTs presents additional challenges compared to their fixed wind turbine counterparts (on- or offshore), inherent to the floating nature of the platform, which adds six degrees of freedom to the structure. Nonlinear hydrodynamic loads, wave–current interactions, aero-hydrodynamic coupling producing negative aerodynamic damping, and wind-induced low-frequency rotations are among the main causes of large-amplitude platform motions. These, in turn, reduce the average power output of the plant and increase the fluctuations of the produced power. Both the quantity and the quality of the power production are affected, and the structure and all of its components (blades, cables, bearings, etc.) also suffer increased fatigue-induced wear from nonconstant loadings [11] (about 20% of operation and maintenance costs come from blade failures [12], and almost 70% of gearbox downtime is due to bearing faults [13]). Floating wind turbine operations are considerably altered by the stochastic nature of wind, waves, and currents in the sea environment, which excite platform motion, leading to uncertainties in structural loads and power extraction capability. As first shown in Jonkman [14], waves are responsible for a large part of the dynamic excitation of an FOWT: rotor speed fluctuations are 60% larger when the same turbine is operated in a floating environment rather than onshore, and the difference has been shown to increase with increasingly severe wave conditions. Therefore, it is essential to develop appropriate strategies to improve the platform stability of FOWTs and maximize their turbines' energy conversion rate, achieving a better LCOE through higher power production and lower maintenance and operation costs.
Both passive and active technologies have been studied and developed within this scope, such as tuned mass dampers [15,16,17] mounted on different floaters, ballasted buoyancy cans [18,19,20], gyro-stabilizers [21], and blade pitch and/or torque controllers [7,8,9,22,23]. Developing advanced control systems is a high-potential cost reduction strategy for offshore wind turbines that acts on multiple levels: effective control strategies may increase the energy production, which has a direct impact on the LCOE; a reduction in platform motions may help reduce platform size and cost; and reduced vibratory loads on the turbine's components help increase their lifetime and reduce maintenance costs. A thorough comparison of various controllers designed for managing vibratory loads is provided in Awada et al. [24]. Both feedback [7,8,25,26,27,28,29,30,31] and feedforward [32,33,34,35,36,37,38,39,40,41] control systems have been successfully developed for the effective control of FOWTs. The two philosophies may also be effectively coupled, creating a feedforward control with a feedback loop, such as in Al et al. [42], where real-time prediction of the free-surface elevation was exploited to compensate for the wave disturbances on the FOWT. Feedback algorithms may take advantage of accurate models of the controlled systems for improved performance; at the same time, feedforward controllers rely on forecasting techniques to estimate upcoming disturbances or changes in the state to be controlled. Hence, the predictive algorithm plays a primary role in the success of the control strategy.
Although white- and gray-box models for the FOWT dynamics can be obtained [43,44], the modeling process can be extremely complex, and clearly representing the operational status of the turbine and platform under random meteorological conditions is not trivial. Recently, data-driven methods have proven to be a powerful alternative for the identification of dynamic systems and the forecasting of their response. In particular, advanced machine learning and deep learning algorithms have been successfully applied to predict FOWT motions and loads. Several examples can be found in the literature. A multilayer feedforward neural network was used in Wang et al. [45] to predict maximum blade and tower loadings and maximum mooring line tension using the wind speed, turbulence intensity, significant wave height, and spectral peak period as the model's input parameters. Zhang et al. [46] built a data-driven prediction model for the FOWT output power, the platform pitch angle, and the blades' flapwise moment at the root using wind speed, wave height, and blade pitch control variables as inputs by training a gated recurrent neural network (GRNN). The study in Barooni and Velioglu Sogut [47] adopted a convolutional neural network merged with a GRNN to forecast the dynamic behavior of FOWTs. Long short-term memory (LSTM) networks are the subject of the study in Gräfe et al. [48], where the fairlead tension, surge, and pitch motions were predicted with a data-driven method, including onboard sensor measurements and lidar inflow data. A self-attention mechanism was integrated with LSTM in Deng et al. [49] to improve the accuracy in predicting the motion response of an FOWT in wind–wave coupled environments. The hybridization of LSTM with empirical mode decomposition (EMD) was studied in Ye et al. [50], where the neural network was used to predict the subcomponents of the EMD process for the short-term prediction of the motions of a semi-submersible platform, and in Song et al. [51], where the authors studied FOWT motion response prediction under different sea states.
Albeit powerful, machine learning and deep learning methods typically require large training datasets, covering a range of operating conditions that needs to be as complete as possible for the learned patterns to generalize to new situations. In addition, the training of such algorithms can be computationally expensive and is typically not compatible with real-time learning and digital twinning [52], where the system characteristics and responses to external perturbations also change as the system ages.
Dynamic mode decomposition (DMD) offers an interesting alternative for data-driven and equation-free modeling [53,54,55]. The method is based on the Koopman operator theory, an alternative formulation of the dynamical systems theory that provides a versatile framework for the data-driven study of high-dimensional nonlinear systems [56]. DMD builds a reduced-order linear model of a dynamical system, approximating the Koopman operator. The approach requires no specific knowledge or assumption about the system’s dynamics and can be applied to both empirical and simulated data. The model is obtained with a direct procedure from a small set of multidimensional input–output pairs, which constitutes, from a machine learning perspective, the training phase.
The literature features several methodological extensions of the original DMD algorithm aimed at improving the accuracy of the decomposition and, more generally, broadening the method's capabilities. Particularly relevant to this study are the Hankel-DMD [55,56,57,58,59], the DMD with control (DMDc) [60,61], and their combination in the Hankel-DMD with control (Hankel-DMDc) [62,63]. The Hankel-DMD, also referred to as Augmented-DMD [59], Time Delay Coordinates Extended DMD [62], and Time Delay DMD [64], has proven to be a powerful tool for enhancing the linear model's ability to capture significant features of nonlinear and chaotic dynamics. This is achieved by extending the system state with time-delayed copies, which, in the limit of infinite-time observations, yields the true Koopman eigenfunctions and eigenvalues [56]. For instance, it has been effectively used in Dylewsky et al. and Mohan et al. [64,65] to predict the short-term evolution of electric loads on the grid. The DMDc extends the DMD framework to handle externally forced systems, allowing for the separation of the system's free response from the effects of external inputs. Notable applications include Al-Jiboory [66], where a novel real-time control technique for unmanned aerial quadrotors was developed using DMDc, enabling the control system to adapt promptly to changes in the environment or in the system behavior. Another example is Dawson et al. [67], where DMDc was applied to simulation data to create a reduced-order model (ROM) of the forces acting on a rapidly pitching airfoil. In Brunton et al. [62], the algorithmic variant combining control and state augmentation with time-delayed copies is called Time Delay Coordinates DMDc, and it was introduced using the same number of delayed copies for both the state and the input. Similarly, Zawacki and Abed [63] introduced the DMD with Input-Delayed Control, which, as the name suggests, includes time-delayed copies of the inputs only.
The data-driven nature, noniterative training process, and data efficiency of DMD have contributed to its widespread adoption as a reduced-order modeling technique and real-time forecasting tool in various fields. These include fluid dynamics and aeroacoustics [53,68,69,70,71], epidemiology [72], neuroscience [73], and finance [74], among others. Several studies have demonstrated the effectiveness of DMD in forecasting complex system behaviors in the marine environment, such as Diez et al. [75], where DMD was applied to the forecasting of ship trajectories, motions, and forces. Serani et al. [59] conducted a statistical evaluation of DMD's predictive performance for naval applications, incorporating state augmentation techniques such as augmenting the system state with its derivatives and time-delayed copies. Diez et al. [76] provides a comparative analysis, for naval applications, of DMD-based prediction algorithms and various neural network architectures, including standard and bidirectional long short-term memory networks, gated recurrent units, and feedforward neural networks. Furthermore, Diez et al. [77] studied the hybridization of DMD with artificial neural networks to enhance prediction accuracy.
The objective of this paper is to propose the use of DMD and its variants to extract knowledge, forecast motions and loads, and perform system identification of an FOWT from experimental data. In particular, Hankel-DMD is used as a data-lean forecasting method, producing short-term forecasts (nowcasting) from the immediate past history of the system state, with a continuously learning data-driven reduced-order model suitable for digital twinning and real-time predictions. On the other hand, Hankel-DMDc is applied as an effective and efficient approach to model-free system identification, aiming to create an accurate ROM for the long-term prediction of the platform motions and loads from the knowledge of the wave elevation in the proximity of the platform and the wind speed. In this work, the nowcasting and system identification tasks are performed on experimentally measured data; however, the Hankel-DMD and Hankel-DMDc methods developed here also directly apply to different data sources, such as simulations of various fidelity levels. The effect of the main hyperparameters of the methods on the predictions is studied with a full-factorial design of experiments, assessing the performances using three error metrics and identifying the most promising configurations. In addition, novel Bayesian extensions of Hankel-DMD and Hankel-DMDc are introduced to include uncertainty quantification in the methods' predictions by considering their hyperparameters as stochastic variables. The stochastic hyperparameter variation ranges are identified after the deterministic analyses, and the results from the deterministic and Bayesian methods are compared using the same test sequences.
The two approaches, nowcasting and system identification, are of paramount relevance for developing advanced controllers and digital twins for the FOWT, combining accuracy, adaptivity, and reliability (through uncertainty quantification) with computational costs compatible with real-time execution. The efficiency of the DMD-based methods stems from the small dimensionality of the relevant state variables and the low computational cost required for both the model construction (training) and exploitation (prediction), as opposed to more data- and resource-intensive machine learning methods.
The methods are applied to real-life measured data obtained by various sensors mounted on a scale prototype of a 5 MW Hexafloat FOWT, which is the first of its type. The experimental activity has been conducted as part of the National Research Project RdS-Electrical Energy from the Sea funded by the Italian Ministry for the Environment (MaSE) and coordinated by CNR-INM.
The paper is organized as follows. Section 2 presents the wind turbine test case and details the DMD methods applied; Section 3 introduces the performance metrics used to assess the predictive performance of the algorithms. The numerical setup and the data preprocessing are described in Section 4. Section 5 collects the results from the modal analysis and the forecasting of the quantities of interest obtained with the deterministic Hankel-DMD and its Bayesian version for nowcasting, and finally, conclusions about the conducted analyses are given in Section 6.

2. Material and Methods

2.1. Wind Turbine Test Case

The presented analyses are conducted on a set of experimental data collected on the prototype of an FOWT built and tested at the Interdisciplinary Marine Renewable Energy Sea Lab (In-MaRELab); see Figure 1a.
The tests were conducted offshore of the port of Naples, right in front of the Molo San Vincenzo breakwater. The prototype and the experimental activity at sea are part of the National Research Project RdS-Electrical Energy from the Sea, funded by the MaSE and coordinated by CNR-INM.
The FOWT is a 1:6.8 scale prototype of a 5 MW FOWT and is the first of the Hexafloat concept deployed at sea. The floater is a Saipem-patented lightweight semi-submersible platform consisting of a hexagonal tubular steel structure around a central column and a deeper counterweight connected to the floater by six tendons (one for each corner of the hexagon) made of synthetic material. The FOWT platform features a maximum outer diameter of 13 m, with a nondimensional draught-to-diameter ratio of 0.37.
The floater hosts a Tozzi Nord TN535 10 kW wind turbine [78], originally designed for onshore application and, within the present research project, suitably modified in its electrical part for the specific aims of the offshore application (see Figure 1b). The turbine is characterized by a cut-in wind speed $v_{\text{cut-in}} = 2$ m/s, a rated wind speed $v_{\text{rated}} = 6.7$ m/s, and a cut-out wind speed $v_{\text{cut-out}} = 16$ m/s. Figure 2 shows the power curve of the TN535 10 kW wind turbine.
The FOWT was anchored with three drag anchors located at 30 m of water depth through three mooring lines in catenary configuration, with a rope breaking load of about 3.3 tons, at relative angles of 120 degrees counterclockwise, with $M_1$ directed toward the breakwater and orthogonal to it. The whole structure, i.e., the floater and the turbine, weighs a total of approximately 11 tons.
The studied dataset includes 12-hour synchronized time histories of the following:
  • The loads applied to one of the three moorings of the platform ($M_3$) and three of the six tendons connecting the counterweight to the floater ($T_1$, $T_5$, and $T_6$), as measured by a system of underwater LCM5404 load cells with working load limits (WLL) of 2.3 tons and 4.5 tons.
  • The accelerations along the three coordinate axes ($\dot{u}$, $\dot{v}$, $\dot{w}$), the pitch and roll angles ($\theta$, $\phi$), and the respective angular rates ($\dot{\theta}$, $\dot{\phi}$), collected by a Norwegian Subsea MRU 3000 inertial motion unit.
  • The power extracted by the wind turbine ($P$), estimated by a programmable logic controller (PLC) through a direct measurement of the electrical quantities at the generator on board the nacelle of the wind turbine; the rotor angular velocity ($\Omega$), measured by two sensors in continuous cross-check; and the relative wind speed ($V_w$), measured by two different anemometers positioned on the nacelle, behind the rotor. All signals were collected by the PLC on the nacelle with a variable but well-known sampling frequency of approximately 1 Hz.
  • The wave elevation ($h_w$), measured by a pressure transducer integrated into the Acoustic Doppler Current Profiler (ADCP) Teledyne Marine Sentinel V20, located at a distance of approximately 50 m from the FOWT in the southeast direction.
The observed state of the system is hence composed as $\mathbf{x} = \{T_1, T_5, T_6, M_3, \phi, \theta, \dot{\phi}, \dot{\theta}, \dot{u}, \dot{v}, \dot{w}, P, \Omega, V_w, h_w\}$.
The measured significant wave height in the considered time frame is on the order of $h_s = 1.25$ m, and the wave peak period is approximately $T_p = 6.9$ s. The data represent a period of continuous operation of the FOWT in weather conditions that are extreme for the Mediterranean Sea [79], for which the FOWT is designed: the full-scale significant wave height reaches approximately 8.75 m, and the full-scale wave peak period is about $T_p = 18$ s. In such weather conditions, the system may exhibit strongly nonlinear dynamics. In particular, the intense wind causes a nonlinear behavior in the extracted power and in the blades' rotational speed, which saturate at the maximum values supported by the machine. This places the DMD-based methods in a challenging condition for modal analysis and prediction.
An average incoming wave period is identified from the peak of the wave elevation spectrum and used in the following as the reference period $\hat{T} = 7.3143$ s ($\hat{f} = 0.1367$ Hz).
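For illustration, a reference period of this kind could be estimated from the measured wave elevation with a simple spectral peak analysis, as in the following sketch; this is not taken from the authors' code, and the Welch settings are arbitrary assumptions:

```python
import numpy as np
from scipy.signal import welch

def reference_period(h_w, fs):
    """Estimate the reference wave period from the peak of the wave-elevation
    spectrum, using a Welch periodogram of the measured signal h_w."""
    f, Pxx = welch(h_w, fs=fs, nperseg=min(len(h_w), 4096))
    k = np.argmax(Pxx[1:]) + 1      # skip the zero-frequency bin
    return 1.0 / f[k]               # T_hat = 1 / f_hat
```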

2.2. Dynamic Mode Decomposition

Dynamic mode decomposition was originally presented in Schmid and Sesterhenn [80] and Schmid [53] to identify spatiotemporal coherent structures from high-dimensional time series data, providing a linear reduced-order representation of possibly nonlinear system dynamics. Given a time series of data, the DMD computes a set of modes with their associated frequencies and decay/growth rates [62]. When the analyzed system is linear, the modes obtained by the DMD correspond to the system’s linear normal modes. The potential of the DMD in the analysis of nonlinear systems comes from its close relation to the spectral analysis of the Koopman operator [68]. The Koopman operator theory is built upon the original work in Koopman [81], defining the possibility of transforming a nonlinear dynamical system into a possibly infinite-dimensional linear system [61,82]. DMD is an equation-free data-driven approach that was shown by Rowley et al. [68] to be a computation of the Koopman operator for linear observables [83].
Its state-of-the-art definition was given by Tu et al. [82] and is summarized in the following. Consider a dynamical system described by
$$\frac{d\mathbf{x}}{dt} = \mathbf{f}(\mathbf{x}, t, \boldsymbol{\gamma}), \tag{1}$$
where $\mathbf{x}(t) \in \mathbb{R}^n$ represents the system's state at time $t$, $\boldsymbol{\gamma}$ contains the parameters of the system, and $\mathbf{f}(\cdot)$ represents its dynamics, possibly including nonlinearities. Equation (1) can represent various types of systems, including partial differential equations discretized at some set of spatial points or the evolution of multivariable time series. The DMD analysis approximates the eigenmodes and eigenvalues of the infinite-dimensional linear Koopman operator associated with the system $\mathbf{f}(\mathbf{x}, t, \boldsymbol{\gamma})$, providing a locally (in time) linearized finite-dimensional representation of it ($\mathcal{A} \in \mathbb{R}^{n \times n}$) based on observed data [54]:
$$\frac{d\mathbf{x}}{dt} = \mathcal{A}\mathbf{x}. \tag{2}$$
DMD makes no assumptions about the underlying physics, i.e., it does not require knowledge of the system governing equations and treats the system $\mathbf{f}(\mathbf{x}, t, \boldsymbol{\gamma})$ as unknown when extracting its data-driven model $\mathbf{A}$.
Once obtained, the DMD model can be used to forecast the system behavior. The solution of the approximated system can be expressed in terms of the $n$ eigenvalues $\omega_k$ and eigenvectors $\boldsymbol{\varphi}_k$ of the matrix $\mathcal{A}$ [62]:
$$\mathbf{x}(t) = \sum_{k=1}^{n} \boldsymbol{\varphi}_k q_k(t) = \sum_{k=1}^{n} \boldsymbol{\varphi}_k b_k \exp(\omega_k t), \tag{3}$$
where the coefficients $b_k$ define the coordinates of the initial condition $\mathbf{x}_0$ in the eigenvector basis, $\mathbf{b} = \boldsymbol{\Phi}^{-1}\mathbf{x}_0$, and $\boldsymbol{\Phi}$ is a matrix containing the eigenvectors $\boldsymbol{\varphi}_k$ columnwise.
In practical applications, the state of the system is measured at $m$ discrete time steps $t_j = j\,\Delta t$ and can be expressed as $\mathbf{x}_j = \mathbf{x}(j\,\Delta t)$, with $j = 1, \dots, m$. Consequently, the approximation can be written as follows:
$$\mathbf{x}_{j+1} = \mathbf{A}\,\mathbf{x}_j, \quad \text{with} \quad \mathbf{A} = \exp(\mathcal{A}\,\Delta t). \tag{4}$$
For each time step $j$, a snapshot of the system is defined as the column vector collecting the measured full state of the system $\mathbf{x}_j$. Two matrices, $\mathbf{X}$ and $\mathbf{X}' \in \mathbb{R}^{n \times (m-1)}$, can be obtained by arranging the available snapshots as follows:
$$\mathbf{X} = \begin{bmatrix} \mathbf{x}_1 & \mathbf{x}_2 & \cdots & \mathbf{x}_{m-1} \end{bmatrix}, \qquad \mathbf{X}' = \begin{bmatrix} \mathbf{x}_2 & \mathbf{x}_3 & \cdots & \mathbf{x}_m \end{bmatrix}, \tag{5}$$
such that Equation (4) may be written in terms of these data matrices:
$$\mathbf{X}' \approx \mathbf{A}\mathbf{X}. \tag{6}$$
Hence, the matrix $\mathbf{A} \in \mathbb{R}^{n \times n}$ can be constructed using the following approximation:
$$\mathbf{A} \approx \mathbf{X}'\mathbf{X}^{\dagger}, \tag{7}$$
where $\mathbf{X}^{\dagger}$ is the Moore–Penrose pseudoinverse of $\mathbf{X}$, which minimizes $\|\mathbf{X}' - \mathbf{A}\mathbf{X}\|_F$, with $\|\cdot\|_F$ denoting the Frobenius norm.
The evaluation of the matrix $\mathbf{A}$, drawing a parallel with other machine learning techniques, practically constitutes the training or learning phase of the method, which is performed in a fast and direct way, i.e., with no need for iterative processes.
In order to evaluate the DMD modes $\boldsymbol{\varphi}_k$ and frequencies $\omega_k$, the exact DMD algorithm is applied as described in [82]. The pseudoinverse of $\mathbf{X}$ can be efficiently evaluated using the singular value decomposition (SVD) as
$$\mathbf{X}^{\dagger} = \mathbf{V}\boldsymbol{\Sigma}^{-1}\mathbf{U}^{*}, \tag{8}$$
where $*$ denotes the complex conjugate transpose. Due to the low dimensionality of the data in the current context, Equation (8) is computed using the full SVD, with no rank truncation. The matrix $\tilde{\mathbf{A}}$ is evaluated by projecting $\mathbf{A}$ onto the proper orthogonal decomposition (POD) modes in $\mathbf{U}$ as
$$\tilde{\mathbf{A}} = \mathbf{U}^{*}\mathbf{A}\mathbf{U}, \tag{9}$$
and its spectral decomposition can be evaluated as
$$\tilde{\mathbf{A}}\mathbf{W} = \mathbf{W}\boldsymbol{\Lambda}. \tag{10}$$
The diagonal matrix $\boldsymbol{\Lambda}$ contains the DMD eigenvalues $\lambda_k$, while the DMD modes $\boldsymbol{\varphi}_k$ constituting the matrix $\boldsymbol{\Phi}$ are then reconstructed using the eigenvectors $\mathbf{W}$ and the time-shifted data matrix $\mathbf{X}'$:
$$\boldsymbol{\Phi} = \mathbf{X}'\mathbf{V}\boldsymbol{\Sigma}^{-1}\mathbf{W}. \tag{11}$$
The state-variable evolution in time can be approximated by the modal expansion of Equation (3), where $\omega_k = \ln(\lambda_k)/\Delta t$, starting from an initial condition corresponding to the end of the measured data, $\mathbf{b} = \boldsymbol{\Phi}^{-1}\mathbf{x}_m$.
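As a purely illustrative reference (not the authors' implementation), the exact DMD steps of Equations (8)–(11) and the modal forecast of Equation (3) can be condensed into a few lines of NumPy; the function and variable names below are assumptions made here for clarity:

```python
import numpy as np

def exact_dmd(X, Xp, dt):
    """Exact DMD of the snapshot pair (X, X'): returns modes Phi,
    continuous-time frequencies omega, and amplitudes b (Eqs. (8)-(11))."""
    # Full (non-truncated) SVD of the first snapshot matrix, X = U S V*
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    V, Sinv = Vh.conj().T, np.diag(1.0 / s)
    # Projected operator A_tilde = U* X' V S^-1 and its eigendecomposition
    A_tilde = U.conj().T @ Xp @ V @ Sinv
    lam, W = np.linalg.eig(A_tilde)
    # Exact DMD modes reconstructed from the time-shifted data matrix
    Phi = Xp @ V @ Sinv @ W
    # Continuous-time eigenvalues and amplitudes from the last snapshot x_m
    omega = np.log(lam) / dt
    b = np.linalg.lstsq(Phi, Xp[:, -1], rcond=None)[0]
    return Phi, omega, b

def dmd_forecast(Phi, omega, b, t):
    """Modal expansion x(t) = sum_k phi_k b_k exp(omega_k t) (Eq. (3))."""
    time_dynamics = b[:, None] * np.exp(np.outer(omega, t))   # (K, T)
    return np.real(Phi @ time_dynamics)                       # (n, T)
```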

2.3. Dynamic Mode Decomposition with Control

The original DMD characterizes naturally evolving dynamical systems. The DMDc aims at extending the algorithm to include in the analysis the influence of forcing inputs and disambiguate it from the unforced dynamics of the system [60]. While DMD is considered in this work for the nowcasting task, DMDc is considered more suitable for the system identification task, obtaining a reduced-order model for the prediction of FOWT platform motions and loads forced by two control variables, namely, wind velocity and wave elevation.
The DMDc formulation can be obtained from a forced dynamic system,
$$\mathbf{x}_{j+1} = \mathbf{f}(\mathbf{x}_j, j, \boldsymbol{\gamma}, \mathbf{u}_j), \tag{12}$$
similarly to Equation (1) and now including the forcing input $\mathbf{u} \in \mathbb{R}^{l}$ in the nonlinear mapping describing the evolution of the state. DMDc modeling approximates Equation (12) as
$$\mathbf{x}_{j+1} = \mathbf{A}\mathbf{x}_j + \mathbf{B}\mathbf{u}_j, \tag{13}$$
where $\mathbf{B} \in \mathbb{R}^{n \times l}$, and the discrete-time matrix $\mathbf{A} \in \mathbb{R}^{n \times n}$.
Introducing the vector $\mathbf{y}_j \in \mathbb{R}^{n+l}$ as
$$\mathbf{y}_j = \begin{bmatrix} \mathbf{x}_j \\ \mathbf{u}_j \end{bmatrix}, \tag{14}$$
Equation (13) can be rewritten in a form close to Equation (4):
$$\mathbf{x}_{j+1} = \mathbf{G}\,\mathbf{y}_j, \quad \text{with} \quad \mathbf{G} = \begin{bmatrix} \mathbf{A} & \mathbf{B} \end{bmatrix}. \tag{15}$$
The collected data are arranged in the matrices $\mathbf{Y} \in \mathbb{R}^{(n+l)\times(m-1)}$ and $\mathbf{X}' \in \mathbb{R}^{n \times (m-1)}$ as follows:
$$\mathbf{Y} = \begin{bmatrix} \mathbf{y}_1 & \mathbf{y}_2 & \cdots & \mathbf{y}_{m-1} \end{bmatrix}, \qquad \mathbf{X}' = \begin{bmatrix} \mathbf{x}_2 & \mathbf{x}_3 & \cdots & \mathbf{x}_m \end{bmatrix}, \tag{16}$$
and the DMD approximation of the matrix $\mathbf{G} \in \mathbb{R}^{n \times (n+l)}$ can be obtained from
$$\mathbf{G} \approx \mathbf{X}'\mathbf{Y}^{\dagger}, \tag{17}$$
where $\mathbf{Y}^{\dagger}$ is the Moore–Penrose pseudoinverse of $\mathbf{Y}$, which minimizes $\|\mathbf{X}' - \mathbf{G}\mathbf{Y}\|_F$. In this way, the matrices $\mathbf{A}$ and $\mathbf{B}$ are obtained as the ones providing the best fit of the sampled data in the least-squares sense. The pseudoinverse of $\mathbf{Y}$ can again be efficiently evaluated using the singular value decomposition (SVD), leading to
$$\mathbf{G} = \mathbf{X}'\mathbf{V}\boldsymbol{\Sigma}^{-1}\mathbf{U}^{*}. \tag{18}$$
The DMDc approximations of the matrices $\mathbf{A}$ and $\mathbf{B}$ are obtained by splitting the operator $\mathbf{U}$ into $\mathbf{U}_1 \in \mathbb{C}^{n \times n}$ and $\mathbf{U}_2 \in \mathbb{C}^{l \times n}$ such that $\mathbf{U} = [\mathbf{U}_1\;\, \mathbf{U}_2]^{T}$, which yields
$$\mathbf{A} = \mathbf{X}'\mathbf{V}\boldsymbol{\Sigma}^{-1}\mathbf{U}_1^{*}, \qquad \mathbf{B} = \mathbf{X}'\mathbf{V}\boldsymbol{\Sigma}^{-1}\mathbf{U}_2^{*}. \tag{19}$$
Due to the low dimensionality of the data in the current context, the pseudoinverse is computed using the full SVD with no rank truncation; otherwise, truncating the SVD to rank $p$, one would have $\mathbf{U}_1 \in \mathbb{C}^{n \times p}$ and $\mathbf{U}_2 \in \mathbb{C}^{l \times p}$.
Once the DMDc approximations of $\mathbf{A}$ and $\mathbf{B}$ are obtained, the system dynamics can be predicted with Equation (13) from the given initial conditions and the sequence of inputs $\mathbf{u}_j$.
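A minimal sketch of these DMDc steps, under the same assumptions as above (illustrative names, no rank truncation), could read:

```python
import numpy as np

def dmdc(X, Xp, U_in):
    """DMDc best fit of x_{j+1} ~ A x_j + B u_j (Eqs. (16)-(19)).
    X, Xp : (n, m-1) snapshot matrices; U_in : (l, m-1) input matrix."""
    n = X.shape[0]
    Y = np.vstack([X, U_in])                       # augmented data [x; u]
    # G = [A B] ~ X' Y^+, computed through the SVD of Y
    Uy, sy, Vyh = np.linalg.svd(Y, full_matrices=False)
    G = Xp @ Vyh.conj().T @ np.diag(1.0 / sy) @ Uy.conj().T
    return G[:, :n], G[:, n:]                      # A (n x n), B (n x l)

def dmdc_predict(A, B, x0, U_seq):
    """March x_{j+1} = A x_j + B u_j forward given the input sequence (Eq. (13))."""
    x, traj = x0.copy(), [x0.copy()]
    for u in U_seq.T:
        x = A @ x + B @ u
        traj.append(x)
    return np.column_stack(traj)
```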

2.4. Hankel Extension to DMD and DMDc

The standard DMD and DMDc formulations approximate the Koopman operator based on linear measurements, creating a best-fit linear model linking sequential data snapshots [53,54,84]. In this way, DMD provides a locally linear representation of the dynamics that is unable to capture many essential features of nonlinear systems. The augmentation of the system state is thus the subject of several DMD algorithmic variants [62,85,86,87], aiming to find a coordinate system (or embedding) that spans a Koopman-invariant subspace to search for an approximation of the Koopman operator that is valid also far from fixed points and periodic orbits in a larger space. The need for state augmentation through additional observables is even more critical for applications in which the number of states in the system is small, typically smaller than the number of available snapshots, such as the case at hand. However, there is no general rule for defining these observables and guaranteeing they will form a closed subspace under the Koopman operator [88].
The Hankel-DMD [56] is a specific version of the DMD algorithm that has been developed to deal with nonlinear systems for which only partial observations are available, so that there are latent variables [62]. The state vector is thus augmented, embedding $s$ time-delayed copies of the original variables. The Hankel-DMD with control (Hankel-DMDc) involves, in addition, the augmentation of the input vector with $z$ time-delayed copies of the original forcing inputs. This results in an intrinsic coordinate system that forms an invariant subspace of the Koopman operator (the time delays form a set of observable functions spanning a finite-dimensional subspace of Hilbert space in which the Koopman operator preserves the structure of the system; see Brunton et al. [57] and Pan and Duraisamy [89]). The use of time-delayed copies as additional observables in the DMD has been connected to the Koopman operator as a universal linearizing basis [57], yielding the true Koopman eigenfunctions and eigenvalues in the limit of infinite-time observations [56].
By incorporating time-lagged information in the data used to learn the model, Hankel-DMD and Hankel-DMDc increase the dimensionality of the system, allowing the algorithm to represent a richer phase space, which is essential for capturing the nonlinear dynamics of the original variables, even though the underlying DMD algorithm remains linear. In other words, by including time-delayed data in the analysis, Hankel-DMD and Hankel-DMDc can extract linear modes, spanning a space of augmented dimensionality, that reflect the nonlinearities in the time evolution of the original system through complex relations between present and past states. Hence, the Hankel-augmented DMD variants can better represent the underlying dynamics of the system, allowing its important nonlinear features to be captured.
The formulation of the Hankel-DMD can be obtained starting from the DMD presented in Section 2.2. The dynamical system is approximated by Hankel-DMD as
$$\hat{\mathbf{x}}_{j+1} = \hat{\mathbf{A}}\hat{\mathbf{x}}_j, \tag{20}$$
where the augmented state vector is defined as $\hat{\mathbf{x}}_j = [\mathbf{x}_j, \mathbf{x}_{j-1}, \dots, \mathbf{x}_{j-s}]^T \in \mathbb{R}^{n(s+1)}$. Two augmented data matrices $\hat{\mathbf{X}}$ and $\hat{\mathbf{X}}' \in \mathbb{R}^{n(s+1)\times(m-1)}$ are built as follows:
$$\hat{\mathbf{X}} = \begin{bmatrix} \mathbf{X} \\ \mathbf{S} \end{bmatrix}, \qquad \hat{\mathbf{X}}' = \begin{bmatrix} \mathbf{X}' \\ \mathbf{S}' \end{bmatrix}, \tag{21}$$
where the Hankel matrices $\mathbf{S}$ and $\mathbf{S}'$ are
$$\mathbf{S} = \begin{bmatrix} \mathbf{x}_{j-1} & \mathbf{x}_{j} & \cdots & \mathbf{x}_{m-2} \\ \mathbf{x}_{j-2} & \mathbf{x}_{j-1} & \cdots & \mathbf{x}_{m-3} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{x}_{j-s-1} & \mathbf{x}_{j-s} & \cdots & \mathbf{x}_{m-s-1} \end{bmatrix}, \qquad \mathbf{S}' = \begin{bmatrix} \mathbf{x}_{j} & \mathbf{x}_{j+1} & \cdots & \mathbf{x}_{m-1} \\ \mathbf{x}_{j-1} & \mathbf{x}_{j} & \cdots & \mathbf{x}_{m-2} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{x}_{j-s} & \mathbf{x}_{j-s+1} & \cdots & \mathbf{x}_{m-s} \end{bmatrix}. \tag{22}$$
In this way, the augmented system matrix $\hat{\mathbf{A}} \in \mathbb{R}^{n(s+1)\times n(s+1)}$ is approximated with
$$\hat{\mathbf{A}} \approx \hat{\mathbf{X}}'\hat{\mathbf{X}}^{\dagger}. \tag{23}$$
Following the exact DMD procedure [82] described in Section 2.2, in order to evaluate the Hankel-DMD modes $\hat{\boldsymbol{\varphi}}_k$ and frequencies $\hat{\omega}_k$, the pseudoinverse of $\hat{\mathbf{X}}$ is obtained with the SVD as
$$\hat{\mathbf{X}}^{\dagger} = \hat{\mathbf{V}}\hat{\boldsymbol{\Sigma}}^{-1}\hat{\mathbf{U}}^{*}, \tag{24}$$
and the matrix $\tilde{\hat{\mathbf{A}}}$ is calculated as
$$\tilde{\hat{\mathbf{A}}} = \hat{\mathbf{U}}^{*}\hat{\mathbf{A}}\hat{\mathbf{U}}. \tag{25}$$
The spectral decomposition of $\tilde{\hat{\mathbf{A}}}$ is evaluated as follows:
$$\tilde{\hat{\mathbf{A}}}\hat{\mathbf{W}} = \hat{\mathbf{W}}\hat{\boldsymbol{\Lambda}}. \tag{26}$$
The diagonal matrix $\hat{\boldsymbol{\Lambda}}$ contains the Hankel-DMD eigenvalues $\hat{\lambda}_k$, while the Hankel-DMD eigenvectors $\hat{\boldsymbol{\varphi}}_k$ constituting the matrix $\hat{\boldsymbol{\Phi}}$ are then reconstructed using the eigenvectors $\hat{\mathbf{W}}$ of the matrix $\tilde{\hat{\mathbf{A}}}$ and the time-shifted data matrix $\hat{\mathbf{X}}'$:
$$\hat{\boldsymbol{\Phi}} = \hat{\mathbf{X}}'\hat{\mathbf{V}}\hat{\boldsymbol{\Sigma}}^{-1}\hat{\mathbf{W}}. \tag{27}$$
The time evolution of the augmented state variables can finally be evaluated by the following modal expansion:
$$\hat{\mathbf{x}}(t) = \sum_{k=1}^{n(s+1)} \hat{\boldsymbol{\varphi}}_k \hat{q}_k(t) = \sum_{k=1}^{n(s+1)} \hat{\boldsymbol{\varphi}}_k \hat{b}_k \exp(\hat{\omega}_k t), \tag{28}$$
where $\hat{\omega}_k = \ln(\hat{\lambda}_k)/\Delta t$, and the coefficients $\hat{b}_k$ are the coordinates of the augmented initial condition for the prediction, $\hat{\mathbf{x}}_m$, in the eigenvector basis, $\hat{\mathbf{b}} = \hat{\boldsymbol{\Phi}}^{-1}\hat{\mathbf{x}}_m$.
The predicted time evolution of the original state variables $\tilde{\mathbf{x}}(t)$ is finally extracted from the augmented state vector by isolating its first $n$ components.
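For illustration, the delay embedding behind the Hankel matrices could be assembled as in the following sketch, which reuses the hypothetical exact_dmd routine shown earlier; the names and interfaces are assumptions, not the authors' code:

```python
import numpy as np

def hankel_embed(data, s):
    """Stack s time-delayed copies of each snapshot.
    data : (n, m) array of snapshots x_1..x_m.
    Returns an (n*(s+1), m-s) array whose j-th column is
    [x_{j+s}; x_{j+s-1}; ...; x_j] (most recent sample on top)."""
    n, m = data.shape
    out = np.empty((n * (s + 1), m - s))
    for d in range(s + 1):
        out[d * n:(d + 1) * n, :] = data[:, s - d:m - d]
    return out

# Hankel-DMD then reuses the plain (exact) DMD machinery on the augmented
# matrices; the forecast of the original variables is given by the first n
# rows of the reconstructed augmented state:
#   Xhat  = hankel_embed(data[:, :-1], s)    # columns ending at x_{m-1}
#   Xhatp = hankel_embed(data[:, 1:],  s)    # columns ending at x_m
#   Phi_h, omega_h, b_h = exact_dmd(Xhat, Xhatp, dt)
```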
The augmentation of the system state with its delayed copies can be similarly applied to DMDc, modifying the formulation presented in Section 2.3. In this case, not only the state but also the input vector is extended with time-shifted copies, leading to the following representation of the dynamic system:
$$\hat{\mathbf{x}}_{j+1} = \hat{\mathbf{A}}\hat{\mathbf{x}}_j + \hat{\mathbf{B}}\hat{\mathbf{u}}_j, \tag{29}$$
where $\hat{\mathbf{x}}_j$ follows the definition given for the Hankel-DMD, $\hat{\mathbf{u}}_j = [\mathbf{u}_j, \mathbf{u}_{j-1}, \dots, \mathbf{u}_{j-z}]^T \in \mathbb{R}^{l(z+1)}$ is the extended input vector including the $z$ delayed copies, and $\hat{\mathbf{B}}$ is the augmented system input matrix. In addition to $\hat{\mathbf{X}}$, the augmented data matrix $\hat{\mathbf{Y}} \in \mathbb{R}^{(n(s+1)+l(z+1))\times(m-1)}$ is defined as
$$\hat{\mathbf{Y}} = \begin{bmatrix} \mathbf{X} \\ \mathbf{S} \\ \mathbf{U} \\ \mathbf{Z} \end{bmatrix}, \tag{30}$$
with the matrix $\mathbf{Z}$ showing a Hankel structure, defined as
$$\mathbf{Z} = \begin{bmatrix} \mathbf{u}_{j-1} & \mathbf{u}_{j} & \cdots & \mathbf{u}_{m-2} \\ \mathbf{u}_{j-2} & \mathbf{u}_{j-1} & \cdots & \mathbf{u}_{m-3} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{u}_{j-z-1} & \mathbf{u}_{j-z} & \cdots & \mathbf{u}_{m-z-1} \end{bmatrix}. \tag{31}$$
The augmented matrix $\hat{\mathbf{G}} = [\hat{\mathbf{A}}, \hat{\mathbf{B}}] \in \mathbb{R}^{n(s+1)\times(n(s+1)+l(z+1))}$ is approximated in Hankel-DMDc as
$$\hat{\mathbf{G}} \approx \hat{\mathbf{X}}'\hat{\mathbf{Y}}^{\dagger}. \tag{32}$$
The pseudoinverse of $\hat{\mathbf{Y}}$ is evaluated by SVD as $\hat{\mathbf{Y}}^{\dagger} = \hat{\mathbf{V}}\hat{\boldsymbol{\Sigma}}^{-1}\hat{\mathbf{U}}^{*}$, leading to
$$\hat{\mathbf{G}} = \hat{\mathbf{X}}'\hat{\mathbf{V}}\hat{\boldsymbol{\Sigma}}^{-1}\hat{\mathbf{U}}^{*}. \tag{33}$$
The Hankel-DMDc approximations of the matrices $\hat{\mathbf{A}}$ and $\hat{\mathbf{B}}$ are obtained by splitting the operator $\hat{\mathbf{U}}$ into $\hat{\mathbf{U}}_1 \in \mathbb{C}^{n(s+1)\times(n(s+1)+l(z+1))}$ and $\hat{\mathbf{U}}_2 \in \mathbb{C}^{l(z+1)\times(n(s+1)+l(z+1))}$ such that $\hat{\mathbf{U}} = [\hat{\mathbf{U}}_1\;\, \hat{\mathbf{U}}_2]^{T}$. Equation (29) is hence used to obtain the time evolution of the augmented state vector, and finally, by isolating its first $n$ components, the predicted time evolution of the original state variables $\tilde{\mathbf{x}}(t)$ is extracted.

2.5. Bayesian Extension to Hankel-DMD and Hankel-DMDc

The Bayesian extension is introduced to incorporate uncertainty quantification in the analyses, adding confidence intervals to the predictions of the numerical methods. Its definition starts by noting that the dimensions and the values within the matrices $\hat{\mathbf{A}} \in \mathbb{R}^{n(s+1)\times n(s+1)}$ and $\hat{\mathbf{B}} \in \mathbb{R}^{n(s+1)\times l(z+1)}$ depend on three hyperparameters of the algorithms: the observation time length $l_{tr} = t_m - t_1$ and the maximum delay time in the augmented state $l_{dx} = t_{j-1} - t_{j-s-1}$ for Hankel-DMD, with the addition of the maximum delay time in the augmented input $l_{du} = t_{j-1} - t_{j-z-1}$ for Hankel-DMDc.
These dependencies can be denoted as follows:
$$\text{Hankel-DMD:}\;\; \hat{\mathbf{A}} = \hat{\mathbf{A}}(l_{tr}, l_{dx}); \qquad \text{Hankel-DMDc:}\;\; \hat{\mathbf{A}} = \hat{\mathbf{A}}(l_{tr}, l_{dx}, l_{du}),\;\; \hat{\mathbf{B}} = \hat{\mathbf{B}}(l_{tr}, l_{dx}, l_{du}). \tag{34}$$
In the Bayesian formulations, the hyperparameters are considered stochastic variables with given probability density functions $p(l_{tr})$, $p(l_{dx})$, and $p(l_{du})$, introducing uncertainty in the process. Through uncertainty propagation, the solution $\mathbf{x}(t)$ also depends on $l_{tr}$, $l_{dx}$, and, in Bayesian Hankel-DMDc, $l_{du}$. At a given time $t$, the expected value of the solution and its standard deviation can be expressed for the Bayesian Hankel-DMD as
$$\mu_{\mathbf{x}}(t) = \int_{l_{tr}^l}^{l_{tr}^u}\!\int_{l_{dx}^l}^{l_{dx}^u} \mathbf{x}(t, l_{tr}, l_{dx})\, p(l_{tr})\, p(l_{dx})\, dl_{tr}\, dl_{dx}, \tag{35}$$
$$\sigma_{\mathbf{x}}(t) = \left[ \int_{l_{tr}^l}^{l_{tr}^u}\!\int_{l_{dx}^l}^{l_{dx}^u} \left[ \mathbf{x}(t, l_{tr}, l_{dx}) - \mu_{\mathbf{x}}(t) \right]^2 p(l_{tr})\, p(l_{dx})\, dl_{tr}\, dl_{dx} \right]^{\frac{1}{2}}, \tag{36}$$
and, for the Bayesian Hankel-DMDc, as
$$\mu_{\mathbf{x}}(t) = \int_{l_{du}^l}^{l_{du}^u}\!\int_{l_{dx}^l}^{l_{dx}^u}\!\int_{l_{tr}^l}^{l_{tr}^u} \mathbf{x}(t, l_{tr}, l_{dx}, l_{du})\, p(l_{tr})\, p(l_{dx})\, p(l_{du})\, dl_{tr}\, dl_{dx}\, dl_{du}, \tag{37}$$
$$\sigma_{\mathbf{x}}(t) = \left[ \int_{l_{du}^l}^{l_{du}^u}\!\int_{l_{dx}^l}^{l_{dx}^u}\!\int_{l_{tr}^l}^{l_{tr}^u} \left[ \mathbf{x}(t, l_{tr}, l_{dx}, l_{du}) - \mu_{\mathbf{x}}(t) \right]^2 p(l_{tr})\, p(l_{dx})\, p(l_{du})\, dl_{tr}\, dl_{dx}\, dl_{du} \right]^{\frac{1}{2}}, \tag{38}$$
where $l_{tr}^l$, $l_{dx}^l$, and $l_{du}^l$ and $l_{tr}^u$, $l_{dx}^u$, and $l_{du}^u$ are the lower and upper bounds for $l_{tr}$, $l_{dx}$, and $l_{du}$, respectively.
In practice, a uniform probability density function is assigned to the hyperparameters, and a set of realizations is obtained through a Monte Carlo sampling. Accordingly, for each realization of the hyperparameters, the solution x ( t , l t r , l d x ) or x ( t , l t r , l d x , l d u ) is computed, and at a given time t, the expected value and standard deviation of the solution are then evaluated.
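In sketch form (again with hypothetical names, and assuming a generic train_and_forecast callable standing in for the deterministic Hankel-DMD pipeline), the Monte Carlo propagation of the uniform hyperparameter priors could look as follows; the ranges shown anticipate those selected in Section 5.3:

```python
import numpy as np

def bayesian_hankel_dmd(data, dt, T_hat, n_forecast, train_and_forecast,
                        n_samples=100, seed=0):
    """Monte Carlo propagation of uniform hyperparameter priors (sketch).
    train_and_forecast(train_window, s, dt, n_forecast) must return an
    (n, n_forecast) forecast; the sample mean and standard deviation over
    the realizations give the expected prediction and its uncertainty."""
    rng = np.random.default_rng(seed)
    forecasts = []
    for _ in range(n_samples):
        l_tr = rng.uniform(4.0, 16.0) * T_hat     # l_tr / T_hat ~ U(4, 16)
        l_dx = rng.uniform(l_tr / 8.0, l_tr)      # l_dx ~ U(l_tr / 8, l_tr)
        n_tr = int(l_tr / dt)                     # training snapshots
        s = int(l_dx / dt)                        # number of delayed copies
        forecasts.append(train_and_forecast(data[:, -n_tr:], s, dt, n_forecast))
    forecasts = np.stack(forecasts)               # (n_samples, n, n_forecast)
    return forecasts.mean(axis=0), forecasts.std(axis=0)
```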

3. Performance Metrics

To evaluate the predictions made by the models and to compare the effectiveness of different configurations, three error indices are employed: the normalized root mean square error (NRMSE) [76], the normalized average minimum/maximum absolute error (NAMMAE) [76], and the Jensen–Shannon divergence (JSD) [90]. All the metrics are averaged over the variables that constitute the system's state, providing a holistic assessment of the prediction accuracy. This comprehensive evaluation considers aspects such as the overall error, the range, and the statistical similarity of predicted versus measured values.
The NRMSE quantifies the average root mean square error between the predicted values $\tilde{x}$ and the measured (test) values $x$ at different time steps. It is calculated by taking the square root of the average squared differences, normalized by $k$ times the standard deviation of the measured values:
$$\mathrm{NRMSE} = \frac{1}{N}\sum_{i=1}^{N} \frac{1}{k\,\sigma_{x_i}} \sqrt{\frac{1}{T}\sum_{j=1}^{T} \left( \tilde{x}_{ij} - x_{ij} \right)^2}, \tag{39}$$
where $N$ is the number of variables in the predicted state, $T$ is the number of considered time instants, and $\sigma_{x_i}$ is the standard deviation of the measured values in the considered time window for the variable $x_i$.
The NAMMAE metric, introduced in Diez et al. [76,77], provides an engineering-oriented assessment of the prediction accuracy. It measures the absolute differences between the minimum and maximum values of the predicted and measured time series, normalized by $k$ times the standard deviation of the measured values, as follows:
$$\mathrm{NAMMAE} = \frac{1}{2N}\sum_{i=1}^{N} \frac{1}{k\,\sigma_{x_i}} \left[ \left| \min_j(\tilde{x}_{ij}) - \min_j(x_{ij}) \right| + \left| \max_j(\tilde{x}_{ij}) - \max_j(x_{ij}) \right| \right]. \tag{40}$$
Lastly, the JSD measures the similarity between the probability distributions of the predicted and reference signals [90]. For each variable, it estimates the entropy of the predicted time series probability density function $Q$ relative to the probability density function of the measured time series $R$, where $M$ is the average of the two [91]:
$$\mathrm{JSD} = \frac{1}{N}\sum_{i=1}^{N} \left[ \frac{1}{2} D(Q_i\,\|\,M_i) + \frac{1}{2} D(R_i\,\|\,M_i) \right], \tag{41}$$
$$\text{with} \quad M = \frac{1}{2}(Q + R), \tag{42}$$
$$\text{and} \quad D(K\,\|\,H) = \sum_{y \in \chi} K(y) \ln\!\left(\frac{K(y)}{H(y)}\right). \tag{43}$$
The JSD is based on the Kullback–Leibler divergence $D$, given by Equation (43), which is the expectation of the logarithmic difference between the probabilities $K$ and $H$, both defined over the domain $\chi$, where the expectation is taken using the probabilities $K$ [92]. The similarity between the distributions is higher when the Jensen–Shannon distance is closer to zero. The JSD is upper bounded by $\ln(2)$.
Each of the three indices contributes its own perspective to the error assessment, providing a holistic evaluation of prediction accuracy:
- The NRMSE highlights phase, frequency, and amplitude errors between the reference and the predicted signal, evaluating a pointwise difference between the two. However, it is not possible to discern between the three types of error and to what extent each type contributes to the overall value.
- The NAMMAE indicates whether the prediction varies in the same range as the original signal, but it does not give any hint about the phase or frequency similarity of the two.
- The JSD index is ineffective in detecting phase errors between the predicted and the reference signals and is scarcely able to detect infrequent but large amplitude errors. Instead, it highlights whether the compared time histories assume each value in their range of variation a similar number of times. Hence, it is sensitive to errors in the frequency and trend of the predicted signal.
An example of the synergistic use of the three is given by the case of a prediction that rapidly decays to zero and evolves with an overly small amplitude. In this case, the NRMSE has a subtle behavior that may mislead the interpretation of the results if used alone: using the definition in Equation (39), the NRMSE would be close to an eighth of the standard deviation of the observed signal. This may be lower than or comparable to the error obtained with a prediction that captures the trend of the observed time history but with a small phase shift, and it may be misleading regarding the real capability of the algorithm at hand. Assessment of the NAMMAE and JSD helps to discriminate the mentioned situation, as those metrics tackle different aspects of the prediction.
An additional, time-resolved error index is considered, evaluating the time evolution of the root squared difference between the reference and predicted signals, averaged over the variables and normalized by $k$ times the standard deviation of the measured values:
$$\varepsilon(t) = \frac{1}{N}\sum_{i=1}^{N} \frac{1}{k\,\sigma_{x_i}} \sqrt{\left( \tilde{x}_i(t) - x_i(t) \right)^2}. \tag{44}$$
The index $\varepsilon(t)$ is used to investigate the progression of the prediction error within the test time frame and identify possible trends.
The value of $k$ in Equations (39), (40), and (44) is set to 10, indicating a normalizing interval of $\pm 5\sigma$ that corresponds to a coverage percentage equal to 96% using Chebyshev's inequality.
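A compact NumPy sketch of the three averaged metrics, assuming that pred and ref are (N_variables, T) arrays, might look as follows; the histogram-based estimate of the distributions for the JSD is an assumption made here, since the paper does not specify the density estimator:

```python
import numpy as np

def nrmse(pred, ref, k=10):
    """Eq. (39): per-variable RMS error normalized by k*sigma, then averaged."""
    sigma = ref.std(axis=1)
    return np.mean(np.sqrt(np.mean((pred - ref) ** 2, axis=1)) / (k * sigma))

def nammae(pred, ref, k=10):
    """Eq. (40): min/max absolute mismatch normalized by k*sigma, averaged."""
    sigma = ref.std(axis=1)
    err = (np.abs(pred.min(axis=1) - ref.min(axis=1))
           + np.abs(pred.max(axis=1) - ref.max(axis=1))) / (2 * k * sigma)
    return np.mean(err)

def jsd(pred, ref, bins=50):
    """Eqs. (41)-(43): Jensen-Shannon divergence between the empirical
    distributions of predicted and measured samples, averaged over variables."""
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))
    total = 0.0
    for p_row, r_row in zip(pred, ref):
        lo, hi = min(p_row.min(), r_row.min()), max(p_row.max(), r_row.max())
        q, _ = np.histogram(p_row, bins=bins, range=(lo, hi))
        r, _ = np.histogram(r_row, bins=bins, range=(lo, hi))
        q, r = q / q.sum(), r / r.sum()
        m = 0.5 * (q + r)
        total += 0.5 * kl(q, m) + 0.5 * kl(r, m)
    return total / pred.shape[0]
```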

4. Numerical Setup

This section presents the numerical setup of the DMD for the nowcasting and system identification tasks, along with the preprocessing applied to the data. In this work, the analyses are performed on experimentally measured data, but it is worth noting that the methods also directly apply to other data sources such as simulations of various fidelity levels. All DMD analyses are based on normalized data using the Z-score standardization. Specifically, time histories are shifted and scaled, with the average and variance evaluated on the training set.
A lexicon borrowed from machine learning can be used to describe the workflow of the DMD analyses due to their data-driven nature, even though the peculiarities of the method will cause some differences in the meaning of some terms. Calling the DMD prediction starting point the present instant, the observed data lie in the past. Hence, the Hankel-DMD(c) matrices $\hat{\mathbf{A}}$ and $\hat{\mathbf{B}}$ are built using such past time histories, which constitute the training data. The test data, conversely, lie in the future, and test sequences are used to assess the predictive performances of the models.
In the nowcasting approach, the (Bayesian) Hankel-DMD models are trained with sequences from the near past, i.e., ending just before the present instant, and are used to forecast short sequences in the future, on the order of a few wave encounter periods (referring to the time between the passage of two consecutive waves relative to a fixed point on the floating platform). A new training is performed for each test sequence in a sliding window fashion, as sketched in Figure 3.
In the system identification task, the training and the test phases are independent such that a (Bayesian) Hankel-DMDc model is trained on a dedicated sequence only once and taken as a representative ROM for the FOWT. Hence, its performances are tested against multiple test sequences with no changes in the A ^ and B ^ matrices, as suggested by the sketch in Figure 4.
It is worth stressing that, differently from most machine learning methods, training a DMD/DMDc model is not an iterative procedure. In fact, the model is built, i.e., trained, with a direct procedure as described in Section 2.4, identifying the Hankel-DMD modes, the matrix A ^ and, for DMDc, the matrix B ^ .
Figure 5 offers a view of the workflow of the Hankel-DMD (Figure 5a) and Hankel-DMDc (Figure 5b) analyses. The first operation is the collection of the training data to be processed, which are in this work extracted from the existing dataset. This is strictly dependent on the prediction time instant only for the nowcasting using Hankel-DMD. Then, data are fed to the preprocessing step: to provide a common baseline for processing by DMD, time sequences are resampled using a sampling interval $\Delta t = \hat{T}/32$ s. The wave encounter frequency $\hat{f} = 1/\hat{T}$ is expected to be relevant in the platform dynamics, which will, however, also have relevant energetic content at higher frequencies. The sampling interval has hence been selected to ensure at least six samples per oscillation period up to $5\hat{f}$, avoiding aliasing. Data are then organized into the matrices $\hat{\mathbf{X}}$ and $\hat{\mathbf{X}}'$ based on the hyperparameter values of the DMD method at hand.
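A minimal sketch of this preprocessing, assuming each channel is available as (time, value) samples and using illustrative names, could be:

```python
import numpy as np

T_hat = 7.3143                       # reference wave period [s] (Section 2.1)
dt = T_hat / 32.0                    # common resampling interval

def resample_uniform(t_raw, y_raw, t_grid):
    """Linear interpolation of an (irregularly sampled) channel onto t_grid."""
    return np.interp(t_grid, t_raw, y_raw)

def zscore_fit(train):
    """Mean and standard deviation per variable, from the training window only."""
    return train.mean(axis=1, keepdims=True), train.std(axis=1, keepdims=True)

def zscore_apply(data, mu, sd):
    """Z-score standardization with the training statistics."""
    return (data - mu) / sd
```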
The Hankel-DMD/DMDc algorithm is thus applied, obtaining the eigenvalues and eigenvectors of the A ^ matrix and the vector b ^ of the initial conditions for Hankel-DMD in nowcasting or the matrices A ^ and B ^ for Hankel-DMDc in system identification.
The output of the Hankel-DMD or Hankel-DMDc is used to calculate the predicted time series x ˜ ( t ) of the states, which are then compared with the test signals to evaluate the error metrics for performance assessment.
The nowcasting by Hankel-DMD only requires past histories of the variables under analysis, in contrast to the system identification, in which the Hankel-DMDc needs the current value of the input time series at each time step to advance the prediction. In particular, in this study, the wind speed $V_w$ and the wave elevation $h_w$, as measured by the PLC on the nacelle and by the ADCP, respectively, are included in the vector $\mathbf{u}$. These characteristics make the two approaches very different from each other. The nowcasting is more suitable for short-term predictions, in the range of a few characteristic periods, and appropriate for application in real-time digital twins or model predictive controllers, also exploiting the possibility of continuous learning during the system evolution. The system identification approach is more suited to producing a reduced-order model of the system as a surrogate of the original one that produced the training data. Once trained, the ROM can be applied for fast and reliable predictions of the system's response to given operational conditions of possibly undefined time extension at a reduced computational cost, with potentially useful applications in control, design, life-cycle cost assessment, maintenance, and operational planning, as an alternative to high-fidelity simulations or experiments.
A full-factorial design of experiments is conducted to investigate the influence of the two main hyperparameters of the Hankel-DMD on the nowcasting performances. Five levels of variation are used for $l_{tr} = \hat{T}, 2\hat{T}, 4\hat{T}, 8\hat{T}, 16\hat{T}$ and six for $l_{dx} = 0.5\hat{T}, \hat{T}, 2\hat{T}, 4\hat{T}, 8\hat{T}, 16\hat{T}$, as also outlined in Table 1 in terms of the number of training time steps $n_{tr}$ and the number of embedded delayed time histories $n_d$ with the current time sampling.
The prediction performance of each configuration is assessed through the NRMSE, NAMMAE, and JSD metrics introduced in Section 3 on a statistical basis, using 100 random starting instants as validation cases for prediction within the considered 12-hour time frame. A forecasting horizon of $l_{te} = 4\hat{T}$ is considered, corresponding to approximately 30 s.
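Schematically, and only as an illustration with hypothetical helper names (the metric functions are those sketched in Section 3, and train_and_forecast stands for the deterministic Hankel-DMD pipeline), the full-factorial assessment amounts to a double loop over configurations and random starting instants:

```python
import numpy as np
from itertools import product

T_hat = 7.3143
dt = T_hat / 32.0
l_tr_levels = np.array([1, 2, 4, 8, 16]) * T_hat        # Table 1 levels
l_dx_levels = np.array([0.5, 1, 2, 4, 8, 16]) * T_hat

def full_factorial_sweep(data, starts, l_te, train_and_forecast, metrics):
    """Score every (l_tr, l_dx) combination on all random starting instants;
    returns the metric values averaged over the starting instants."""
    n_te = int(round(l_te / dt))
    scores = {}
    for l_tr, l_dx in product(l_tr_levels, l_dx_levels):
        n_tr, s = int(round(l_tr / dt)), int(round(l_dx / dt))
        vals = []
        for j0 in starts:                               # random prediction instants
            train = data[:, j0 - n_tr:j0]               # training window
            ref = data[:, j0:j0 + n_te]                 # test (future) window
            pred = train_and_forecast(train, s, dt, n_te)
            vals.append([m(pred, ref) for m in metrics])
        scores[(l_tr, l_dx)] = np.mean(vals, axis=0)
    return scores
```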
The statistical assessment of the system identification procedure shall also include the choice of the training sequence as an influencing parameter. For this reason, the dataset is subdivided into training, validation, and test portions that are completely separated from each other. Ten different training sets are identified inside the data portion dedicated to training. For each one of them, the effect of the three hyperparameters of Hankel-DMDc on the system identification is analyzed with a full-factorial design of experiments using seven levels for $l_{tr}$ and six levels for $l_{dx}$ and $l_{du}$, with the values reported in Table 2.
The prediction performance of each of the 252 configurations is assessed through the NRMSE, NAMMAE, and JSD metrics on a statistical basis using 10 random validation signals extracted from the validation set, for a total of 100 evaluations of each error metric per hyperparameter configuration. The prediction time window considered has a length of $l_{te} = 20\hat{T}$, or approximately 146 s.
The results from the deterministic Hankel-DMD and Hankel-DMDc analyses are used to identify suitable ranges for the hyperparameter values to be used with their Bayesian extensions.

5. Results

This section discusses the DMD analysis of the system dynamics in terms of complex modal frequencies, modal participation, and most energetic modes, and it assesses the results of the DMD-based forecasting method for the prediction of the state evolution.

5.1. Modal Analysis

Figure 6 presents the DMD results for the FOWT system dynamics. Specifically, Figure 6a shows the complex modal frequencies identified by DMD with no state augmentation. These were ranked in Figure 6b based on the normalized energy of the respective modal coordinate signal:
$$\hat{q}_k^2 = \frac{\|q_k\|^2}{\sum_{k=1}^{n}\|q_k\|^2}, \quad \text{with} \quad \|q_k\|^2 = \langle q_k, q_k \rangle = \int_{t_i}^{t_f} q_k^2(t)\, dt. \tag{45}$$
As can be seen in Figure 6c, the dynamics is dominated by four pairs of complex conjugate modes. Their components' magnitudes are presented in Figure 6d. Modes (1,2) and (7,8) show a slow (frequency around 0.0237 Hz, period around 42.3 s) and a faster (frequency around 0.0745 Hz, period around 13.5 s) coupling between the loads on the tendons (and weakly on the moorings) and the platform motion. The variable participation of mode (3,4) suggests that it describes a slow wave height oscillation (frequency 0.00022 Hz, with a period of almost 1 h and 15 min) influencing the platform motion variables, both angular and linear. Finally, mode (5,6) identifies a subsystem describing the variation in extracted power and turbine rotational speed with wind speed (frequency 0.039 Hz, period around 25.4 s). The power extracted by the wind turbine and the rotational speed of its blades are barely involved in the description of the floating motions and scarcely influenced by them, despite the extreme operating conditions that could induce a coupling through large motions, as well as hydro- and aerodynamic nonlinearities.
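For reference, one plausible way to compute the normalized modal energies of Equation (45) from a DMD model (an illustrative reading, not the authors' implementation) is:

```python
import numpy as np

def modal_energy_ranking(Phi, omega, b, t_obs):
    """Normalized energy of each modal coordinate q_k(t) = b_k exp(omega_k t)
    over the observation window [t_i, t_f], as in Eq. (45)."""
    q = b[:, None] * np.exp(np.outer(omega, t_obs))    # (K, T) modal coordinates
    energy = np.trapz(np.abs(q) ** 2, t_obs, axis=1)   # time integral per mode
    return energy / energy.sum()
```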

5.2. Nowcasting via Hankel-DMD

As anticipated in Section 4, a full-factorial combination of settings of the Hankel-DMD hyperparameters was tested to investigate their influence on the forecasting capability of the method. The results, gathering the outcomes from all the algorithm setups and test cases, are outlined in Figure 7 as boxplots for the NRMSE, NAMMAE, and JSD, focusing on a test signal length of $l_{te} = 4\hat{T}$. The boxplots show the first, second (equivalent to the median value), and third quartiles, while the whiskers extend from the box to the farthest data point lying within 1.5 times the inter-quartile range (defined as the difference between the third and the first quartiles) from the box. The diamonds indicate the average of the results for each combination. Outliers are not shown to improve the readability of the plot.
It can be noted that the three metrics indicate different best hyperparameter settings. Their values are reported in Table 3. However, the different types of errors targeted by the three metrics help to interpret the results and gain some insight:
(a) Long training signals with few delayed copies in the augmented state showed poor prediction capabilities, as confirmed by all the metrics for both the short-term and mid-term time windows. The effect is notable for $l_{dx}/l_{tr} < 1/8$.
(b) A high number of embedded time-delayed signals with insufficiently long training lengths was prone to producing NRMSE values progressively concentrating around an eighth of the standard deviation of the observed signals. This happens, with the explored values of $l_{dx}$, particularly for $l_{tr} = \hat{T}$, $2\hat{T}$, and, to a lesser extent, $4\hat{T}$ when $l_{dx}/l_{tr}$ exceeded 1. At the same time, the NAMMAE and JSD values for the same settings progressively increase; this indicates that the predicted signals are not able to catch the maximum and minimum values of the reference sequences and that the distribution of the predicted data does not adhere to that of the true data as time advances. The combination of these two behaviors is due to the method generating numerous rapidly decaying predictions, whose signals become highly damped after a short observation time.
A suitable range for the hyperparameter settings to obtain accurate results can be inferred by combining the above considerations. The best deterministic results were obtained when $4 \le l_{tr}/\hat{T} \le 16$ and $l_{dx}/l_{tr} = 1/4$.
Figure 8, Figure 9, Figure 10 and Figure 11 show the forecast by the Hankel-DMD for random test sequences taken as representative nowcasting examples. The figures show the last part of the training sequence with a solid black line, the test sequence in a dashed black line, and the prediction obtained with the BestNRMSE, BestNAMMAE, and BestJSD hyperparameters, as reported in Table 3, with an orange dash-dotted, yellow dotted, and purple dashed line, respectively.
The forecast data are in fairly good agreement with the measurements, particularly for the BestNAMMAE and BestJSD lines. Confirming the previous statement, some predictions of the BestNRMSE show a rapidly decaying amplitude not following the ground truth; see, e.g., $T_1$, $T_5$, $\theta$, $\dot{\phi}$, $\dot{v}$, $\dot{w}$, and $h_w$ in Figure 10 or $\dot{\phi}$ and $\dot{\theta}$ in Figure 8.
As expected from the nowcasting algorithm, the forecasting accuracy is higher at the beginning of the prediction (see the $\varepsilon$ trend in Figure 8, Figure 9, Figure 10 and Figure 11) but is in most cases satisfactory for the entire window. The variables reproduced with the highest errors are the extracted power $P$, the blade rotational speed $\Omega$, and the wind speed $V_w$. Their time histories estimated by the PLC on the nacelle do not show strong periodicity in the observed time frame and, moreover, show strong nonlinearities: the turbine control algorithms limit, for example, the maximum power extracted by the turbine and its blades' rotational speed, acting as a saturator when the wind is too strong. In addition, and partially as a consequence of the above, these variables are seen to form an almost separate subsystem in the modal analysis; hence, a limited portion of the data could actually be used by the DMD to extract the related modal content and produce their forecast, putting the method in a very challenging situation.

5.3. Nowcasting via Bayesian Hankel-DMD

As highlighted by the authors in previous works [59,76,77,93], the final prediction from DMD-based models may vary strongly for different hyperparameter settings, and no general rule exists for determining their optimal values. To include uncertainty quantification in the prediction and make it more robust, the Bayesian extension of the Hankel-DMD considers the hyperparameters of the method as stochastic variables with uniform probability density functions.
The insights gained in the deterministic analysis are applied to determine suitable variation ranges for $l_{tr}$ and $l_{dx}$. In particular, a probabilistic length of the training time history is considered, uniformly distributed between 4 and 16 incoming wave periods, i.e., $l_{tr}/\hat{T} \sim U(4, 16)$. Moreover, for each realization of $l_{tr}$, $l_{dx}$ is also treated as a probabilistic variable, uniformly distributed as $l_{dx} \sim U(l_{tr}/8,\, l_{tr})$ (each actual $n_d$ is taken as the integer part of the corresponding sampled value).
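A minimal sketch of the resulting Monte Carlo procedure is given below, reusing the illustrative hankel_dmd_forecast routine sketched at the end of the previous subsection; the sampling step dt, the reference period T_hat, and the safeguard on the number of delays are assumptions of this example rather than details of the authors' code.

```python
import numpy as np

def one_nowcast(X, dt, T_hat, n_forecast, seed):
    """One Monte Carlo realization: sample (l_tr, l_dx) from the uniform priors
    and return the corresponding Hankel-DMD forecast (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    l_tr = rng.uniform(4.0, 16.0) * T_hat              # training length [s]
    l_dx = rng.uniform(l_tr / 8.0, l_tr)               # maximum delay time [s]
    n_tr = int(l_tr / dt)                              # samples in the training window
    n_dx = min(max(int(l_dx / dt), 1), n_tr - 2)       # safeguard for this toy embedding
    return hankel_dmd_forecast(X[:, -n_tr:], n_dx, n_forecast)

def bayesian_hankel_dmd(X, dt, T_hat, n_forecast, n_mc=100):
    """Expected value and 95% interval of the Monte Carlo ensemble of nowcasts."""
    preds = np.stack([one_nowcast(X, dt, T_hat, n_forecast, s) for s in range(n_mc)])
    lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)
    return preds.mean(axis=0), lo, hi
```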
The solid blue lines in Figure 8, Figure 9, Figure 10 and Figure 11 show the expected value of the Bayesian predictions on representative test set sequences obtained using 100 uniformly distributed Monte Carlo realizations; the blue shadowed areas indicate the 95% confidence interval of the uncertain predictions. The Bayesian Hankel-DMD provides fairly good forecasting of the measured quantities, and the same considerations made for the deterministic analysis hold. The largest discrepancies are noted in the prediction of P, Ω, and $V_w$, which also show the largest uncertainty bands, consistent with their lower prediction accuracy.
Figure 12 compares the error metrics from the Bayesian Hankel-DMD and the BestNRMSE, BestNAMMAE, and BestJSD configurations of the deterministic analysis for l t e / T ^ = 4 . The mean values of the metrics are summarized in Table 3. The boxplots indicate that Bayesian nowcasting achieved results comparable to the best deterministic configurations for NAMMAE and JSD, slightly improving their NRMSEs. This appears remarkable considering the poor NAMMAE and JSD performance of the BestNRMSE configuration.
The forecasting time window considered here, i.e., $l_{te} = 4\hat{T}$ (∼30 s), can be considered satisfactory from the perspective of designing a model predictive or feedforward controller according to Ma et al. [39], where a five-second horizon was considered and found to be sufficient. Moreover, the nowcasting algorithm is inherently suitable for continuous learning and digital twinning, adapting the model and predictions along with the incoming data from an evolving system and environment.
It shall be noted that the evaluation time for a single Hankel-DMD training, averaged over 10 realizations using 100 different hyperparameter values in the intervals of the Bayesian analysis, ranged from $\mu_t = 0.029$ s ($\sigma_t = 0.0041$ s) to $\mu_t = 1.225$ s ($\sigma_t = 0.0159$ s) on a mid-end laptop with an Intel Core i5-1235U CPU and 16 GB of memory using a noncompiled MATLAB 2023a code, depending on the algorithm setup (longer training signals with more delays require more time). The computational cost of obtaining a Bayesian Hankel-DMD prediction depends linearly on the number of Monte Carlo samples used in the hyperparameter sampling: for each hyperparameter combination, a different Hankel-DMD model has to be evaluated. However, the task is embarrassingly parallel and, with a sufficient number of computational units, the actual computational time can be kept as low as that of a single deterministic evaluation. The one-shot and fast training phase, related to the direct linear algebra operations detailed in Section 2.2, makes the Hankel-DMD and Bayesian Hankel-DMD algorithms very promising in the context of real-time forecasting and nowcasting.
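Since the realizations are mutually independent, the Monte Carlo loop can be distributed over worker processes with standard tools. The fragment below is only indicative; it assumes the hypothetical one_nowcast routine and the data X, dt, T_hat, and n_forecast from the previous sketch are available at module level.

```python
from concurrent.futures import ProcessPoolExecutor
from functools import partial

# Each realization trains and evaluates its own Hankel-DMD model, so the
# ensemble maps directly onto independent worker processes.
task = partial(one_nowcast, X, dt, T_hat, n_forecast)
with ProcessPoolExecutor() as pool:
    ensemble = list(pool.map(task, range(100)))
```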

5.4. System Identification via Hankel-DMDc

The results from the design of experiments on system identification via the deterministic Hankel-DMDc are reported as boxplot graphs in Figure 13, Figure 14, Figure 15, Figure 16, Figure 17 and Figure 18, combining the outcomes of the 100 training/validation sequence combinations for each hyperparameter configuration. The metrics are evaluated using a prediction time frame of $l_{te} = 20\hat{T}$ (∼146 s).
Analyzing the boxplots, it can be noted that some hyperparameter configurations are characterized by very high values of all the metrics. The A matrices of the pertaining Hankel-DMDc models have been found to have eigenvalues with positive real parts, causing the predictions to be unstable. As pointed out by Rains et al. [94], DMDc algorithms are particularly susceptible to the choice of the model dimensions and prone to identifying spurious unstable eigenvalues. In the nowcasting approach with Hankel-DMD, this phenomenon can be effectively mitigated by projecting the unstable discrete-time eigenvalues onto the unit circle, i.e., stabilizing them. However, in system identification with Hankel-DMDc, the B matrix is also affected by the identification of unstable eigenvalues, and no simple stabilization procedure is available.
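For reference, the stabilization applicable to the nowcasting case reduces to rescaling any identified discrete-time eigenvalue with modulus larger than one back onto the unit circle before rebuilding the operator. A minimal sketch is given below (illustrative names; it assumes a diagonalizable operator):

```python
import numpy as np

def stabilize(A_hat):
    """Project unstable discrete-time eigenvalues of A_hat onto the unit circle.

    Works for the plain Hankel-DMD operator; with Hankel-DMDc the input matrix
    B is also polluted by spurious unstable modes, so this simple fix does not
    carry over (illustrative sketch).
    """
    lam, W = np.linalg.eig(A_hat)
    mod = np.abs(lam)
    lam_stable = np.where(mod > 1.0, lam / mod, lam)   # enforce |lambda| <= 1
    # Rebuild the operator from the stabilized spectrum (assumes diagonalizability).
    return np.real_if_close(W @ np.diag(lam_stable) @ np.linalg.inv(W))
```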
Table 4 reports the values of the hyperparameters for the configurations producing the best average values of the NRMSE, NAMMAE, and JSD. It emerges that the most accurate predictions are obtained with high values of the training length, $l_{tr} \geq 175\hat{T}$. The noisy nature of the experimental measurements and the variability of the operational conditions encountered by the FOWT during the acquisitions can partially explain this result. A large $l_{tr}$ implies a training signal that extends over a broader set of operative situations, allowing the Hankel-DMDc to capture the system's relevant features with increased effectiveness and, hence, to obtain good predictions for unseen input sequences. A subdomain of the explored hyperparameter ranges, defined by $175\hat{T} \leq l_{tr} \leq 200\hat{T}$, $2\hat{T} \leq l_{dx} \leq 4\hat{T}$, and $20\hat{T} \leq l_{du} \leq 40\hat{T}$, includes configurations with low values of all the metrics. The BestNAMMAE and BestJSD configurations are similar and lie in this range, for which the NRMSE also shows small values (although not the best). On the contrary, the BestNRMSE setup is characterized by a small value of $l_{du}$, in a range that showed higher NAMMAE and JSD errors.
Figure 19, Figure 20, Figure 21 and Figure 22 show the predictions by the Hankel-DMDc for random test sequences taken as representatives. The figures show the input variables with a solid black line, the test sequence with a dashed black line, and the predictions obtained with the BestNRMSE, BestNAMMAE, and BestJSD hyperparameters, as reported in Table 4, with an orange dash-dotted, yellow dotted, and purple dashed line, respectively. BestNAMMAE and BestJSD produce similar predictions, consistent with their similar hyperparameter values. The BestNRMSE line behaves slightly differently from the other two and seems to capture fewer of the high-frequency oscillations of the test sequence.
As a general consideration, even though a phase-resolved agreement is not always obtained, the predictions show a strong statistical similarity with the test signals, as shown both qualitatively by Figure 19, Figure 20, Figure 21 and Figure 22 and quantitatively by the JSD boxplots. It can be noted that, for all the plotted configurations, the value of ε does not show a monotonically increasing trend as the prediction time progresses; in other words, the prediction accuracy does not degrade with the prediction horizon. This suggests that the systems identified by the Hankel-DMDc may maintain this level of accuracy indefinitely in time. This characteristic is fundamental for the proposed uses of the method.
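For completeness, the following sketch outlines the identification and simulation steps of a Hankel-DMDc model: the augmented state and input are built from delayed copies, the pair (Â, B̂) is fitted by least squares, and the model is then advanced using the known input history. Function names and the specific embedding are illustrative assumptions of this example, not the authors' implementation.

```python
import numpy as np

def _hankel(M, n_delay):
    """Stack n_delay time-shifted copies of the rows of M (most recent block on top)."""
    cols = M.shape[1] - n_delay + 1
    return np.vstack([M[:, i:i + cols] for i in range(n_delay)][::-1])

def hankel_dmdc_fit(X, U, n_dx, n_du):
    """Least-squares fit of x_hat[k+1] = A_hat @ x_hat[k] + B_hat @ u_hat[k].

    X : (n_states, m) state snapshots; U : (n_inputs, m) input snapshots
    (e.g., wave elevation and wind speed); n_dx, n_du : delayed copies in the
    augmented state and input.  Illustrative sketch only.
    """
    n_delay = max(n_dx, n_du)
    Xa = _hankel(X, n_delay)[: X.shape[0] * n_dx]     # n_dx most recent state blocks
    Ua = _hankel(U, n_delay)[: U.shape[0] * n_du]     # n_du most recent input blocks
    S, Sp, Z = Xa[:, :-1], Xa[:, 1:], Ua[:, :-1]      # snapshots excluding last/first
    G = np.linalg.lstsq(np.vstack([S, Z]).T, Sp.T, rcond=None)[0].T
    return G[:, : S.shape[0]], G[:, S.shape[0]:]      # A_hat, B_hat

def hankel_dmdc_predict(A_hat, B_hat, x0_aug, Ua_future):
    """Advance the identified linear model given the augmented input history."""
    traj = np.empty((x0_aug.size, Ua_future.shape[1]))
    x = x0_aug.copy()
    for k in range(Ua_future.shape[1]):
        x = A_hat @ x + B_hat @ Ua_future[:, k]
        traj[:, k] = x
    return traj
```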

5.5. System Identification via Bayesian Hankel-DMDc

As was done for the nowcasting, the Bayesian extension of the Hankel-DMDc algorithm for system identification is obtained by exploiting the insights on the hyperparameters derived from the deterministic analysis, in particular for the identification of suitable ranges of variation of $l_{tr}$, $l_{dx}$, and $l_{du}$. The three hyperparameters are treated as probabilistic variables uniformly distributed as $l_{tr}/\hat{T} \sim U(175, 200)$, $l_{dx}/\hat{T} \sim U(2, 4)$, and $l_{du}/\hat{T} \sim U(20, 40)$ (the actual $n_{dx}$ and $n_{du}$ are taken as the nearest integers to the calculated values).
Figure 19, Figure 20, Figure 21 and Figure 22 show a comparison between the prediction by the Bayesian system identification and the deterministic best configurations, with the blue shadowed area representing the 95% confidence interval of the stochastic prediction. As observed for the deterministic analysis, no accuracy degradation trend with the prediction time is noted for the Bayesian prediction. The Bayesian expectation is very close to the BestNAMMAE and BestJSD lines, with small uncertainty. This reflects a general robustness of the deterministic predictions in the range of variation of the stochastic parameters, which has been already noted in the similarity between the BestNAMMAE and BestJSD solutions.
A comparison between the best deterministic configurations and the Bayesian system identification in terms of the NRMSE, NAMMAE, and JSD metrics is presented in Figure 23 for $l_{te} = 20\hat{T}$. The Bayesian approach improves the NRMSE compared to the deterministic solutions, while the NAMMAE and JSD results are only slightly degraded.
The effectiveness of the Bayesian Hankel-DMDc ROM as a surrogate model is further assessed by statistically comparing the probability density functions (PDFs) of each variable as obtained from the experimental measurements and from the Bayesian system identification time series. In particular, a moving block bootstrap (MBB) method is employed to define 100 time series for the analysis, following [95]. The PDF of each time series, for both the experimental data and the expected value of the ROM predictions, is obtained using kernel density estimation [96] as follows:
$$\mathrm{PDF}(\mathbf{x}, y) = \frac{1}{Th} \sum_{i=1}^{T} K\!\left(\frac{y - x_i}{h}\right).$$
Here, K is a normal kernel function defined as
$$K(\xi) = \frac{1}{\sqrt{2\pi}} \exp\!\left(-\frac{\xi^2}{2}\right),$$
where $h = \sigma_x T^{-1/5}$ is the bandwidth [97]. In this way, a set of 100 PDFs is obtained for each variable of the system state x, introducing uncertainty in their estimation. Hence, an expected value and a confidence interval are calculated for the PDF of each variable for both the experimental data and the expected value of the ROM predictions. The quantile function q is evaluated at probabilities p = 0.025 and 0.975, defining the width of the 95% confidence interval as $U_{\mathrm{PDF}}(\xi, y) = \mathrm{PDF}(\xi, y)\big|_{q=0.975} - \mathrm{PDF}(\xi, y)\big|_{q=0.025}$. Figure 24 shows, for each variable, the expected value of the PDFs over the bootstrapped series as solid lines, with the 95% confidence interval shown as shaded areas.
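A compact sketch of this assessment is reported below: a generic moving block bootstrap resamples the series, a Gaussian kernel density estimate with bandwidth $h = \sigma_x T^{-1/5}$ is computed on each resample, and the pointwise expected value and 95% band are extracted. The block length, the evaluation grid, and the function names are illustrative choices of this example.

```python
import numpy as np

def gaussian_kde_pdf(x, y_grid):
    """Gaussian-kernel density estimate on y_grid with bandwidth h = sigma * T**(-1/5)."""
    T = x.size
    h = np.std(x) * T ** (-1.0 / 5.0)
    u = (y_grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (T * h * np.sqrt(2.0 * np.pi))

def moving_block_bootstrap(x, block_len, n_series=100, rng=None):
    """Resample a time series by concatenating randomly chosen contiguous blocks."""
    rng = np.random.default_rng(rng)
    n_blocks = int(np.ceil(x.size / block_len))
    out = np.empty((n_series, n_blocks * block_len))
    for i in range(n_series):
        starts = rng.integers(0, x.size - block_len + 1, size=n_blocks)
        out[i] = np.concatenate([x[s:s + block_len] for s in starts])
    return out[:, : x.size]

def pdf_band(x, y_grid, block_len, n_series=100):
    """Pointwise expected value and 95% interval of the PDF over bootstrapped series."""
    pdfs = np.stack([gaussian_kde_pdf(xb, y_grid)
                     for xb in moving_block_bootstrap(x, block_len, n_series)])
    lo, hi = np.quantile(pdfs, [0.025, 0.975], axis=0)
    return pdfs.mean(axis=0), lo, hi   # the band width corresponds to U_PDF = hi - lo
```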
The results show a good overall agreement between the distributions obtained from the ROM and from the experimental measurements for all the system variables. In other words, the statistics of motions and loads emerging from the real data are effectively captured over the bootstrapped histories by the Bayesian Hankel-DMDc ROM. It may be noted that the tails of the measured PDFs, representing large amplitudes, are particularly well matched by the predictions for the motion variables. The largest differences are observed for the derivatives of the pitch and roll angles in the small-value range. The confidence intervals of the Bayesian Hankel-DMDc predictions adequately cover the PDFs of the measured data, indicating that the predictions are accurate and reliable. The statistics of the JSD metric, in terms of the expected value and confidence interval evaluated on the bootstrapped time series for the eleven variables, are reported in Table 5 to quantify the differences between the experimental and DMD distributions in Figure 24. The analysis confirms the similarity between the probability distributions of the Bayesian Hankel-DMDc and real data sequences, as the JSD shows small expected values and uncertainties. The agreement between the experimental and predicted PDFs of the system variables confirms that the ROM identified by the Hankel-DMDc can be effectively applied in unseen scenarios, providing accurate and reliable predictions from the knowledge of the input variables $V_w$ and $h_w$. The statistical accuracy of the system identification is of paramount importance for the application of the method in lifecycle assessment and maintenance planning.
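The JSD values in Table 5 compare two discretized probability distributions on a common support; a standard implementation is sketched below (natural-logarithm convention; the normalization adopted in the paper may differ).

```python
import numpy as np

def jsd(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discretized PDFs on the same grid.

    p and q are nonnegative and renormalized to sum to one; eps avoids
    log-of-zero issues (illustrative sketch).
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```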
The training time for the system identification is investigated as in Section 5.3, on the same mid-end laptop, averaging over 10 realizations using 100 different hyperparameter values in the intervals of the Bayesian analysis. The larger amount of data handled compared to the nowcasting task results in larger matrices and longer times, ranging from $\mu_t = 21.512$ s ($\sigma_t = 1.493$ s) to $\mu_t = 41.116$ s ($\sigma_t = 3.027$ s); this is still computationally much cheaper than high-fidelity simulations or the typical training of deep learning methods, and more data-lean than the latter. The computational cost of the Bayesian system identification depends linearly on the number of Monte Carlo realizations used in the sampling of the hyperparameter combinations, as a different Hankel-DMDc model is evaluated for each combination. However, as mentioned for nowcasting, the task is embarrassingly parallel, and the actual time required to obtain a Bayesian prediction can be kept close to the single deterministic evaluation time by using multiple computational units.

6. Conclusions

To the best of the authors' knowledge, this work represents the first use of DMD and its variants to extract knowledge from, forecast, and identify the system dynamics of a real-life FOWT using experimental field data. The approach also applies, without modification, to other data sources, such as simulations at various fidelity levels.
Modal analysis revealed strong couplings between floater motions, tendons/moorings loads, and wave elevation, demonstrating the method’s ability to capture complex system interactions.
Hankel-DMD successfully demonstrated data-driven, equation-free, and data-lean short-term forecasting (nowcasting) of FOWT motions and loads, achieving accurate predictions up to four wave encounters with a computational cost compatible with real-time execution and the accuracy needed for applications such as model predictive control and digital twinning. By spanning an augmented coordinate system that incorporates time-delayed variables in the system state, Hankel-DMD effectively captures the essential nonlinear features of FOWT dynamics within a linear framework. Uncertainty quantification is introduced through a novel Bayesian extension of the Hankel-DMD, considering the hyperparameters of the method as stochastic variables with prior distributions identified from a full-factorial deterministic analysis. A systematic difficulty was encountered in predicting the turbine extracted power P and the blade rotational speed Ω, which is partially explained by the very weak coupling observed in the modal analysis between these variables and the FOWT motions and loads: this dramatically reduces the amount of data explaining the dynamics of P and Ω that is available to the Hankel-DMD for extracting a meaningful data-driven model. An additional challenge for the DMD-based method is the strongly nonlinear character of the power and rotor speed signals due to the intervention of the turbine control system.
The system identification task is performed by applying the Hankel-DMDc method, which models the FOWT as an externally forced dynamical system, considering wave elevation and wind speed as inputs. This approach yielded data-driven, equation-free reduced-order models capable of extended prediction horizons from small data, provided continuous knowledge of the input variables is available. Such models provide statistically accurate predictions with significant computational efficiency, making them valuable for applications in control systems, life-cycle assessments, and maintenance planning. A novel Bayesian extension is obtained for incorporating uncertainty quantification in the Hankel-DMDc by using the same rationale as for the Bayesian Hankel-DMD.
Despite the challenges posed by noisy data and strong nonlinearities from extreme conditions, the DMD-based methods showed promising results for both nowcasting and system identification. The complexity and variety of operational conditions in the experimental data have been, on the one hand, valuable for testing the DMD-based system identification in a realistic environment; on the other hand, they posed a significant challenge, requiring long training signals and numerous delayed variables.
Future work will explore hybridizing DMD methods with machine learning-based approaches to further improve noise rejection and nonlinear feature capturing while preserving the DMD's inherent characteristics. In addition, a different approach to the system identification task will also be tested for model parametrization and generalization: first applying the Hankel-DMDc to data collected under more specific conditions/excitations (such as single wave headings, specific magnitude ranges, or wind directions) and then using parametric reduced-order model interpolation [98,99]. Enhancements to the experimental setup, such as multiple wave elevation measurements and additional load cells, would also provide more comprehensive data for improved system identification, adding, for example, meaningful information about wave and wind directions.

Author Contributions

G.P.: Methodology, Software, Investigation, Validation, Formal analysis, Writing—Original Draft, Visualization. A.B.: Investigation, Data curation, Resources. A.L.: Investigation, Data curation, Resources. C.P.: Investigation, Data curation, Resources. A.S.: Methodology, Writing—Review and Editing. C.L.: Investigation, Data curation, Resources, Project administration, Funding acquisition. M.D.: Conceptualization, Methodology, Resources, Writing—Review and Editing, Supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported through the “Ricerca di Sistema” project (RSE, 1.8 “Energia elettrica dal mare”) by the Italian Ministry of the Environment and Energy Safety (Ministero dell’Ambiente e della Sicurezza Energetica, MASE), CUP B53C22008560001, and through “Network 4 Energy Sustainable Transition–NEST”, Project code PE0000021, Concession Decree No. 1561 of 11.10.2022, by the Italian Ministry of University and Research (Ministero dell’Università e della Ricerca, MUR), CUP B53C22004060006.

Data Availability Statement

The original data presented in the study are openly available at https://cnrsc-my.sharepoint.com/:f:/g/personal/giorgio_palma_cnr_it/EvegVNzuvytGpPio8gj8IJwBAdSfNIfYJ7dniapE0G98bA?e=AvrcRj accessed on 21 March 2025.

Conflicts of Interest

The authors declare no conflicts of interest.

List of Symbols and Abbreviations

Abbreviations
ADCP	acoustic Doppler current profiler
DMDc	dynamic mode decomposition with control
DMD	dynamic mode decomposition
EMD	empirical mode decomposition
EU	European Union
FOWT	floating offshore wind turbine
GRNN	gated recurrent neural network
JSD	Jensen–Shannon divergence
LCOE	levelized cost of energy
LSTM	long short-term memory
MBB	moving block bootstrap
NAMMAE	normalized average minimum/maximum absolute error
NRMSE	normalized root mean square error
PDF	probability density function
PLC	programmable logic controller
POD	proper orthogonal decomposition
ROM	reduced-order model
SVD	singular value decomposition
Greek symbols
μ_x(t)	time history of the expected value of the components of the vector x(t)
σ_x(t)	time history of the standard deviation of the components of the vector x(t)
Δt	time step for discretization
ϕ̇	floater roll velocity
θ̇	floater pitch velocity
μ(·)	expected value of (·)
Ω	wind turbine rotor angular velocity
ω_k	k-th eigenvalue of A, k = 1, …, n
ϕ	floater roll angle
σ(·)	standard deviation of (·)
θ	floater pitch angle
φ_k	k-th eigenvector of A
Symbols
(·)*	Hermitian transpose operator on (·)
(·)^T	non-conjugate transpose operator on (·)
(·)†	Moore–Penrose pseudoinverse of (·)
(·)_j	j-th time snapshot of (·), j = 1, …, m
u̇	floater acceleration along the x direction
v̇	floater acceleration along the y direction
ẇ	floater acceleration along the z direction
x̂	augmented system state vector
x̂_j	snapshot of the augmented system state vector
f̂	floater wave encounter frequency
T̂	floater wave encounter period, reference period for non-dimensional time
⟨·, ·⟩	inner product
A	discrete-time system state matrix
B	discrete-time system input matrix
S′	matrix collecting delayed system state vector snapshots, excluding the first
S	matrix collecting delayed system state vector snapshots, excluding the last
û	augmented system input vector
u	system input vector
X′	matrix collecting system state vector snapshots, excluding the first
X	matrix collecting system state vector snapshots, excluding the last
x	system state vector
x_0	initial condition of the system state vector
Z	matrix collecting delayed system input vector snapshots, excluding the last
A	continuous-time system matrix
Â	augmented discrete-time system state matrix
B̂	augmented discrete-time system input matrix
b_i	i-th coordinate of the initial condition in the eigenvector basis
h_w	wave elevation
l_du	maximum delay time in the augmented input
l_dx	maximum delay time in the augmented state
l_te	test signal time length
l_tr	training signal time length
n_du	number of maximum delay samples in the augmented input
n_dx	number of maximum delay samples in the augmented state
n_tr	number of samples in the training signal
P	power extracted by the wind turbine
t	time
V_w	relative wind speed at the wind turbine rotor
M_i	load at the i-th mooring of the FOWT platform
T_i	load at the i-th tendon of the floater counterweight

References

  1. Falkner, R. The Paris Agreement and the new logic of international climate politics. Int. Aff. 2016, 92, 1107–1125. [Google Scholar]
  2. Nastasi, B.; Markovska, N.; Puksec, T.; Duić, N.; Foley, A. Renewable and sustainable energy challenges to face for the achievement of Sustainable Development Goals. Renew. Sustain. Energy Rev. 2022, 157, 112071. [Google Scholar] [CrossRef]
  3. International Energy Agency (IEA). Net Zero by 2050; IEA: Paris, France, 2021. [Google Scholar]
  4. The European Parliament and Council. Directive (EU) 2018/2001 of the European Parliament and of the Council of 11 December 2018 on the Promotion of the Use of Energy from Renewable Sources; The European Parliament and Council: Brussels, Belgium, 2018. [Google Scholar]
  5. Prakash, G.; Anuta, H.; Wagner, N.; Gallina, G.; Gielen, D.; Gorini, R. Future of Wind-Deployment, Investment, Technology, Grid Integration and Socio-Economic Aspects; International Renewable Energy Agency (IRENA): Masdar City, Abu Dhabi, 2019. [Google Scholar]
  6. Konstantinidis, E.I.; Botsaris, P.N. Wind turbines: Current status, obstacles, trends and technologies. IOP Conf. Ser. Mater. Sci. Eng. 2016, 161, 012079. [Google Scholar] [CrossRef]
  7. Pustina, L.; Lugni, C.; Bernardini, G.; Serafini, J.; Gennaretti, M. Control of power generated by a floating offshore wind turbine perturbed by sea waves. Renew. Sustain. Energy Rev. 2020, 132, 109984. [Google Scholar] [CrossRef]
  8. Pustina, L.; Serafini, J.; Pasquali, C.; Solero, L.; Lidozzi, A.; Gennaretti, M. A novel resonant controller for sea-induced rotor blade vibratory loads reduction on floating offshore wind turbines. Renew. Sustain. Energy Rev. 2023, 173, 113073. [Google Scholar] [CrossRef]
  9. López-Queija, J.; Robles, E.; Jugo, J.; Alonso-Quesada, S. Review of control technologies for floating offshore wind turbines. Renew. Sustain. Energy Rev. 2022, 167, 112787. [Google Scholar] [CrossRef]
  10. Global Wind Energy Council (GWEC). Global Offshore Wind Report 2020; GWEC: Brussels, Belgium, 2020; Volume 19, pp. 10–12. [Google Scholar]
  11. McMorland, J.; Collu, M.; McMillan, D.; Carroll, J. Operation and maintenance for floating wind turbines: A review. Renew. Sustain. Energy Rev. 2022, 163, 112499. [Google Scholar] [CrossRef]
  12. Florian, M.; Sørensen, J.D. Wind Turbine Blade Life-Time Assessment Model for Preventive Planning of Operation and Maintenance. J. Mar. Sci. Eng. 2015, 3, 1027–1040. [Google Scholar] [CrossRef]
  13. de Azevedo, H.D.M.; Araújo, A.M.; Bouchonneau, N. A review of wind turbine bearing condition monitoring: State of the art and challenges. Renew. Sustain. Energy Rev. 2016, 56, 368–379. [Google Scholar] [CrossRef]
  14. Jonkman, J. Dynamics Modeling and Loads Analysis of an Offshore Floating Wind Turbine; University of Colorado at Boulder: Boulder, CO, USA, 2007. [Google Scholar] [CrossRef]
  15. Si, Y.; Karimi, H.R.; Gao, H. Modelling and optimization of a passive structural control design for a spar-type floating wind turbine. Eng. Struct. 2014, 69, 168–182. [Google Scholar] [CrossRef]
  16. Li, X.; Gao, H. Load Mitigation for a Floating Wind Turbine via Generalized H∞ Structural Control. IEEE Trans. Ind. Electron. 2016, 63, 332–342. [Google Scholar] [CrossRef]
  17. Dong, H.; Edrah, M.; Zhao, X.; Collu, M.; Xu, X.; K A, A.; Lin, Z. Model-Free Semi-Active Structural Control of Floating Wind Turbines. In Proceedings of the 2020 Chinese Automation Congress (CAC), Shanghai, China, 6–8 November 2020; pp. 4216–4220. [Google Scholar] [CrossRef]
  18. Tetteh, E.Y.; Fletcher, K.; Qin, C.; Loth, E.; Damiani, R. Active Ballasting Actuation for the SpiderFLOAT Offshore Wind Turbine Platform. In Proceedings of the AIAA SCITECH 2022 Forum, San Diego, CA, USA, 3–7 January 2022. [Google Scholar] [CrossRef]
  19. Galván, J.; Sánchez-Lara, M.J.; Mendikoa, I.; Pérez-Morán, G.; Nava, V.; Rodríguez-Arias, R. NAUTILUS-DTU10 MW Floating Offshore Wind Turbine at Gulf of Maine: Public numerical models of an actively ballasted semisubmersible. J. Phys. Conf. Ser. 2018, 1102, 012015. [Google Scholar] [CrossRef]
  20. Grant, E.; Johnson, K.; Damiani, R.; Phadnis, M.; Pao, L. Buoyancy can ballast control for increased power generation of a floating offshore wind turbine with a light-weight semi-submersible platform. Appl. Energy 2023, 330, 120287. [Google Scholar] [CrossRef]
  21. Palraj, M.; Rajamanickam, P. Motion control studies of a barge mounted offshore dynamic wind turbine using gyrostabilizer. Ocean. Eng. 2021, 237, 109578. [Google Scholar] [CrossRef]
  22. Salic, T.; Charpentier, J.F.; Benbouzid, M.; Le Boulluec, M. Control Strategies for Floating Offshore Wind Turbine: Challenges and Trends. Electronics 2019, 8, 1185. [Google Scholar] [CrossRef]
  23. Tiwari, R.; Babu, N.R. Recent developments of control strategies for wind energy conversion system. Renew. Sustain. Energy Rev. 2016, 66, 268–285. [Google Scholar] [CrossRef]
  24. Awada, A.; Younes, R.; Ilinca, A. Review of Vibration Control Methods for Wind Turbines. Energies 2021, 14, 3058. [Google Scholar] [CrossRef]
  25. Larsen, T.J.; Hanson, T.D. A method to avoid negative damped low frequent tower vibrations for a floating, pitch controlled wind turbine. J. Phys. Conf. Ser. 2007, 75, 012073. [Google Scholar] [CrossRef]
  26. Jonkman, J. Influence of Control on the Pitch Damping of a Floating Wind Turbine. In Proceedings of the 46th AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, USA, 7–10 January 2008. [Google Scholar] [CrossRef]
  27. Yu, W.; Lemmer, F.; Schlipf, D.; Cheng, P.W.; Visser, B.; Links, H.; Gupta, N.; Dankemann, S.; Counago, B.; Serna, J. Evaluation of control methods for floating offshore wind turbines. J. Phys. Conf. Ser. 2018, 1104, 012033. [Google Scholar] [CrossRef]
  28. Bakka, T.; Karimi, H. Robust H∞ Dynamic Output Feedback Control Synthesis with Pole Placement Constraints for Offshore Wind Turbine Systems. Math. Probl. Eng. 2012, 2012, 616507. [Google Scholar] [CrossRef]
  29. Betti, G.; Farina, M.; Marzorati, A.; Scattolini, R.; Guagliardi, G.A. Modeling and control of a floating wind turbine with spar buoy platform. In Proceedings of the 2012 IEEE International Energy Conference and Exhibition (ENERGYCON), Florence, Italy, 9–12 September 2012; pp. 189–194. [Google Scholar] [CrossRef]
  30. Lemmer (né Sandner), F.; Yu, W.; Schlipf, D.; Cheng, P.W. Robust gain scheduling baseline controller for floating offshore wind turbines. Wind. Energy 2020, 23, 17–30. [Google Scholar] [CrossRef]
  31. Sarkar, S.; Fitzgerald, B.; Basu, B. Individual Blade Pitch Control of Floating Offshore Wind Turbines for Load Mitigation and Power Regulation. IEEE Trans. Control Syst. Technol. 2021, 29, 305–315. [Google Scholar] [CrossRef]
  32. Harris, M.; Hand, M.; Wright, A. Lidar for Turbine Control: March 1, 2005-November 30, 2005; Technical Report; NREL—National Wind Technology Center: Boulder, CO, USA, 2006. [Google Scholar] [CrossRef]
  33. Laks, J.; Pao, L.; Wright, A.; Kelley, N.; Jonkman, B. Blade Pitch Control with Preview Wind Measurements. In Proceedings of the 48th AIAA Aerospace Sciences Meeting Including the New Horizons Forum and Aerospace Exposition, Orlando, FL, USA, 4–7 January 2010; American Institute of Aeronautics and Astronautics: Reston, VA, USA, 2010. Chapter 2010-251. pp. 1–24. [Google Scholar] [CrossRef]
  34. Dunne, F.; Pao, L.; Wright, A.; Jonkman, B.; Kelley, N. Combining Standard Feedback Controllers with Feedforward Blade Pitch Control for Load Mitigation in Wind Turbines. In Proceedings of the 48th AIAA Aerospace Sciences Meeting Including the New Horizons Forum and Aerospace Exposition, Orlando, FL, USA, 4–7 January 2010; American Institute of Aeronautics and Astronautics: Reston, VA, USA, 2010. Chapter AIAA 2010-250. pp. 1–18. [Google Scholar] [CrossRef]
  35. Dunne, F.; Pao, L.; Wright, A.; Jonkman, B.; Kelley, N.; Simley, E. Adding Feedforward Blade Pitch Control for Load Mitigation in Wind Turbines: Non-Causal Series Expansion, Preview Control, and Optimized FIR Filter Methods. In Proceedings of the 49th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, Orlando, FL, USA, 4–7 January 2011; American Institute of Aeronautics and Astronautics: Reston, VA, USA, 2011. Chapter 2011-819. pp. 1–17. [Google Scholar] [CrossRef]
  36. Dunne, F.; Schlipf, D.; Pao, L.; Wright, A.; Jonkman, B.; Kelley, N.; Simley, E. Comparison of Two Independent Lidar-Based Pitch Control Designs. In Proceedings of the 50th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, Nashville, TN, USA, 9–12 January 2012; American Institute of Aeronautics and Astronautics: Reston, VA, USA, 2012. Chapter 2012-1151. pp. 1–19. [Google Scholar] [CrossRef]
  37. Raach, S.; Schlipf, D.; Sandner, F.; Matha, D.; Cheng, P.W. Nonlinear model predictive control of floating wind turbines with individual pitch control. In Proceedings of the 2014 American Control Conference, Portland, OR, USA, 4–6 June 2014; pp. 4434–4439. [Google Scholar] [CrossRef]
  38. Schlipf, D.; Simley, E.; Lemmer, F.; Pao, L.; Cheng, P.W. Collective Pitch Feedforward Control of Floating Wind Turbines Using Lidar. J. Ocean. Wind. Energy 2015, 2. [Google Scholar] [CrossRef]
  39. Ma, Y.; Sclavounos, P.D.; Cross-Whiter, J.; Arora, D. Wave forecast and its application to the optimal control of offshore floating wind turbine for load mitigation. Renew. Energy 2018, 128, 163–176. [Google Scholar] [CrossRef]
  40. Simley, E.; Bortolotti, P.; Scholbrock, A.; Schlipf, D.; Dykes, K. IEA Wind Task 32 and Task 37: Optimizing Wind Turbines with Lidar-Assisted Control Using Systems Engineering. J. Phys. Conf. Ser. 2020, 1618, 042029. [Google Scholar] [CrossRef]
  41. Fontanella, A.; Al, M.; van Wingerden, J.W.; Belloli, M. Model-based design of a wave-feedforward control strategy in floating wind turbines. Wind. Energy Sci. 2021, 6, 885–901. [Google Scholar] [CrossRef]
  42. Al, M.; Fontanella, A.; van der Hoek, D.; Liu, Y.; Belloli, M.; van Wingerden, J.W. Feedforward control for wave disturbance rejection on floating offshore wind turbines. J. Phys. Conf. Ser. 2020, 1618, 022048. [Google Scholar] [CrossRef]
  43. Otter, A.; Murphy, J.; Pakrashi, V.; Robertson, A.; Desmond, C. A review of modelling techniques for floating offshore wind turbines. Wind. Energy 2022, 25, 831–857. [Google Scholar] [CrossRef]
  44. Høeg, C.E.; Zhang, Z. A semi-analytical hydrodynamic model for floating offshore wind turbines (FOWT) with application to a FOWT heave plate tuned mass damper. Ocean. Eng. 2023, 287, 115756. [Google Scholar] [CrossRef]
  45. Wang, K.; Gaidai, O.; Wang, F.; Xu, X.; Zhang, T.; Deng, H. Artificial Neural Network-Based Prediction of the Extreme Response of Floating Offshore Wind Turbines under Operating Conditions. J. Mar. Sci. Eng. 2023, 11, 1807. [Google Scholar] [CrossRef]
  46. Zhang, Y.; Yang, X.; Liu, S. Data-driven predictive control for floating offshore wind turbines based on deep learning and multi-objective optimization. Ocean. Eng. 2022, 266, 112820. [Google Scholar] [CrossRef]
  47. Barooni, M.; Velioglu Sogut, D. Forecasting Pitch Response of Floating Offshore Wind Turbines with a Deep Learning Model. Clean Technol. 2024, 6, 418–431. [Google Scholar] [CrossRef]
  48. Gräfe, M.; Pettas, V.; Cheng, P.W. Data-driven forecasting of FOWT dynamics and load time series using lidar inflow measurements. J. Phys. Conf. Ser. 2024, 2767, 032025. [Google Scholar] [CrossRef]
  49. Deng, S.; Ning, D.; Mayon, R. The motion forecasting study of floating offshore wind turbine using self-attention long short-term memory method. Ocean. Eng. 2024, 310, 118709. [Google Scholar] [CrossRef]
  50. Ye, Y.; Wang, L.; Wang, Y.; Qin, L. An EMD-LSTM-SVR model for the short-term roll and sway predictions of semi-submersible. Ocean. Eng. 2022, 256, 111460. [Google Scholar] [CrossRef]
  51. Song, B.; Zhou, Q.; Chang, R. Short-term motion prediction of FOWT based on time-frequency feature fusion LSTM combined with signal decomposition methods. Ocean. Eng. 2025, 317, 120046. [Google Scholar] [CrossRef]
  52. Mainini, L.; Diez, M. Digital Twins and their Mathematical Souls. In Proceedings of the STO-MP-AVT 369 Research Symposium on Digital Twin Technology Development and Application for Tri-Service Platforms and Systems, NATO Science and Technology Organization, Båstad, Sweden, 10–12 October 2023. [Google Scholar]
  53. Schmid, P.J. Dynamic mode decomposition of numerical and experimental data. J. Fluid Mech. 2010, 656, 5–28. [Google Scholar] [CrossRef]
  54. Kutz, J.; Brunton, S.; Brunton, B.; Proctor, J. Dynamic Mode Decomposition: Data-Driven Modeling of Complex Systems; SIAM—Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2016. [Google Scholar]
  55. Mezić, I. Koopman Operator, Geometry, and Learning of Dynamical Systems. Not. Am. Math. Soc. 2021, 68, 1087–1105. [Google Scholar]
  56. Arbabi, H.; Mezić, I. Ergodic Theory, Dynamic Mode Decomposition, and Computation of Spectral Properties of the Koopman Operator. SIAM J. Appl. Dyn. Syst. 2017, 16, 2096–2126. [Google Scholar] [CrossRef]
  57. Brunton, S.L.; Brunton, B.W.; Proctor, J.L.; Kaiser, E.; Kutz, J.N. Chaos as an intermittently forced linear system. Nat. Commun. 2017, 8, 19. [Google Scholar] [CrossRef]
  58. Kamb, M.; Kaiser, E.; Brunton, S.L.; Kutz, J.N. Time-Delay Observables for Koopman: Theory and Applications. SIAM J. Appl. Dyn. Syst. 2020, 19, 886–917. [Google Scholar] [CrossRef]
  59. Serani, A.; Dragone, P.; Stern, F.; Diez, M. On the use of dynamic mode decomposition for time-series forecasting of ships operating in waves. Ocean. Eng. 2023, 267, 113235. [Google Scholar] [CrossRef]
  60. Proctor, J.L.; Brunton, S.L.; Kutz, J.N. Dynamic mode decomposition with control. SIAM J. Appl. Dyn. Syst. 2016, 15, 142–161. [Google Scholar]
  61. Proctor, J.L.; Brunton, S.L.; Kutz, J.N. Generalizing Koopman Theory to Allow for Inputs and Control. SIAM J. Appl. Dyn. Syst. 2018, 17, 909–930. [Google Scholar]
  62. Brunton, S.L.; Budišić, M.; Kaiser, E.; Kutz, J.N. Modern Koopman Theory for Dynamical Systems. SIAM Rev. 2022, 64, 229–340. [Google Scholar] [CrossRef]
  63. Zawacki, C.C.; Abed, E.H. Dynamic Mode Decomposition for Control Systems with Input Delays. IFAC-PapersOnLine 2023, 56, 97–102. [Google Scholar] [CrossRef]
  64. Dylewsky, D.; Barajas-Solano, D.; Ma, T.; Tartakovsky, A.M.; Kutz, J.N. Stochastically Forced Ensemble Dynamic Mode Decomposition for Forecasting and Analysis of Near-Periodic Systems. IEEE Access 2022, 10, 33440–33448. [Google Scholar] [CrossRef]
  65. Mohan, N.; Soman, K.; Kumar, S.S. A data-driven strategy for short-term electric load forecasting using dynamic mode decomposition model. Appl. Energy 2018, 232, 229–244. [Google Scholar] [CrossRef]
  66. Al-Jiboory, A.K. Adaptive quadrotor control using online dynamic mode decomposition. Eur. J. Control 2024, 80, 101117. [Google Scholar] [CrossRef]
  67. Dawson, S.T.; Schiavone, N.K.; Rowley, C.W.; Williams, D.R. A Data-Driven Modeling Framework for Predicting Forces and Pressures on a Rapidly Pitching Airfoil. In Proceedings of the 45th AIAA Fluid Dynamics Conference, Dallas, TX, USA, 22–26 June 2015; American Institute of Aeronautics and Astronautics: Reston, VA, USA, 2015. Chapter AIAA 2015-2767. pp. 1–14. [Google Scholar] [CrossRef]
  68. Rowley, C.W.; Mezić, I.; Bagheri, S.; Schlatter, P.; Hennigson, D.S. Spectral analysis of nonlinear flows. J. Fluid Mech. 2009, 641, 115–127. [Google Scholar] [CrossRef]
  69. Tang, Z.; Jiang, N. Dynamic mode decomposition of hairpin vortices generated by a hemisphere protuberance. Sci. China Phys. Mech. Astron. 2012, 55, 118–124. [Google Scholar] [CrossRef]
  70. Semeraro, O.; Bellani, G.; Lundell, F. Analysis of time-resolved PIV measurements of a confined turbulent jet using POD and Koopman modes. Exp. Fluids 2012, 53, 1203–1220. [Google Scholar] [CrossRef]
  71. Song, G.; Alizard, F.; Robinet, J.C.; Gloerfelt, X. Global and Koopman modes analysis of sound generation in mixing layers. Phys. Fluids 2013, 25, 124101. [Google Scholar] [CrossRef]
  72. Proctor, J.L.; Eckhoff, P.A. Discovering dynamic patterns from infectious disease data using dynamic mode decomposition. Int. Health 2015, 7, 139–145. [Google Scholar] [CrossRef]
  73. Brunton, B.W.; Johnson, L.A.; Ojemann, J.G.; Kutz, J.N. Extracting spatial—Temporal coherent patterns in large-scale neural recordings using dynamic mode decomposition. J. Neurosci. Methods 2016, 258, 1–15. [Google Scholar] [CrossRef]
  74. Mann, J.; Kutz, J.N. Dynamic mode decomposition for financial trading strategies. Quant. Financ. 2016, 16, 1643–1655. [Google Scholar] [CrossRef]
  75. Diez, M.; Serani, A.; Campana, E.F.; Stern, F. Time-series forecasting of ships maneuvering in waves via dynamic mode decomposition. J. Ocean. Eng. Mar. Energy 2022, 8, 471–478. [Google Scholar] [CrossRef]
  76. Diez, M.; Gaggero, M.; Serani, A. Data-driven forecasting of ship motions in waves using machine learning and dynamic mode decomposition. Int. J. Adapt. Control Signal Process. 2024. [Google Scholar] [CrossRef]
  77. Diez, M.; Serani, A.; Gaggero, M.; Campana, E.F. Improving Knowledge and Forecasting of Ship Performance in Waves via Hybrid Machine Learning Methods. In Proceedings of the 34th Symposium on Naval Hydrodynamics, Washington, DC, USA, 26 June–1 July 2022. [Google Scholar]
  78. Rialti, M.; Cimatti, G. Aerogeneratore Mod. TN535 Descrizione Generale; Tozzi Green: Mezzano, Italy, 2016. [Google Scholar]
  79. Copernicus Marine Service. Mediterranean Sea Significant Wave Height extreme from Reanalysis; Copernicus Marine Service: Ramonville-Saint-Agne, France, 2021. [Google Scholar] [CrossRef]
  80. Schmid, P.; Sesterhenn, J. Dynamic Mode Decomposition of Numerical and Experimental Data. In Sixty-First Annual Meeting of the APS Division of Fluid Dynamics, San Antonio, TX, USA, 23–25 November 2008; Cambridge University Press: San Antonio, TX, USA, 2008. [Google Scholar]
  81. Koopman, B.O. Hamiltonian Systems and Transformation in Hilbert Space. Proc. Natl. Acad. Sci. USA 1931, 17, 315–318. [Google Scholar] [CrossRef]
  82. Tu, J.H.; Rowley, C.W.; Luchtenburg, D.M.; Brunton, S.L.; Kutz, J.N. On dynamic mode decomposition: Theory and applications. J. Comput. Dyn. 2014, 1, 391–421. [Google Scholar] [CrossRef]
  83. Marusic, I. Dynamic mode decomposition for analysis of time-series data. J. Fluid Mech. 2024, 1000, F7. [Google Scholar] [CrossRef]
  84. Mezić, I. On Numerical Approximations of the Koopman Operator. Mathematics 2022, 10, 1180. [Google Scholar] [CrossRef]
  85. Otto, S.E.; Rowley, C.W. Linearly Recurrent Autoencoder Networks for Learning Dynamics. SIAM J. Appl. Dyn. Syst. 2019, 18, 558–593. [Google Scholar] [CrossRef]
  86. Takeishi, N.; Kawahara, Y.; Yairi, T. Learning Koopman invariant subspaces for dynamic mode decomposition. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Red Hook, NY, USA, 4–9 December 2017; NIPS’17. pp. 1130–1140. [Google Scholar]
  87. Lusch, B.; Kutz, J.N.; Brunton, S.L. Deep learning for universal linear embeddings of nonlinear dynamics. Nat. Commun. 2018, 9, 4950. [Google Scholar] [CrossRef] [PubMed]
  88. Brunton, S.L.; Brunton, B.W.; Proctor, J.L.; Kutz, J.N. Koopman Invariant Subspaces and Finite Linear Representations of Nonlinear Dynamical Systems for Control. PLoS ONE 2016, 11, e0150171. [Google Scholar] [CrossRef]
  89. Pan, S.; Duraisamy, K. On the structure of time-delay embedding in linear models of non-linear dynamical systems. Chaos Interdiscip. J. Nonlinear Sci. 2020, 30, 073135. [Google Scholar] [CrossRef]
  90. Marlantes, K.E.; Bandyk, P.J.; Maki, K.J. Predicting ship responses in different seaways using a generalizable force correcting machine learning method. Ocean. Eng. 2024, 312, 119110. [Google Scholar] [CrossRef]
  91. Lin, J. Divergence measures based on the Shannon entropy. IEEE Trans. Inf. Theory 1991, 37, 145–151. [Google Scholar] [CrossRef]
  92. Kullback, S.; Leibler, R.A. On Information and Sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar]
  93. Serani, A.; Diez, M.; Aram, S.; Wundrow, D.; Drazen, D.; McTaggart, K. Model Order Reduction of 5415M in Irregular Waves via Dynamic Mode Decomposition: Computational Models’ Diagnostics, Forecasting, and System Identification Capabilities. In Proceedings of the 35th Symposium on Naval Hydrodynamics, Nantes, France, 8–12 July 2024. [Google Scholar]
  94. Rains, J.; Wang, Y.; House, A.; Kaminsky, A.L.; Tison, N.A.; Korivi, V.M. Constrained optimized dynamic mode decomposition with control for physically stable systems with exogeneous inputs. J. Comput. Phys. 2024, 496, 112604. [Google Scholar] [CrossRef]
  95. Serani, A.; Diez, M.; van Walree, F.; Stern, F. URANS analysis of a free-running destroyer sailing in irregular stern-quartering waves at sea state 7. Ocean. Eng. 2021, 237, 109600. [Google Scholar] [CrossRef]
  96. Miecznikowski, J.C.; Wang, D.; Hutson, A. Bootstrap MISE Estimators to Obtain Bandwidth for Kernel Density Estimation. Commun. Stat. Simul. Comput. 2010, 39, 1455–1469. [Google Scholar] [CrossRef]
  97. Silverman, B.W. Density Estimation for Statistics and Data Analysis; Routledge: London, UK, 2018. [Google Scholar] [CrossRef]
  98. Amsallem, D.; Farhat, C. Interpolation Method for Adapting Reduced-Order Models and Application to Aeroelasticity. AIAA J. 2008, 46, 1803–1813. [Google Scholar] [CrossRef]
  99. Amsallem, D.; Farhat, C. An Online Method for Interpolating Linear Parametric Reduced-Order Models. SIAM J. Sci. Comput. 2011, 33, 2169–2198. [Google Scholar] [CrossRef]
Figure 1. (a) Aerial view of In-MaRELab with the Hexafloat FOWT during the tests at sea in 2021. (b) View of the Hexafloat FOWT during the towing stage from the shipyard to the test site in 2024.
Figure 2. Tozzi Nord TN535 10 kW wind turbine power curve [78].
Figure 3. Sketch of the Hankel-DMD modeling approach for short-term forecasting (nowcasting).
Figure 4. Sketch of the Hankel-DMD modeling approach for system identification.
Figure 5. Hankel-DMD for nowcasting (a) and Hankel-DMDc for system identification (b) learning–prediction–assessment flowchart.
Figure 6. DMD complex modal frequencies (a), modal participation spectrum (b) and cumulative values (c), and first modes shapes (d).
Figure 7. Hankel-DMD boxplot of error metrics over the validation set for tested l t r and l d x , l t e = 4 T ^ (∼30 s). Diamonds indicate the average value of the respective configuration.
Figure 8. Standardized time series nowcasting by deterministic (hyperparameters for best average metrics) and Bayesian Hankel-DMD. Selected sequence 1.
Figure 9. Standardized time series nowcasting by deterministic (hyperparameters for best average metrics) and Bayesian Hankel-DMD. Selected sequence 2.
Figure 10. Standardized time series nowcasting by deterministic (hyperparameters for best average metrics) and Bayesian Hankel-DMD. Selected sequence 3.
Figure 11. Standardized time series nowcasting by deterministic (hyperparameters for best average metrics) and Bayesian Hankel-DMD. Selected sequence 4.
Figure 12. Error metrics comparison, Hankel-DMD vs. Bayesian Hankel-DMD nowcasting, l t e = 4 T ^ (∼30 s).
Figure 13. Hankel-DMDc boxplot of error metrics over the validation set for tested l t r and l d x , with l d u / T ^ = 5 . Diamonds indicate the average value of the respective configuration. l t e = 20 T ^ (∼146 s).
Figure 14. Hankel-DMDc boxplot of error metrics over the validation set for tested l t r and l d x , with l d u / T ^ = 10 . Diamonds indicate the average value of the respective configuration. l t e = 20 T ^ (∼146 s).
Figure 15. Hankel-DMDc boxplot of error metrics over the validation set for tested l t r and l d x , with l d u / T ^ = 20 . Diamonds indicate the average value of the respective configuration. l t e = 20 T ^ (∼146 s).
Figure 16. Hankel-DMDc boxplot of error metrics over the validation set for tested l t r and l d x , with l d u / T ^ = 30 . Diamonds indicate the average value of the respective configuration. l t e = 20 T ^ (∼146 s).
Figure 17. Hankel-DMDc boxplot of error metrics over the validation set for tested l t r and l d x , with l d u / T ^ = 40 . Diamonds indicate the average value of the respective configuration. l t e = 20 T ^ (∼146 s).
Figure 18. Hankel-DMDc boxplot of error metrics over the validation set for tested l t r and l d x , with l d u / T ^ = 50 . Diamonds indicate the average value of the respective configuration. l t e = 20 T ^ (∼146 s).
Figure 19. Standardized time series prediction by deterministic (hyperparameters for best average metrics) and Bayesian Hankel-DMDc. Selected sequence 1.
Figure 20. Standardized time series prediction by deterministic (hyperparameters for best average metrics) and Bayesian Hankel-DMDc. Selected sequence 2.
Figure 21. Standardized time series prediction by deterministic (hyperparameters for best average metrics) and Bayesian Hankel-DMDc. Selected sequence 3.
Figure 22. Standardized time series prediction by deterministic (hyperparameters for best average metrics) and Bayesian Hankel-DMDc. Selected sequence 4.
Figure 23. Error metrics comparison, Hankel-DMDc vs. Bayesian Hankel-DMDc system identification, l t e = 20 T ^ (∼146 s).
Figure 24. Probability density function comparison between measured data and the expected value of the Bayesian Hankel-DMDc prediction on bootstrapped sequences for each variable. Shaded areas indicate the 95% confidence interval of the two PDFs.
Table 1. List of the hyperparameter settings tested for Hankel-DMD nowcasting.
l_tr [-]:   T̂, 2 T̂, 4 T̂, 8 T̂, 16 T̂
l_dx [-]:   0.5 T̂, T̂, 2 T̂, 4 T̂, 8 T̂, 16 T̂
n_tr [-]:   16, 32, 64, 128, 256, 512
n_dx [-]:   16, 32, 64, 128, 256, 512
Table 2. List of the hyperparameter settings tested for Hankel-DMDc system identification algorithm.
l_tr [-]:   50 T̂, 75 T̂, 100 T̂, 125 T̂, 150 T̂, 175 T̂, 200 T̂
l_dx [-]:   0.5 T̂, T̂, 2 T̂, 3 T̂, 4 T̂, 5 T̂
l_du [-]:   5 T̂, 10 T̂, 20 T̂, 30 T̂, 40 T̂, 50 T̂
n_tr [-]:   1600, 2400, 3200, 4000, 4800, 5600, 6400
n_dx [-]:   16, 32, 64, 96, 128, 160
n_du [-]:   160, 320, 640, 960, 1280, 1600
Table 3. Summary of the best hyperparameter configurations for nowcasting as identified by the deterministic design of experiment and Bayesian setup, where l t e = 4 T ^ .
              NRMSE (avg)   NAMMAE (avg)   JSD (avg)   l_tr         l_dx
Best NRMSE    0.124         0.026          0.200       2 T̂          16 T̂
Best NAMMAE   0.159         0.015          0.077       4 T̂          T̂
Best JSD      0.168         0.015          0.070       8 T̂          2 T̂
Bayesian      0.148         0.015          0.075       [4–16] T̂     l_tr/4
Table 4. Summary of the best hyperparameter configurations for system identification as identified by the deterministic design of experiment, l t e = 20 T ^ , and Bayesian setup.
              NRMSE (avg)   NAMMAE (avg)   JSD (avg)   l_tr            l_dx        l_du
Best NRMSE    0.118         0.022          0.073       200 T̂           2 T̂         5 T̂
Best NAMMAE   0.149         0.010          0.018       200 T̂           3 T̂         40 T̂
Best JSD      0.149         0.011          0.017       175 T̂           4 T̂         30 T̂
Bayesian      0.131         0.011          0.021       [175–200] T̂     [2–4] T̂     [20–40] T̂
Table 5. Expected value and 95% confidence lower bound, upper bound, and interval of JSD of bootstrapped time series.
ξ      JSD(ξ) EV   q = 0.025   q = 0.975   U
T_1    0.0039      0.0018      0.0072      0.0054
T_5    0.0021      0.0009      0.0043      0.0034
T_6    0.0042      0.0011      0.0107      0.0096
M_3    0.0218      0.0100      0.0376      0.0276
ϕ      0.0021      0.0009      0.0041      0.0032
θ      0.0037      0.0009      0.0086      0.0077
ϕ̇      0.0040      0.0011      0.0091      0.0080
θ̇      0.0021      0.0009      0.0040      0.0032
u̇      0.0068      0.0036      0.0110      0.0074
v̇      0.0079      0.0033      0.0176      0.0144
ẇ      0.0047      0.0013      0.0113      0.0100
avg    0.0058      0.0024      0.0114      0.0091
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

