Blood Pressure Morphology Assessment from Photoplethysmogram and Demographic Information Using Deep Learning with Attention Mechanism

Arterial blood pressure (ABP) is an important vital sign from which valuable information about a subject's health can be extracted. By studying its morphology, it is possible to diagnose cardiovascular diseases such as hypertension, so routine ABP control is recommended. The most common method of controlling ABP is the cuff-based method, which yields only the systolic and diastolic blood pressure (SBP and DBP, respectively). This paper proposes a cuff-free method to estimate the morphology of the average ABP pulse (ABPM¯) through a deep learning model based on a seq2seq architecture with an attention mechanism. It needs only raw photoplethysmogram (PPG) signals from the finger and can integrate both categorical and continuous demographic information (DI). The experiments were performed on more than 1100 subjects from the MIMIC database, for whom age and gender were retrieved. Without allowing data from the same subjects to be used for both training and testing, the mean absolute errors (MAE) were 6.57 ± 0.20 and 14.39 ± 0.42 mmHg for DBP and SBP, respectively. For ABPM¯, the correlation coefficient R and the MAE were 0.98 ± 0.001 and 8.89 ± 0.10 mmHg, respectively. In summary, this methodology is capable of transforming a PPG signal into an ABP pulse, obtains better results when the subjects' DI is used, and is potentially useful at a time when wireless devices are becoming increasingly popular.


Introduction
Cardiovascular diseases (CVDs) remain the most common cause of morbidity and mortality worldwide [1]. One of their main risk factors, affecting at least 1.3 billion people, is high blood pressure (BP) or hypertension [2]. Unfortunately, most of the population is not aware of suffering from a CVD until an event such as an arrhythmia, heart attack, or stroke occurs. In this context, regular BP monitoring becomes an essential strategy for prevention, detection, and health control.
Methods for measuring BP are divided into noninvasive and invasive. The traditional noninvasive method involves the sphygmomanometry technique. In general, the measurement is carried out by a physician or other member of a clinical staff, and the subject to be measured rests for a few minutes in order to stabilize their BP. As it depends on an inflatable cuff, it does not serve as a continuous measurement method, because only two values are obtained: diastolic BP (DBP) and systolic BP (SBP). Invasive methods are performed by inserting intravascular catheters with pressure transducers. They have the disadvantage of exposing the subject to bleeding and infections. The advantage is access to the continuous arterial BP (ABP) morphology, the gold standard for monitoring BP. Additionally, a noninvasive practice to estimate the ABP is the tonometry technique in combination with the cuff sphygmomanometer: tonometry provides the estimation of the waveform, and the cuff sphygmomanometer provides calibration values [3].
ABP morphology (ABPM) is defined by the mechanical interaction between the blood flow, originating in the heart, and the arteries. The DBP corresponds to the minimum BP value and is related to the opening of the aortic valve for blood ejection. The SBP is defined as the maximum pressure value exerted by the left ventricle in the cardiac cycle. It is the result of the interaction between the blood ejected into the arterial tree and the reflected waves [4]. The dicrotic notch (DN) represents the closure of the aortic valve and is used to calculate the duration of the ejection period and the beginning of the diastolic phase. ABPM can suffer local alterations, such as those induced by the respiratory rhythm or specific vascular test maneuvers. On the other hand, permanent alterations can be observed as a result of advanced age or the appearance of vascular pathologies such as arterial stiffness [5]. In addition, ABPM changes according to the site of the arterial tree at which it is measured. However, if both the waveform and the calibration values are known, it is possible to use generalized transfer functions to estimate the ABPM at another site [6]. Furthermore, it is known that ABPM may be more predictive of cardiovascular events than cuff-pressure values alone [7][8][9], may warn of CVDs such as diastolic dysfunction [10], and could be a valuable measure of the response to the treatment of obstructive sleep apnea [11]. In this sense, through the analysis of ABPM it is possible to derive many features related to the health of the cardiovascular system [3]. Some of them correspond specifically to ABP values and others to temporal occurrences.
An important temporal feature introduced in [12] and studied in more depth in Mukkamala et al. [13] is the pulse transit time (PTT). It is defined as the time between the beginning of a pulse originating in the heart and its arrival at a specific point on the periphery of the arterial tree. PTT is related to the arterial pulse wave velocity (PWV) through the Moens-Korteweg equation:

PWV = √(Eh / (2ρr))        (1)

in which E is the elastic modulus, h is the arterial wall thickness, ρ is the blood density, and r is the radius of the vessel. PWV can in turn be related to ABP by the Hughes equation [14]:

E = E_0 e^(αP)        (2)

where both α > 0 and E_0 are subject-specific constants: E_0 corresponds to the zero-pressure elastic modulus of the vessel wall and P refers to BP. PTT can also be defined as the difference between the pulse arrival time (PAT) and the pre-ejection period (PEP). In this sense, PAT can be assessed as the time delay between the electrocardiogram's (ECG) R-peak and the BP pulse onset. However, PAT is not expressed in Equation (1) and cannot be related directly to BP. Furthermore, it has been shown that PEP represents a significant and variable proportion of PTT, from 10% to 30% [15]. Nevertheless, PAT is widely used by researchers as a good approximation of PTT, mainly due to the ease of its measurement [13]. Following this approach, in recent years there has been an increase in the number of publications regarding the estimation of BP values in a noninvasive and real-time way, also called "cuff-less calibration". In this context, the finger photoplethysmography (PPG) signal, due to its similarities with BP in the time and frequency domains [16], emerges as an interesting measurement for estimating BP. The PPG sensor is an optical device that measures the change of blood volume in the vessels. Its advantages are low cost, simplicity, and portability, all very attractive characteristics for wearable devices [17].
Its disadvantages are the sensitivity to noise and to artifacts caused by subject movements; therefore, signal processing must generally be applied.
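Equations (1) and (2) can be combined into a small numeric sketch of why PWV (and hence PTT) tracks BP. All constants below (E_0, α, wall thickness, radius, blood density) are illustrative assumptions, not values from this work:

```python
import math

def pwv_moens_korteweg(E, h, rho, r):
    """Moens-Korteweg relation: PWV = sqrt(E * h / (2 * rho * r))."""
    return math.sqrt(E * h / (2.0 * rho * r))

def elastic_modulus_hughes(E0, alpha, P):
    """Hughes relation: E = E0 * exp(alpha * P), with P in mmHg."""
    return E0 * math.exp(alpha * P)

def pwv_at_pressure(P, E0=1.0e5, alpha=0.017, h=1.0e-3, rho=1060.0, r=4.0e-3):
    # Illustrative constants: E0 [Pa], wall thickness h [m],
    # blood density rho [kg/m^3], vessel radius r [m].
    return pwv_moens_korteweg(elastic_modulus_hughes(E0, alpha, P), h, rho, r)
```

Higher BP stiffens the arterial wall (larger E), so PWV rises and the transit time over a fixed arterial path falls; this monotonic link is what PTT-based methods exploit.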
Unfortunately, the PTT approach to estimating BP cannot be applied directly to the PPG signal morphology. To deal with this issue, different machine learning approaches such as linear regression [18], AdaBoost [19], classical fully-connected neural networks (NN) [20], and Gaussian process regression (GPR) [21] were proposed to model the subject-specific relation between PPG and BP. These techniques focused on feature extraction from the PPG and ECG. In particular, in Monte-Moreno [22] and Ruiz-Rodríguez et al. [23], Random Forest (RF) and Deep Belief Network-Restricted Boltzmann Machine techniques, respectively, were applied to estimate SBP and DBP from features extracted from the PPG. In contrast, with the disruption of deep learning techniques, the feature extraction step could be delegated to the NN. In Eom et al. [24], raw ECG, PPG, and ballistocardiogram (BCG) signals were used in a combined convolutional NN (CNN) and recurrent NN (RNN) model. Furthermore, some studies proposed working only with the PPG time series [25] and its derivatives [26]. In Liang et al. [25], a pretrained CNN was used to classify three levels of hypertension based on the scalogram of the PPG, and in Slapničar et al. [26], a spectro-temporal ResNet model was proposed to estimate the DBP and SBP values using the PPG in conjunction with its first and second derivatives (PPG′ and PPG″, respectively). The latter was also a combination of CNN and RNN with gated recurrent units (GRU). Notably, to the best of our knowledge, few studies address the hard task of directly estimating the continuous ABP. In Sideris et al. [27] and Sadrawi et al. [28], an RNN model with long short-term memory (LSTM) units and a deep convolutional auto-encoder (DCAE) model, respectively, were proposed to transfer signals from PPG to ABP. General surveys on existing and emerging approaches in this field can be found in Hosanee et al. [29] and El-Hajj and Kyriacou [30].
In this context, and considering the recommendations from Elgendi et al. [17], the collaborative spirit of Slapničar et al. [26], and the requirements for working with the MIMIC-III Matched Waveform Database (MWDB) and MIMIC-III Clinical Database (CDB) [31], the contributions of our work can be summarized as follows. The DI used cannot be shared due to the requirements from [31], but the code to extract it, once a request for access to the MIMIC-III CDB has been accepted, is also made available. Figure 1 shows a block diagram of the proposed methodology. Data for this work come from two publicly available databases: the MIMIC-III MWDB and the MIMIC-III CDB. The first contains over 20,000 waveform records digitized at 125 Hz from more than 10,000 distinct patients in intensive care units, and the second includes information such as demographics, laboratory and microbiology test results, cardiology and radiology reports, and diagnostics. In the preprocessing stage, records from the MIMIC-III MWDB with invasive ABP and fingertip PPG signals are selected, and then the corresponding subjects' age and gender are obtained from the MIMIC-III CDB. In the processing stage, only segments with enough signal quality (SQ) are kept, and each morphology of the average ABP pulse (ABPM¯) is computed. The deep learning stage comprises the model architecture, the hyperparameter settings, and the training phase used to obtain the estimated ABPM¯ (ABPM̂) from the PPG signal. Finally, the values and time occurrences of ABPM̂ are evaluated. Each stage is detailed in the following subsections.

Preprocessing
The IDs of records with a minimum duration of 15 min and with both ABP and PPG signals were preserved. A 10 min interval was defined to consider the subject in a rest condition, and 5 min was defined as the gap between different segments of a record. In this work, only the age and gender were extracted from the MIMIC-III CDB. The age range of analysis was set between 18 and 89 years.

Processing
The processing stage is summarized in Figure 2a. Part of this section was inspired by the released code from Slapničar et al. [26]. Each record was loaded with the WFDB Toolbox [32] for Matlab, and two 15-s segments spaced 5 min apart were analyzed (Figure 2b). Each time a segment was rejected for not meeting the requirements described below, the analysis advanced by one minute before two new segments were evaluated. If a record was never able to meet the criteria, it was excluded from the analysis and the next record was evaluated. If the criteria were met, a structured file containing both the raw segments and the processed pulses was generated.
The main steps were called Flat, Peak, PPG-SQ, and ABP-SQ. Several thresholds were set to ensure the quality of each segment and pulse, at least as strict as those from Slapničar et al. [26]. In particular, additional thresholds were added for PPG-SQ and ABP-SQ. Pulse durations were limited to the range [0.5, 1.5] s, considering normal physiological limits at rest. The number of pulses per analyzed segment was limited to [10, 30]. To be accepted, the difference SBP − DBP had to be higher than 10 mmHg, and the moment coefficient of skewness [33] had to be greater than zero.
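The per-pulse acceptance criteria above can be sketched as a small quality gate. The thresholds come from the text; the sampling rate default and function names are our assumptions:

```python
import numpy as np

def moment_skewness(x):
    """Moment coefficient of skewness: E[(x - mu)^3] / sigma^3."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    return ((x - mu) ** 3).mean() / sigma ** 3

def accept_abp_pulse(pulse, fs=125.0):
    """Quality checks for a single ABP pulse, as described above."""
    pulse = np.asarray(pulse, dtype=float)
    duration = len(pulse) / fs
    if not (0.5 <= duration <= 1.5):       # physiological limits at rest
        return False
    if pulse.max() - pulse.min() <= 10.0:  # SBP - DBP must exceed 10 mmHg
        return False
    return moment_skewness(pulse) > 0.0    # positive skewness required
```

A segment would then be kept only if the number of pulses passing this gate falls within the [10, 30] range mentioned above.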
In more detail, Flat and Peak detect null data and saturated points in the valleys and peaks of the signals, respectively. Then, a Butterworth filter with cutoff frequencies [0.5, 8] Hz and MinMax normalization were applied only to the PPG segment. A pulse-by-pulse analysis was performed with the marker proposed in Li et al. [34]. It is important to clarify that PPG-SQ corresponds to part of the feature extraction step in Slapničar et al. [26], but in this work it was used only for signal quality assessment. Once PPG-SQ succeeded, the raw PPG segment was saved. The ABPM¯ was calculated in the ABP-SQ step. The ABP pulses were synchronized with respect to their onsets. For each time-step t = i∆t, the mean (µ_i) and standard deviation (σ_i) were calculated. Finally, ABPM¯ was computed using only the points in the range µ_i ± 1.25σ_i, as shown in Figure 2c. Regarding the deep learning stage explained below, the class of each point, related to the different cardiac cycle stages, was defined as one of a set of intervals. Once all the records were processed, after a visual inspection, ABP and pulse duration were limited to 180 mmHg and 1.2 s, respectively. In addition, only pulses with skewness greater than 0.2 were accepted. At this point, there were 10,696 segments corresponding to 1131 subjects, where 169 subjects accounted for more than 50% of the segments. To reduce subject bias, the number of segments per subject was limited to 10. Finally, there were 6478 segments, where 333 subjects represent 50% of the segments (Figure 3a). Figure 3b,c show the age and gender distributions and the DBP and SBP distributions, respectively, of the selected dataset. There were 464 females and 667 males, while the mean and standard deviation of age, DBP, and SBP were 58.6 ± 14.1 years, 64.48 ± 9.51 mmHg, and 130.84 ± 20.27 mmHg, respectively. The raw PPG segment saved during processing was filtered before being used as input: a band-pass Butterworth filter with cutoff frequencies [0.5, 45] Hz was applied.
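The ABP-SQ averaging step (mean pulse with per-time-step outlier rejection at ±1.25σ) can be sketched as follows, assuming onset-aligned pulses truncated to a common length; the small tolerance for σ = 0 is our addition:

```python
import numpy as np

def average_pulse(pulses, k=1.25):
    """Average onset-aligned ABP pulses, keeping at each time-step i
    only the points within mu_i +/- k * sigma_i."""
    length = min(len(p) for p in pulses)
    stack = np.stack([np.asarray(p[:length], dtype=float) for p in pulses])
    mu = stack.mean(axis=0)
    sigma = stack.std(axis=0)
    keep = np.abs(stack - mu) <= k * sigma + 1e-12  # tolerance for sigma == 0
    masked = np.where(keep, stack, np.nan)
    return np.nanmean(masked, axis=0)               # per-time-step mean of kept points
```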
As mentioned before, PPG′ provides information that could improve the ABPM¯ estimation. PPG′ was computed using a Savitzky-Golay filter [35], with a window size of 7 and a polynomial degree of 3. In addition, one second was removed at the beginning and at the end of each segment to avoid artifacts caused by the two filters just mentioned. In summary, the dataset available for the deep learning stage consisted of 6478 segments of 13 s each, equivalent to 23.4 h.
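The Savitzky-Golay derivative can be sketched with a local polynomial fit, which is the principle behind the filter: fit a degree-3 polynomial in each 7-sample window and differentiate it at the window centre. This NumPy-only version is a didactic equivalent, not the implementation used in the paper:

```python
import numpy as np

def savgol_derivative(x, window=7, polyorder=3, delta=1.0):
    """First derivative via a Savitzky-Golay-style local polynomial fit.

    A degree-`polyorder` polynomial is fit over each `window`-sample
    neighbourhood and differentiated at the centre sample; `delta` is
    the sample spacing. Border samples are left as NaN."""
    x = np.asarray(x, dtype=float)
    half = window // 2
    t = np.arange(-half, half + 1) * delta
    out = np.full(len(x), np.nan)
    for i in range(half, len(x) - half):
        coeffs = np.polyfit(t, x[i - half:i + half + 1], polyorder)
        out[i] = np.polyval(np.polyder(coeffs), 0.0)
    return out
```

In practice the same result is obtained with `scipy.signal.savgol_filter(x, 7, 3, deriv=1, delta=1/fs)`.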

Deep Learning
The proposed deep learning architecture is inspired by seq2seq encoder-decoder models [36] with attention mechanism [37,38] from the natural language processing domain. Before its detailed description in Section 2.3.3, a few concepts related to this model are described in Section 2.3.1, and some considerations about the input data are presented in Section 2.3.2.

RNN Encoder-Decoder
The encoder reads each input from a variable-length source sequence and encodes it into a fixed-length vector representation, also called the hidden state. The decoder then initializes its own hidden state with the encoder's and generates an output at each time-step. Figure 4a shows an illustration of the encoder-decoder model using RNNs, where the type of RNN selected for this work is the "gated recurrent unit" (GRU), whose structure is shown in Figure 4b. The GRU structure was proposed by Cho et al. [36] to mitigate the vanishing/exploding gradient problem of RNNs. The input vectors of each GRU unit are the previous hidden state h_{t−1} and the current input x_t, while the current hidden state h_t corresponds to the output. In this sense, h_t is computed according to the relations given by Equation (3):

z_t = σ(W_z [h_{t−1}, x_t])
r_t = σ(W_r [h_{t−1}, x_t])
h̃_t = tanh(W_h [r_t ⊗ h_{t−1}, x_t])
h_t = (1 − z_t) ⊗ h_{t−1} + z_t ⊗ h̃_t        (3)

where r_t and z_t denote the reset gate and the update gate, W_z, W_r, and W_h are learnable weight matrices, and h̃_t is the candidate hidden state. σ(·) and tanh(·) correspond to the logistic sigmoid and hyperbolic tangent functions, respectively, and ⊗ is the symbol for element-wise multiplication.
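A single GRU step can be sketched directly from the gate relations above. Biases are omitted for brevity, and the concatenated-input formulation is an assumption about how the weight matrices are laid out:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_cell(x_t, h_prev, Wz, Wr, Wh):
    """One GRU step; each W* maps the concatenation [h_{t-1}, x_t]
    (or [r_t * h_{t-1}, x_t] for Wh) to the hidden size."""
    hx = np.concatenate([h_prev, x_t])
    z_t = sigmoid(Wz @ hx)                                       # update gate
    r_t = sigmoid(Wr @ hx)                                       # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([r_t * h_prev, x_t]))  # candidate state
    return (1.0 - z_t) * h_prev + z_t * h_tilde                  # new hidden state
```

With all-zero weights, z_t = 0.5 and h̃_t = 0, so the state simply decays toward zero, illustrating how the update gate interpolates between the old and candidate states.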
Both the encoder and the decoder are RNNs, and they are jointly trained to predict the next value of a target sequence given a source sequence. In particular, two loss functions were used to achieve a multitask objective. To accelerate the training, as the objective was to predict only one ABPM¯ pulse, a mask vector of ones and zeros was created. The ABPM¯_v error was masked with it to penalize only the nonrepeated ABPM¯_v, extended by 0.12 s (15 time-steps). This mask is considered in Section 2.3.4, and an example of its limit is shown in Figure 5.

Model Architecture
Figure 6 shows the model architecture. It is constituted by three main parts: the encoder, decoder, and attention modules. The encoder consists of three bidirectional GRU (Bi-GRU) layers, while the decoder consists of three GRU layers and two multiperceptron layers (MPL_v and MPL_c, respectively). Both encoder and decoder have dense connections [39] to improve the information flow between layers (blue arrows, Figure 6). The input X_l, with l ∈ [1, L], is the aforementioned 5-s PPG and PPG′ signal. All encoder outputs (h_s) go to the attention module. In addition, the last hidden state (h_s,L) from each encoder GRU layer is used to initialize the hidden state of the corresponding decoder GRU layer (red arrows, Figure 6). The output of the last decoder GRU layer (h_t,i) is sent to both the attention module and the MPL_c layer. The context vector (c_i) and h_t,i are concatenated and fed to MPL_v. Finally, the MPL_v and MPL_c outputs are concatenated (orange arrows, Figure 6) to produce a prediction time-step (y_i). The age and gender demographic information vector (X_DI) is concatenated at each time-step with y_i to form the decoder input (y_i&DI). In particular, y_0 is a vector full of ones used to indicate the start of a prediction. In detail, the attention mechanism used in this work is the Luong attention [38], where c_i is the weighted sum of the encoder outputs with an attention weight vector (a_i):

c_i = Σ_s a_i(s) h_s        (4)

where a_i is computed and normalized using the softmax function:

a_i(s) = softmax(score(h_t,i, h_s))        (5)

and score(h_t,i, h_s) is the general content-based score function:

score(h_t,i, h_s) = h_t,iᵀ W h_s        (6)

in which W is the weight matrix of an MPL.
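The general-score Luong attention for a single decoder step can be sketched as follows (names and the single-step framing are ours):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())  # shift for numerical stability
    return e / e.sum()

def luong_attention(h_t, h_s, W):
    """Luong 'general' attention for one decoder step.

    h_t: decoder state, shape (d,)
    h_s: encoder outputs, shape (L, d)
    W:   score weight matrix, shape (d, d)"""
    scores = h_s @ (W @ h_t)   # one general score per source time-step
    a = softmax(scores)        # attention weights over the source sequence
    c = a @ h_s                # context vector: weighted sum of encoder outputs
    return c, a
```

The weights sum to one, and the context vector emphasizes the encoder time-steps whose states best match the current decoder state.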

Loss Functions
As mentioned before, the model was trained to produce the ABPM¯ values (ABPM¯_v) while the difference between ABPM¯_c and ABPM̂_c was penalized with the categorical cross-entropy [40] function (CE):

L_v = (1/N) Σ_{n=1}^{N} (1/M) Σ_{i=1}^{M} (ABPM¯_v,n,i − ABPM̂_v,n,i)²        (7)

CE = −(1/N) Σ_{n=1}^{N} (1/T) Σ_{i=1}^{T} Σ_c ABPM¯_c,n,i,c log ABPM̂_c,n,i,c        (8)

where, for Equations (7) and (8), N, M, and T correspond, respectively, to the number of samples, the previously mentioned mask length, and the fixed input length. Finally, the training loss function was defined as:

Loss = L_v + λ CE        (9)

in which λ was a constant empirically set to 0.01.
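A minimal sketch of the masked multitask loss follows. The squared-error form of the value term is an assumption (the exact value-loss function is not specified here), and the NumPy framing is didactic rather than a training implementation:

```python
import numpy as np

def multitask_loss(v_true, v_pred, c_true, c_pred, mask, lam=0.01):
    """Masked value loss plus categorical cross-entropy on the classes.

    v_*:  blood pressure values per time-step
    c_*:  one-hot (true) and softmax (predicted) class distributions
    mask: 1 where the value error is penalized, 0 elsewhere"""
    m = np.asarray(mask, dtype=bool)
    # value branch: penalized only inside the mask (assumed squared error)
    value_loss = np.mean((np.asarray(v_true)[m] - np.asarray(v_pred)[m]) ** 2)
    # class branch: cross-entropy over the full sequence
    eps = 1e-12
    ce = -np.mean(np.sum(np.asarray(c_true) * np.log(np.asarray(c_pred) + eps),
                         axis=-1))
    return value_loss + lam * ce
```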

Hyperparameters and Experimental Settings
In particular, the MPL_v output was previously normalized with a softmax function. The Adam optimizer [42] was chosen to update the model parameters, and the learning rate (LR) was initially set to 10⁻³. The LR was scaled by 50% after a patience of 25 epochs without improvement in the loss, and training was stopped when the patience reached 50 epochs. The batch size was set to 48.
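The LR schedule and early stopping just described can be replayed in plain Python; the function below is an illustration of the policy, not the training loop actually used:

```python
def replay_schedule(epoch_losses, lr0=1e-3, factor=0.5,
                    lr_patience=25, stop_patience=50):
    """Replay the schedule described above: halve the LR after 25 epochs
    without loss improvement, stop after 50 such epochs.
    Returns the final LR and the number of epochs actually run."""
    lr, best, wait, epochs = lr0, float("inf"), 0, 0
    for loss in epoch_losses:
        epochs += 1
        if loss < best:          # improvement: reset the patience counter
            best, wait = loss, 0
            continue
        wait += 1
        if wait >= stop_patience:  # early stopping
            break
        if wait % lr_patience == 0:  # scale the LR by `factor`
            lr *= factor
    return lr, epochs
```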
The weights of the MPL_v, MPL_c, and attention layers are initialized from U(−√w, √w), and the weights of the GRU layers are initialized from a similar uniform distribution. In particular, the weights corresponding to the transition matrices of the GRU layers (W_hz, W_hr, W_hh, from Equation (3)) were initialized with a random orthogonal scheme [43].
Three scenarios were proposed to evaluate the impact of X_DI and of the split of segments by subject. For the first and second scenarios, mixing segments from the same subject between the train and test sets was not allowed (Mix_no). In the first scenario, X_DI was not provided; in the second, X_DI was added. Finally, in the third scenario, the train and test sets were also allowed to contain segments from the same subjects (Mix_yes). The scenarios were named Mix_no, Mix_no + DI, and Mix_yes + DI, respectively. For the Mix_no and Mix_no + DI scenarios, the test set was formed by the segments corresponding to 20% of the subjects. For the Mix_yes + DI scenario, the test set consisted of 20% of the segments, independently of the subjects. Each scenario was cross-validated 5 times.
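The subject-wise split used by the Mix_no scenarios can be sketched as follows; the function name and the (subject_id, segment) pairing are our assumptions about how the data are indexed:

```python
import random

def subject_wise_split(segments, test_frac=0.2, seed=0):
    """Split (subject_id, segment) pairs so that no subject appears
    in both sets, as in the Mix_no scenarios."""
    subjects = sorted({sid for sid, _ in segments})
    rng = random.Random(seed)
    rng.shuffle(subjects)
    n_test = max(1, int(round(test_frac * len(subjects))))
    test_ids = set(subjects[:n_test])
    train = [(sid, seg) for sid, seg in segments if sid not in test_ids]
    test = [(sid, seg) for sid, seg in segments if sid in test_ids]
    return train, test
```

The Mix_yes scenario would instead shuffle and split the segment list directly, so the same subject may appear on both sides.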

Evaluation
Firstly, the DBP and SBP estimations were evaluated with the root mean square error (RMSE), the mean absolute error (MAE), and the coefficient of determination (R²):

RMSE = √((1/N) Σ_i (z_i − ẑ_i)²)
MAE = (1/N) Σ_i |z_i − ẑ_i|
R² = 1 − Σ_i (z_i − ẑ_i)² / Σ_i (z_i − z̄)²

where z̄ is the mean of z_i and ẑ_i is the estimated value. Secondly, the DN time occurrence (DN_TO) and the pulse duration were also evaluated with the RMSE, MAE, and R² metrics. DN_TO and the pulse duration were computed as the last and first occurrences of classes C_[DN,E] and C_[ED], respectively, in ABPM̂_c. Finally, the ABPM̂ pulse values were evaluated with RMSE and MAE, while the ABPM̂ pulse waveforms were evaluated with Pearson's correlation coefficient (R). When ABPM¯ and ABPM̂ had different durations, the shorter one was considered for the evaluation. R is defined as:

R = Σ_t (x_t − x̄)(y_t − ȳ) / √(Σ_t (x_t − x̄)² Σ_t (y_t − ȳ)²)

where x and y correspond to ABPM¯ and ABPM̂, x̄ and ȳ are their means, and T is the considered duration. In particular, the ABPM̂ DBP and SBP assessment refers to a cuff-less calibration task. In ascending order of performance, the scenarios were Mix_no, Mix_no + DI, and Mix_yes + DI. Regarding the time occurrence assessment in Table 2, there was no clear difference between the Mix_no and Mix_no + DI scenarios: although the means of the metrics are slightly better for the Mix_no scenario, they also show a larger standard deviation. In contrast, the Mix_yes + DI scenario shows better results. Regarding the evaluation of the ABPM̂ waveforms and values presented in Table 3, there was again a clear improvement in performance for the Mix_yes + DI scenario, followed by the Mix_no + DI and Mix_no scenarios. Table 4 shows the results with respect to the British Hypertension Society (BHS) standards [44], using the predictions of each fold per scenario. The BHS defines thresholds (i.e., 5, 10, and 15 mmHg) to report the cumulative error percentage and determine the grade of a device when BP is measured. The DBP estimation in the Mix_yes + DI scenario achieves grade B, requiring 3.4% more in the <5 mmHg range to achieve grade A. The Mix_no and Mix_no + DI scenarios achieve grade C, lacking 8.2% and 4.5%, respectively, in the <5 mmHg range to achieve grade B.
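The waveform correlation just defined, including the rule of evaluating over the shorter of the two pulse durations, can be sketched as:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation over the shorter of the two pulse durations."""
    T = min(len(x), len(y))
    x = np.array(x[:T], dtype=float)   # copy, so callers' arrays are untouched
    y = np.array(y[:T], dtype=float)
    x -= x.mean()
    y -= y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))
```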

Results
Bland-Altman plots were computed using the SBP and DBP predictions of each of the 5 folds per scenario. The Bland-Altman results are shown in Table 5 in terms of mean (µ) and limits of agreement (µ ± 1.96σ). In particular, Figure 9 shows the regression plots, Bland-Altman plots, and error histograms corresponding to the Mix_no + DI scenario.

Discussion
In the present work, the ABPM¯ was estimated by combining the PPG time series and the DI of each subject. Firstly, the ABPM¯ signal was computed and paired with its corresponding PPG signal and DI. Secondly, a model with a sequence-to-sequence architecture and attention mechanism was proposed to transfer the information from the optical domain to the pressure domain. The results show the capacity of the proposed method to simultaneously estimate both the morphology and the calibration values of the ABP signal.
To the best of our knowledge, a distinction is made in the literature between calibration methods: depending on whether or not data from the same subject are used for both training and testing, they are called calibration-based (cal-based) or calibration-free (cal-free), respectively. In this sense, hereafter the Mix_yes + DI and Mix_no + DI scenarios refer to the cal-based and cal-free cases, respectively. Table 6 presents a comparison, in calibration terms, with other studies. Nevertheless, because of the different evaluation metrics, dataset sizes, and signal sources, the comparison is neither easy nor direct. The studies that reported the lowest errors are those with the fewest subjects and in which the restriction against using data from the same subject for both training and testing was not explicit or was not applied. In particular, in Chan et al. [18], the mean error (ME) was used as a metric and the dataset was unspecified. In Kurylyak et al. [20], although only the PPG signal was used, the dataset consisted of only 15,000 beats and no information about the number of subjects was given. In Chowdhury et al. [21], the dataset consists of 226 records of 2.1 s each, corresponding to 126 subjects. The methods in Chan et al. [18], Kurylyak et al. [20], and Chowdhury et al. [21] use the feature extraction approach. In contrast, in Eom et al. [24], a deep learning model with the capacity to take raw multi-signal inputs was proposed; however, the dataset was composed of only 15 subjects, without restricting the use of data from the same subjects for training and testing.
The works that used the largest numbers of subjects were [19,22,23,26] (410, 572, 1000, and 510 subjects, respectively). In Monte-Moreno [22], estimations of SBP and DBP were obtained using only features extracted from the PPG, combined with the age, weight, and body mass index of the subjects. The author did not make any subject-data restriction explicit (cal-based scenario). Assessments were reported in terms of the R² metric, and the results reached grade B under the BHS standards. In contrast, [19,23,26] reported results much more similar to those obtained in the present work and also made the subject-data restriction between the train and test sets explicit. In Ruiz-Rodríguez et al. [23], only a cal-free scenario was reported and errors were given in terms of a Bland-Altman test: the limits of agreement for SBP and DBP were [−40.91, 34.94] and [−20.68, 13.38] mmHg, respectively, with mean values of −2.98 and −3.65 mmHg. Although a lot of clinical information was available in their database, it was not included in their model when estimating BP values. In particular, Kachuee et al. [19] reported the lowest errors in both the cal-free and cal-based scenarios; nevertheless, ECG information was required jointly with the PPG time series, followed by a feature extraction step to obtain the estimations. In contrast, in Slapničar et al. [26], where leave-one-subject-out experiments were performed, only the raw PPG signal was necessary. The dataset used in Slapničar et al. [26] was nearly half of that used in this work and, except for the SBP MAE evaluation in the cal-based scenario, the results presented here are better. Additionally, in no case did the authors of [19,22,23,26] report a limitation on the number of records per subject or on the total duration per record; in these terms, we suggest our work is less biased.
Table 7 shows a comparison between our results from Table 3 and different methods focused on the continuous ABP. In Sideris et al. [27], 42 subjects and records from the MIMIC database were analyzed, with each record composed of two segments. Furthermore, a completely personalized approach was proposed, in which as many models as subjects were created; in contrast, in our approach only a single model needs to be trained. In Sadrawi et al. [28], the proposed DCAE model was trained with 18 subjects from a private dataset. Additionally, while the DCAE model only accepts a fixed input length, the methodology proposed in the present work does not have that limitation. Although the MAE and RMSE values in Sideris et al. [27] and Sadrawi et al. [28] were lower than those in the present work, the number of subjects evaluated was smaller and there was no subject-data restriction between the train and test sets. It is important to mention that while all PPG signals came from the finger, it is unknown from which specific site of the arterial tree the BP signals were recorded. Nevertheless, future studies could improve the results by specifying the sites of the source and target signals. Furthermore, the information about the devices and filters used during data collection is also unknown for both the PPG and ABP signals. Therefore, given that the type of drug administered and the existence of previous pathologies are also unknown, the scenario does not meet the standards of a rigorous medical protocol.
Finally, compared to previous studies in the literature, the presented architecture allows the use of both raw signals and DI (age and gender) as inputs. An improvement in the results can be observed when DI is considered (Tables 1 and 3). Furthermore, without any modification of the architecture, other characteristics of the subject could be incorporated, such as ethnicity, weight, or height. Although many of them are present in the MIMIC-III CDB, the final number of subjects with both this extra information and good quality records was less than 30% of the total used. Pre-existing conditions such as diabetes, chronic kidney disease, smoking, and dyslipidemia could also be incorporated.

Conclusions
In this paper, a new deep learning architecture to estimate the average arterial blood pressure morphology (ABPM¯) is proposed. For each point that composes the ABPM¯, the proposed methodology estimates the blood pressure value and classifies it according to the stage of the cardiac cycle to which it belongs. To the best of our knowledge, this is a contribution to the literature because most existing approaches only estimate the diastolic and systolic values. The methodology presented here also allows the simultaneous use of subject demographic information and the raw photoplethysmogram signal from the finger as model inputs. Further studies with more specific databases are needed in order to extend the presented results. In addition, the source code is shared for the reproducibility of the results. Finally, as a potential research direction, this methodology could be adapted to mobile devices, where only one source signal is required.

Funding: This work was partially funded by Universidad Tecnológica Nacional (Grant: ICUTIBA7647 R&D projects) and by the ML-Cardyn project, cofunded by the European Union. The authors would like to thank Europe for its commitment in Champagne-Ardenne with the European Regional Development Fund (FEDER).

Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations
The following abbreviations are used in this manuscript:
ABP: Arterial blood pressure
ABPM: Arterial blood pressure morphology
ABPM¯: Average arterial blood pressure pulse morphology
ABPM̂: Estimated arterial blood pressure pulse morphology