1. Introduction
The selection of the assessment procedure is always of great importance, especially in complex engineering problems. First, the exact goal of the test has to be determined by the expert. In most cases, objective measurements are preferred, as they carry less uncertainty and are less prone to noisy or insufficient results that can mislead the assessment and lead to false evaluation values. The problem analyzed here in a mathematical context is a civil engineering (CE) assessment procedure, specifically the estimation of the strength of a masonry structure.
In CE practice, various destructive and non-destructive methods are used to test masonry. These cover the whole range of tests, from non-contact techniques through procedures leaving only small traces to those causing major damage or total destruction. For existing buildings that are to be retained and renovated, methods have to be used that provide sufficiently accurate information for the design without damaging the load-bearing capacity of the building, i.e., those which leave only a small trace. Examples of the masonry varieties of a specific existing building are shown in Figure 1, where Figure 1c (the ground floor masonry) presents a wall segment after taking core samples for subsequent laboratory analysis. A correct study always involves the simultaneous application of several methods, including visual inspection and one or more types of instrumental tests, and the final outcome is obtained by evaluating them together [1,2,3,4].
These tests may be partly subjective or may contain noise due to the environment and the material properties of the tested object. It has been shown in previous works [5,6] that the uncertainties and missing values can be handled with a fuzzy signature (FSig). The root represents the "state of the masonry", and the leaves of the signature are the input values of observations and measurements. The FSig is not able to model the mutual influence of the individual input values. The leaves represent the influencing factors of the masonry condition, while the internal nodes are assigned fuzzy aggregations combining the fuzzy values on the leaves into descriptor values of more general features [7,8,9,10,11]. The chosen aggregations are weighted geometric means (WGMs), as they fit the logic of the standard formula combining brick and mortar strength [12]. In the individual nodes, the weights are assigned according to the importance of each input feature.
The point of the investigation was to generalize the model using aggregations of a similar type. The basis for determining the weights was CE domain knowledge. The FSig was built up based on a realistic model. Taking into account the linguistic labels used by the experts, each label was assigned a symmetric fuzzy membership function. The root (the Masonry Strength) was assigned only three linguistic values {loadable, repairable, to demolish}, whose labels are calculated from the lookup table defining the linguistic root aggregation.
For each masonry component, a separate signature was constructed, as proposed in our previous work [13]. Each signature consists of two subtrees: the Visual Inspection and the Instrumental Test subsignatures. The latter consists of several sub-subsignatures, namely the Brick, Mortar and Masonry trees. The leaves are the objective measured parameters, while the visual inspection leaves are obtained by subjective evaluation.
When assessing a complete building consisting of several masonry walls, each of them can be modeled with a separate signature. The CE expert determines which subset of these walls should be selected for assessing the strength of the whole building. After selecting these, a set of FSigs is available. In order to evaluate the resulting strength, these walls are combined into a single signature in which each wall FSig constitutes a subsignature. The aggregation at the root of the merged overall signature is critical for determining the overall masonry strength of the building.
The possible outcomes of combined signature systems are defined by five linguistic descriptions: {loadable, usable, partially repairable, repairable, to demolish}. Depending on the state of the separate walls, the collective results are described by a linguistic inference system [14,15,16,17]. An example is shown in Table 1 for five connected walls. Each row of the table corresponds to a linguistic IF-THEN rule, thus defining a purely linguistic aggregation in a manner similar to that defined in [18].
In this paper, a procedure is proposed using FSigs to assess the state of a masonry structure, combining semi-destructive inspection methods with visual inspection. Secondly, this paper proposes a generalized approach for the FSig model applicable to arbitrary assessments. Lastly, a validation of the model is presented with existing measured values of masonry [19].
2. Materials and Methods
2.1. Fuzzy Signatures
Many applications nowadays include rule-based fuzzy decision-making and control, enabling systems to deliver sufficiently accurate responses. Unlike conventional control approaches that rely on strictly defined mathematical models, fuzzy controllers operate with linguistic IF–THEN rules, which makes them both flexible and easy to implement.
The central feature of fuzzy logic is its ability to handle uncertainty by allowing truth values to vary continuously between 0 and 1, rather than being restricted to binary extremes. In practical applications, linguistic variables such as very slow, moderately slow, slow, moderately fast, and very fast are commonly employed. These rule-based systems can be implemented in different computational environments such as MATLAB R2023b/Simulink using the Fuzzy Logic Toolbox, which provides tools for defining membership functions and for obtaining conclusions.
Designing a fuzzy inference system requires careful specification of the membership functions, including their type, the number of inputs and outputs, and the scaling parameters. In this context, trapezoidal membership functions are frequently selected, as they are well suited to fuzzifying linguistic categories while allowing the overlap of the categories to be controlled. The definition of membership functions is generally left to the discretion of the designer or researcher, ensuring that they match the characteristics of the system under study. The rule base captures the input–output relationships and constitutes the fuzzy control strategy.
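As an illustration, the following minimal MATLAB sketch builds three overlapping trapezoidal membership functions on a normalized domain with the Fuzzy Logic Toolbox function trapmf; the breakpoints are illustrative choices, not the calibrated values used later in the paper. With these breakpoints, adjacent functions sum to 1 in the overlap regions, i.e., they form a Ruspini partition.

```matlab
% Three overlapping trapezoidal MFs on a normalized [0, 1] domain (sketch).
% The [a b c d] breakpoints are illustrative, not the paper's values.
x = linspace(0, 1, 501);
low    = trapmf(x, [0.00 0.00 0.25 0.40]);
medium = trapmf(x, [0.25 0.40 0.60 0.75]);
high   = trapmf(x, [0.60 0.75 1.00 1.00]);
plot(x, low, x, medium, x, high);
legend('low', 'medium', 'high');
xlabel('normalized input'); ylabel('\mu');
```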
FSigs represent a distinctive form of multidimensional data structure with uncertainty, where specific closely connected dimensions are grouped into subsets of variables, forming subsignatures. These subsignatures collectively contribute to the characterization of higher-level features [20].
Fuzzy logic and fuzzy sets, originally proposed by Lotfi A. Zadeh in 1965 [21], have since become an essential component of modern control systems and industrial applications. A fuzzy set is defined as
$$A = \{(x, \mu_A(x)) \mid x \in X\}, \quad (1)$$
where $X$ is the universe of discourse, $x \in X$, and $\mu_A : X \to [0,1]$ is the membership function defining fuzzy set $A$.
An essential extension of the fuzzy set, the L-fuzzy set, was given by Goguen [22]:
$$A = \{(x, \mu_A(x)) \mid x \in X\}, \quad \mu_A : X \to L, \quad (2)$$
where the membership function assigns an element of the lattice $L$ to each element of the universe $X$.
The $n$-dimensional vector valued fuzzy sets were defined in [23].
FSigs are further extensions of vector valued fuzzy sets towards hierarchically structured multi-component membership degrees, as follows [24].

Definition 1. Let $\mathcal{A}$ be a set of families of aggregation operators. Given a set $M$ of fuzzy membership degrees assigned to every leaf in $L$, with $\mu_l \in [0,1]$ for each leaf $l \in L$, and the set $A$ of aggregation operators assigned to every internal vertex in $N$, with $a_v \in \mathcal{A}$ for each $v \in N$, the tuple $S = (G, M, A)$ is called the fuzzy signature associated with $G$, the tuple $(G, A)$ is called the structure of $S$, and the set $M$ is the membership degree set of $S$ ([24], Definition 11). In [24], it was proven that FSigs form L-fuzzy structures under reasonable restrictions, which are in accordance with real-life applications.
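To make the recursive structure concrete, the following MATLAB sketch models an FSig vertex as a nested struct, with a leaf storing a membership degree and an internal vertex storing its children together with the exponents of its aggregation; the field and function names are our own illustrative choices, not notation from [24].

```matlab
% demo_fsig.m - minimal recursive fuzzy-signature evaluation (sketch).
function demo_fsig
    % Example: a two-leaf "Brick" subtree aggregated by a weighted
    % geometric mean, with illustrative degrees and weights.
    brick = makeNode({makeLeaf(0.8), makeLeaf(0.6)}, [0.7 0.3]);
    fprintf('Brick descriptor value: %.4f\n', evalNode(brick));
end

function s = makeLeaf(mu)
    s = struct('mu', mu, 'children', {{}}, 'w', []);
end

function s = makeNode(children, w)
    s = struct('mu', [], 'children', {children}, 'w', w);
end

function mu = evalNode(s)
    if isempty(s.children)      % leaf: return the stored membership degree
        mu = s.mu;
        return
    end
    mu = 1;                     % internal vertex: WGM of the children
    for k = 1:numel(s.children)
        mu = mu * evalNode(s.children{k})^s.w(k);
    end
end
```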
2.2. The Assessment Method
To determine the masonry strength, the following formula is recommended by EuroCode 6 [12]:
$$f_k = K \cdot f_b^{0.7} \cdot f_m^{0.3}, \quad (3)$$
where $f_b$ and $f_m$ are the strengths of the masonry unit (brick) and mortar, respectively, and factor $K$ represents a constant value for a particular masonry structure. This formula was the motivation to use the weighted geometric mean (WGM) as an aggregation operator in the evaluation of the FSig. Using the properties of this operation, the membership function values were set to 1 if the input values were missing or ambiguous from the measurement. The mathematical formula of the WGM aggregation at the $i$-th internal vertex $v_i$ is
$$a_i = \prod_{j=1}^{n_i} x_{ij}^{w_{ij}}, \quad (4)$$
where $x_{ij}$ is the $j$-th leaf of the $i$-th vertex and $w_{ij}$ is the weight of the argument at leaf $x_{ij}$, where $\sum_{j=1}^{n_i} w_{ij} = 1$.
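A minimal MATLAB sketch of this aggregation, including the missing-value convention described below in Eq. (7), could look as follows; the renormalization of the remaining weights is our reading of the requirement that the weights of the measured leaves still sum to 1.

```matlab
% wgm.m - weighted geometric mean at one internal vertex (sketch).
% x: leaf values, w: exponents, measured: logical mask of available leaves.
function a = wgm(x, w, measured)
    x(~measured) = 1;       % 1 is neutral in the product (cf. Eq. (7))
    w(~measured) = 0;       % a missing leaf gets zero weight
    w = w / sum(w);         % remaining weights again sum to 1 (assumption)
    a = prod(x .^ w);
end
```

For example, wgm([0.8 0.6 0.5], [0.5 0.3 0.2], logical([1 1 0])) aggregates only the first two leaves, with their weights rescaled to 0.625 and 0.375.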
The measured values were originally real numbers, which had to be fuzzified. At the leaves, the membership functions (MFs) were defined with cores and an extent of overlap based on the experts' advice. It was proposed to use a Ruspini partition, where the sum of the membership degrees belonging to the individual membership functions over every $x \in X$ is 1. Logarithmic normalization (5) was chosen because it fits most appropriately to the WGM aggregations in the signature. With these preparations, it became possible to convert crisp (real) values into fuzzy values:
$$\hat{c} = \frac{\log c - \log c_{\min}}{\log c_{\max} - \log c_{\min}}, \quad (5)$$
where $c$ is the value to normalize in the interval $[c_{\min}, c_{\max}]$. These intervals were determined by experts, or it is possible to find them in professional CE manuals [25,26]. An extract of these minimum and maximum values is shown in Table 2. In this table, the linguistic data types belong to the visual inspection part of the assessment; thus, these are not measurable but depend on the expert's opinion. The table also shows how many MFs were assigned to the individual input types. These MFs divided the domain evenly, and the ratio of the core and support intervals was defined as 2:1.
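As an illustration of Eq. (5), the normalization can be written in MATLAB as below; the interval bounds in the example are invented placeholders, not the values of Table 2.

```matlab
% Logarithmic normalization of Eq. (5) (sketch, natural logarithm assumed).
lognorm = @(c, cmin, cmax) (log(c) - log(cmin)) ./ (log(cmax) - log(cmin));

% Example: a value of 12 in an assumed interval [2, 30] maps to ~0.66.
chat = lognorm(12, 2, 30);
```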
Depending on the crisp value, it could fire a unique membership degree if the value was contained in the core of an individual MF; otherwise, the fuzzy value was considered as the sum of the two degrees of belonging to the two adjacent MFs (6) [27] (Figure 2).
In the signature, when one of the values was not measured or seemed to be unreliable, its membership values were set to 1 (7) and its weight was set to zero. This way, the restrictions of the WGM remained fulfilled, meaning that the sum of the weights corresponding to the leaves stayed at 1 while the missing leaf's value did not influence the weights.
This way, all measured values were fuzzified. The internal vertices, as the roots of subtrees, are responsible for expressing the influencing factors of the root, and there are no predefined values for them in the literature. Thus, their membership function sets were defined symmetrically with three or five trapezoidal membership functions based on the experts' advice. There is no argument for applying either symmetrical or asymmetrical MFs, and thus, simple symmetrical shapes were selected. After executing the aggregations, the resulting MFs lost their piecewise linear characteristics. These new functions determined the respective reference values by the mean of maxima (MOM) (8) [28,29]. To keep the calculations simple, the resulting functions were re-linearized based on these crisp values using the predefined MFs. The re-linearization was carried out by MOM, meaning that the defuzzified function's crisp value was assigned to one of the predefined MFs in the root where the aggregation was made. The MOM of the MF in the root was considered as the reference point from which the absolute difference was measured to the crisp value of the aggregation.
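On a sampled membership function, MOM defuzzification reduces to averaging the domain points where the function attains its maximum, as in the following sketch.

```matlab
% mom.m - mean-of-maxima defuzzification on a sampled MF (sketch).
% x: sample points of the domain, mu: membership values at those points.
function m = mom(x, mu)
    tol = 1e-9;                          % tolerance for "at the maximum"
    m = mean(x(mu >= max(mu) - tol));
end
```

For a trapezoidal MF, this returns the midpoint of its core interval.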
This procedure was carried out until the internal vertices led directly to the root of the FSig. At the root, the last linearization gives the accuracy of the assessment compared with the destructive test results, which were also fuzzified similarly to the input fuzzification. This way, the reference point in the difference was not the exact value of the measured output but its defuzzified value. Obviously, this may have introduced some inaccuracy, but the main point of the assessment was to decide the state of the masonry, and in a mathematical sense, this approach did not change the assessment's conclusion. In Figure 3, two examples are shown, where the green function's defuzzified value belongs to the first and second of three categories with different membership degrees (~0.6200 to the first and ~0.3800 to the second), and the red function belongs only to the second MF with a membership degree of 1.
2.3. Optimization of the Weights in the WGM
The main goal of the investigation was to determine the optimal weights of all aggregations as the exponents of the leaves of the subtrees. The objective function was defined in every case as the difference between the crisp value of the WGM aggregation and the defuzzified value of the corresponding predefined MF, as shown in Figure 4. If the crisp value was an element of the core interval of an MF, the difference was obvious, whereas if it was an element of an overlap interval, it was assigned to the MF that had the higher membership value at this point. In the case when the crisp value was equal to the middle point of the overlap interval, where each membership value is 0.5, the distance was measured from the MOM of the lower MF (the worse category, to the left of the intersection). However, because of the equal division of the domain, at the middle point it does not matter which adjacent MF provides the reference point. During the optimization, the dataset provided by Dormány [19] was used, with 114 combinations of parameters of masonries in different states.
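The per-vertex objective term can be sketched in MATLAB as follows; mfs and moms are hypothetical containers for the predefined MFs (ordered from the worst to the best category) and their precomputed MOM reference points.

```matlab
% vertexDifference.m - objective term at one vertex (sketch).
% c:    crisp value of the WGM aggregation
% mfs:  cell array of predefined MFs as function handles, worst to best
% moms: MOM reference point of each predefined MF
function d = vertexDifference(c, mfs, moms)
    deg = cellfun(@(f) f(c), mfs);   % membership of c in each category
    [~, idx] = max(deg);             % max returns the FIRST maximum, so a
                                     % 0.5/0.5 tie resolves to the lower MF
    d = abs(c - moms(idx));
end
```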
The mutual influence of the parameters was modeled by the aggregations in the signature. In this model, the chosen aggregation assumed independent parameters, but later it would be possible to test the case of dependent parameters. The internal vertices Brick, Mortar, Masonry and Visual Inspection were described by five MFs, and the root of an individual signature (Masonry Strength) by three. This way, the experimental assessment and the theoretical investigation may be accurate enough to obtain objective and reliable assessment results.
The flowchart of the optimization is shown in Figure 5. After constructing the signature based on the experts' recommendations, the necessary parameters were measured. If some of the parameters were impossible to work with, they were ignored by assigning them neutral membership values and zero exponents, which did not change the results of the actually measured parameters. To carry out the optimization, hyperparameters such as the number of iterations were defined. The fuzzification, the execution of the aggregations and the evaluation of the results in the root were explained in the Assessment Method section. The optimization was considered complete when no more iterations were left and the best result, with the optimal aggregation exponents and the lowest difference at the root, had been found.
During the optimization, four kinds of FSig structures were examined, depending on the number of masonries. The FSigs assigned to the individual masonries were identical, as shown in Figure 6a, from which the assembled signature was created (e.g., three joint masonries in Figure 6b). In the first case, only a single masonry structure was tested with separated subtrees. This means that every subtree was considered an individual element, and the input values were generated independently for the subtrees. Each aggregation result and the difference of the corresponding crisp values were determined separately. These differences served as references for the further calculations, where the FSig's vertices (leaves and internal vertices) were analyzed dependently. In the latter cases, the previously described assessment method was used not only for a single masonry but for multiple assembled masonries.
In the multiple assembled masonry cases, the root (Assembled Masonries) was defined by five MFs for a more nuanced approach, and its subtrees were the individual masonry FSigs. In practice, assembled masonries have not been tested yet, so this remains a purely theoretical test.
Three different techniques were applied to optimize the weights of the WGM aggregations. The first optimization was made by the Monte Carlo method (MCM) using 20,000 iterations to find the best exponents of the WGMs at the lowest differences of the evaluated values. This number of iterations was determined after testing a wide spectrum from 1000 to 100,000 iterations in steps of one thousand. Beyond 20,000 iterations, the model did not become more accurate, so this number seemed to be enough for the further tests.
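One plausible way to draw a random exponent vector for an n-leaf vertex so that every weight stays at or above the 0.2 lower bound (stated below for the GA) while the weights sum to 1 is sketched here; the paper does not spell out the MCM sampling scheme, so this construction is an assumption.

```matlab
% One Monte Carlo draw of WGM exponents for an n-leaf vertex (sketch).
% Guarantees all(w >= 0.2) and sum(w) == 1 (up to rounding).
n = 3;
u = rand(1, n);                           % raw positive draws
w = 0.2 + (u / sum(u)) * (1 - 0.2 * n);   % distribute the remaining mass
```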
The second optimization method utilized the genetic algorithm (GA) of the MATLAB software, applying the built-in "ga" function with a population size of 30, spanning 10 generations, with a prescribed function tolerance. These hyperparameters were tuned to the optimum. Above 10 generations, the optimization stopped every time because the change in the difference was lower than the given tolerance. In addition, in most cases, this number of generations seemed more than necessary, but providing enough iterations was more important than decreasing the time demand. Changing the population size showed the same phenomenon: in most cases, a lower population size would have been enough, while a higher size only increased the time demand and did not produce more accurate or different results. The restrictions of the algorithm were the same for every subtree: the sum of the weights of the WGM aggregations had to be equal to 1, while the exponent values had to be greater than or equal to 0.2. The minimum of the exponents was declared to exclude the abnormality of neglecting some leaves of the signature, which would contradict the CE experiments. To achieve the best results, every optimization with the GA was run 50 times to find out whether multiple minimum points exist, and the combination resulting in the lowest difference was considered the best.
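A minimal sketch of this setup with MATLAB's ga is given below for a single three-leaf vertex; fsigObjective is a hypothetical name standing in for the objective of Algorithm 1, and the tolerance option is omitted because its value is not reproduced here.

```matlab
% GA setup for the exponents of one three-leaf vertex (sketch).
nvars = 3;
Aeq = ones(1, nvars);  beq = 1;          % weights must sum to 1
lb  = 0.2 * ones(1, nvars);              % lower bound on every exponent
ub  = ones(1, nvars);
opts = optimoptions('ga', 'PopulationSize', 30, 'MaxGenerations', 10);
[wBest, dBest] = ga(@fsigObjective, nvars, [], [], Aeq, beq, lb, ub, [], opts);
```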
Particle swarm optimization (PSO) was selected as the third optimization method, using the "particleswarm" function of the MATLAB software. After tuning over a wide spectrum of hyperparameters, the necessary population size (swarm size) was shown to be 30, which was run through 200 iterations with a prescribed function tolerance. This combination of hyperparameters provided high reliability in finding the minimum points of the objective function, while the optimization required less time than with a greater swarm size and more iterations. To find multiple possible minimum points, PSO was run 50 times for every signature concept.
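Since particleswarm supports only bound constraints, the sum-to-one restriction has to be handled inside the objective; the projection used in the sketch below is one possible construction and an assumption on our part, as the paper does not state how the constraint was imposed for PSO.

```matlab
% PSO setup for the exponents of one three-leaf vertex (sketch).
nvars = 3;
lb = 0.2 * ones(1, nvars);  ub = ones(1, nvars);
opts = optimoptions('particleswarm', 'SwarmSize', 30, 'MaxIterations', 200);

% Project onto the set {w : all(w >= 0.2), sum(w) == 1} before evaluating.
% (Degenerate only if every coordinate sits exactly on the lower bound.)
project = @(w) 0.2 + (w - 0.2) / sum(w - 0.2) * (1 - 0.2 * numel(w));
wrapped = @(w) fsigObjective(project(w));
[wBest, dBest] = particleswarm(wrapped, nvars, lb, ub, opts);
```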
2.4. Pseudo Code
The optimization algorithms used the same objective function, which was defined by 17 variables. The objective function described the mutual influence of the input parameters by using a proper aggregation operator at the internal vertices. Its inputs were the crisp values of the measured parameters, which could be measured between the minimum and maximum values shown in Table 2. The output was defined as the set of exponents used in the aggregations and the difference for every root of the subtrees and for the root of the complete signature. The pseudocode for the objective function can be found in Algorithm 1.
| Algorithm 1 Pseudocode of the algorithm used |
| Objective function: Calculates the absolute difference between reference points and defuzzified membership function values in the vertices of the modelled fuzzy signature. |
| INPUT: crisp values of the measured parameters, exponents of the aggregations |
| OUTPUT: absolute differences at the roots of the subtrees and at the root of the signature |
| Normalization and fuzzification |
| FOR each leaf of the signature |
|  IF the value is missing or unreliable: set its membership degrees to 1 and its weight to 0 (7) |
|  ELSE normalize the crisp value by logarithmic normalization (5) |
|   fuzzify the normalized value with the Ruspini partition of the leaf (6) |
| END FOR |
| Aggregate by levels of the signature |
| FOR each level of the signature from the leaves towards the root |
|  FOR k elements of connected vertices from previous level |
|   aggregate the k membership functions by the WGM (4) with the given exponents |
|  END FOR |
| END FOR |
| Defuzzification and comparison |
| FOR Subtrees in the Signature |
|  defuzzify the aggregated function by the mean of maxima (MOM) (8) |
|  assign the crisp value to the predefined MF with the highest membership degree |
|  FOR FS elements of FuzzySet |
|   compute the absolute difference between the MOM reference point of the fired FS and the crisp value |
|  END FOR |
| END FOR |
| RETURN the absolute differences |
The optimization algorithms were used to find the minimum absolute differences at the roots of the subtrees and at the root of the signature. As output, the exponents corresponding to the minima were also included.
3. Results
In this section, the results of the optimization are presented. The optimal weights of the terms of the aggregations are summarized in Table 3, Table 4, Table 5 and Table 6. The time demands of the signatures are shown in Figure 7.
In Table 3, the weights resulting from the optimizations are shown for a single masonry structure with separated subtrees, as detailed in the previous section. In further investigations, this method would not be useful because the values of the internal vertices depend on the values of the leaves, which are the only measurable parts of the signature. The only point of this test was to see how sensitive the subtrees are to the number of MFs at the leaves and at the root. This model was also used to test the number of necessary samples of the MCM and to find the proper hyperparameters of the GA and PSO.
It can be seen that for the subtrees of Brick, Visual Inspection and Masonry Strength, the MCM results provide lower differences at a comparable time demand: the MCM required 3500 s, the GA 4500 s and PSO 3442 s. The only advantage of the GA in this case is that its exponents are more balanced, ensuring that the values meet the expectations of the CE experts.
In Table 4, the results of the single masonry case are shown for the complete test, with dependent vertices. In this case, the samples were generated at the leaves only, and the optimization was run through the whole signature. Compared to the previous independent case, this is more faithful to real measurements, where, despite the practical instructions, mistakes can be made in the readings and in the assessment. For the subjective parameters, like the visual inspection, it is observable that the PSO method produced the worst results at the internal vertices. Despite this fact, at the root, optimization by the MCM resulted in a difference of 0.2350, which may indicate a high uncertainty. On the contrary, the GA and PSO performed with a near-zero difference; thus, these may have found a much better combination of weights.
Considering the exponents generated by the GA method shows that the weighting is not balanced. In most of the subtrees, the weight was as low as allowed, suggesting that some of the nodes are not important. The optimization was also tested with lower bounds, and this unbalanced weighting occurred even more spectacularly, while the difference did not decrease. This may suggest that, if the engineering problem were ignored, the signature could be simplified into a one-branch tree or, in the most drastic case, into one node.
The exponents resulting from PSO appear to be a mixture of the MCM and the GA, because the weights are balanced while the difference at the root was as low as the GA's difference.
In Table 5, the optimized weights of the WGM aggregations are shown for two assembled masonries. The MCM results are more balanced, as before, and the aggregations at the internal vertices are more accurate. Although the GA result at the root is closer to a minimum value of the objective function, the exponents resulting from the GA are markedly unbalanced, similar to the one-sided weighting of the single masonry case. In the table's GA column, the same weights occur at the Brick and Mortar nodes, and in a different cycle, the same weights occur in a different order inside a subtree. This shows that the optimization method does not account for the original engineering problem; its only task is to find the best settings of the signature. This problem occurs not only with the GA method but also with PSO: in the second sub-signature, corresponding to the second masonry type, a similar weighting is observable, as in the GA's case. This confirms the assumption that, for meta-heuristic algorithms, similarities in the weights occur when multiple masonries are tested together. This similarity does not hold for the MCM, where the random generation ensures a better balance of the exponents but less reliability at the root.
Finally, the new root, assembling the separate signatures, shows the common state of the masonries, where the GA shows the lowest difference, while the MCM, with the highest difference, suggests that the training was unsuitable for optimizing the model.
In Table 6, the case of three assembled masonries is shown. The same importance was assumed for the three masonries, as they had the same importance in the structure of the existing building. Compared to the previous table, the MCM shows the highest differences with balanced weights at most of the vertices, indicating that this method may not be able to handle a large structure like this, or that more samples should be used. Meanwhile, the Exponents with GA column shows similarities to the previous tables, indicating a low sensitivity to the increase in size of the existing FSig. The difference at the root is the same as previously, which forecasts the minimum value of larger signatures and shows the stability of the model.
The PSO method again appears to be an intermediate method, because some aggregations kept the balanced weighting of the MCM and some of them followed the GA's unbalanced weighting. The differences became similarly low to the GA differences. This is also true at the root, where the difference is closer to the GA's than to the MCM's.
Table 7, Table 8 and Table 9 present the confusion matrices obtained by using the exponents of the three kinds of optimization on the dataset provided by Dormány [19]. The results were validated on the single masonry signature, because the measurements were carried out only for that case; the assembled masonry cases cannot be validated yet. Based on the classification mentioned in the introduction, three linguistic categories were used at the root: loadable, repairable and to demolish. In the confusion matrices, the first column presents the results of the destructive tests as references, and the first row contains the assessment results. The validation results are shown as percentages per category.
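A percentage confusion matrix of this kind can be computed in MATLAB, for example with confusionmat from the Statistics and Machine Learning Toolbox; the category coding and the label vectors below are illustrative toy data, not the paper's dataset.

```matlab
% Percentage confusion matrix (sketch); 1 = loadable, 2 = repairable,
% 3 = to demolish. The label vectors are toy data.
yTrue = [2 2 3 1 2 2 3 2];          % destructive-test categories (reference)
yPred = [2 2 2 1 2 1 3 2];          % FSig assessment categories
C = confusionmat(yTrue, yPred);     % rows: reference, columns: assessment
Cpct = 100 * C ./ sum(C, 2);        % percentages per reference category
```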
Each confusion matrix shows that the validation did not work for the worst category. This may signal that the measurement was carried out only on a narrow spectrum of the possible states of masonries. These values were probably too similar; thus, the training tended towards the better states. It is also a common attribute of the tables that most of the datasets belonged to the repairable category; thus, the assessment showed the best reliability in this case. In numbers, 96 cases fell into the repairable category based on the destructive test, while the to demolish category included 11 and loadable 7 samples.
Based on the confusion matrices, the model is unable to handle the lower part of the domain. There were not enough samples to achieve a reliable weighting for the to demolish category. The greatest number of samples belonged to the repairable category; thus, the assessment showed high reliability in obtaining the proper category as the output for each method. The loadable category was also an extreme case, which suffered from few samples, similarly to the other extreme category. The matrices show that the GA is unable to handle the samples from this region, which may be a consequence of its unbalanced weighting. For arriving at the proper decision, the MCM seems the best for this third category, but it also results in a very low success rate.
Summarized by the number of successful predictions, the resulting model of the GA is the best despite its unbalanced and monotonous weighting. However, if the CE problem and reality are also aspects of the investigation, the results of PSO would be better: with more balanced weighting, it was able to handle at least the loadable samples as well.
Figure 7 shows the time demands of the optimizations depending on the number of connected signatures. The tests were run in parallel on 10 threads. The figure shows an increasing difference between the time demands of the different methods. The optimizations were tested on at most five assembled signatures, but the last two are not presented in this paper for practical reasons. The slopes of the regression lines show that the MCM time demand increases more than one and a half times faster than the GA or PSO demands. It is also shown that the time demands of the GA and PSO increase approximately linearly with the number of connected signatures, which provides predictability for further tests. For larger models, the regressions show that PSO would need less time. The MCM time demand is far from a linear increase: it shows a sinusoidal increment.
4. Discussion, Evaluation of Obtained Results
The validation dataset contains 114 individual sets of parameters measured on similar masonries and the results of destructive testing, which enabled a reliable evaluation of a given masonry structure. Validation was performed using 20% of these values, while the other 80% was used for training. The reliability of the model was determined by the ratio of the number of cases showing the same result as the destructive testing to the number of available sets of data. After testing the results of the optimizations 100 times, to make sure that a large number of different combinations of parameters was used for training and validation, the mean of this ratio was 79.53% for the exponents provided by the MCM, 83.05% for the GA, and a comparable rate for the PSO exponents from Table 4. These results may be considered reliable and applicable for a small group of masonries. In practice, more measurements would be necessary to obtain a wider region of datasets to train the model. The confusion matrices showed that the assessment is highly reliable when the masonry is in a repairable state but uncertain when the assessment should result in low- or high-quality masonry.
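The repeated hold-out evaluation described above can be sketched as follows; assessCategory and referenceCategory are hypothetical helpers standing in for the trained FSig assessment and the fuzzified destructive-test category of one record.

```matlab
% Repeated 80/20 hold-out validation over the 114 records (sketch).
nRec = 114;  nRuns = 100;  acc = zeros(nRuns, 1);
for r = 1:nRuns
    idx  = randperm(nRec);
    test = idx(1:round(0.2 * nRec));   % 20% of the records held out
    % ... optimize the exponents on the remaining 80% here ...
    hits = 0;
    for t = test
        hits = hits + (assessCategory(t) == referenceCategory(t));
    end
    acc(r) = hits / numel(test);
end
meanAccuracy = mean(acc);              % around 0.80 in the experiments above
```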
The aim of this research was to assess the state of existing masonry using FSigs and to optimize the weights of the WGM aggregations. The results showed that the signature is only slightly sensitive to the granulation of the MFs at the leaves. On the contrary, the number of MFs at the root determines the smallest possible deviation values, which correspond to the minimum values of the objective function defined by the distance between reference points. Engineering experience dictated the use of three MFs at the root of the FSig of a single masonry. In turn, the previous tests did not cover the description of assembled masonries; thus, the number of MFs could be freely selected to be five to achieve more nuanced linguistic results.
The asymmetry in the aggregations allowed for the encoding of domain-specific preferences for different leaves, which can provide a stronger influence on the outcome. Although the results showed that the exponents determined by the GA provide lower differences at the roots of all tested FSigs, the weighting turned asymmetric at the internal vertices. Generally, this would suggest that some of the assumed vertices are unnecessary and that the model could be simplified by leaving them out. However, the CE problem does not allow us to ignore any of the aspects of observation; thus, the signature was not simplified. Degeneracy through simplifying the model may leave only the root, which cannot be measured directly with non-destructive methods. Moreover, practice offers multiple methods for assessing the state of existing masonry, so it may be possible that, instead of ignoring vertices, more aspects of the testing methods should be included. Another alternative may be to change the structure of the signature or to consider unevenly weighted leaves. In the future, using other restrictions may make the model more balanced. In this regard, the exponents defined by the MCM or PSO would fit the expectations better. According to the success rates of the optimized weights, each of the setups may be similarly useful in practice.
Comparing the results to those in paper [13], where the same signature structure was used for assessing the strength of masonry, the new approach described in the present paper seems more accurate. In the mentioned study, the objective function measured the difference at the root between the crisp destructive test results and the defuzzified values of the assessment. One of the optimization methods used there was also the genetic algorithm, which provided lower differences than the other tested algorithms. However, the new approach compares the MFs, which provides a more consistent interpretation of the destructive test results and the assessment by the fuzzy signature. The differences after optimizing the exponents became lower in this approach: the previous study showed a larger difference between the assessment result and the destructive test, whereas the new approach comparing MFs showed a 0.0010 difference between the reference and assessment values. The success rate of the assessment shows a similar picture: the previous investigation provided proper predictions at a rate comparable to the approximately 80% achieved by each optimized version of the new approach.
The FSig created on an experimental basis was generalized to larger structures consisting of multiple objects of a similar type. This kind of extension simulates parts of an existing building by considering each wall as an individual sub-signature that is aggregated into one root expressing their mutual influence. These tests showed that the combined effect of the elements can be characterized by a root that merges them all. With this observation, the importance of the individual elements can be described in the merged signature. The assembled masonry cases also showed that the objective function has near-zero minimum values that would not change with an increasing size of the signature. During measurement, it would be possible to observe cross-correlations between the measured parameters not only inside one masonry structure but also between adjacent masonry structures. These correlations could not be considered in the signature, because an FSig is not able to handle correlations between parameters on the same level, although the correlations are handled via aggregations similarly to the single masonry case.