1. Introduction
Research on data and methods of their analysis using fuzzy mathematics has now taken shape as an independent direction, which includes methods of fuzzy regression and the analysis of fuzzy time series [1,2,3,4,5,6,7]. We can highlight the main stages of development of this direction.
In the initial stage, studies of the fuzzy regression model were carried out. The second stage was the development of soft-computing methods, within which a large number of studies examined the effectiveness of soft computing for time series analysis. The third stage was the transition from analyzing time series with fuzzy mathematics methods to analyzing fuzzy time series. The development of fuzzy database methods then made it possible to move to the stage of extracting rules from fuzzy (granular) time series.
Within each of the listed stages, a significant part consists of methods for identifying trends and, more broadly, the morphological analysis of time series. The present work belongs to the line of research applying fuzzy mathematics methods to the analysis of discrete time series.
1.1. Trends and Fuzzy Principles for Their Modeling
Trends in a time series are its fundamental characteristic and therefore can tell a lot about the nature of the process behind it. The identification of trends is a significant part of what is traditionally considered to be the morphological analysis of time series [8,9,10,11], including:
The decomposition of the time series into trend and seasonal components, as well as the remainder: the trend shows the general direction of changes over time, seasonality reflects repeating patterns associated with certain periods of time, and the remainder reflects random fluctuations within the time series;
An autocorrelation analysis, which helps identify periodic fluctuations associated with seasonality;
A spectral analysis, which allows one to analyze the cyclicity in a time series and the most important time periods for it.
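The decomposition into trend, seasonal component and remainder described above can be sketched in a few lines. The function name `decompose_additive`, the centered moving average as the trend estimator, and per-phase means for the seasonal component are our illustrative choices, not a method prescribed by the cited works.

```python
import numpy as np

def decompose_additive(x, period):
    """Naive additive decomposition of a series into trend (centered moving
    average), seasonality (per-phase means of the detrended series), and
    remainder, so that x = trend + seasonal + remainder exactly."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Centered moving average as the trend estimate; edges padded by reflection.
    k = period if period % 2 == 1 else period + 1
    pad = k // 2
    xp = np.pad(x, pad, mode="reflect")
    trend = np.convolve(xp, np.ones(k) / k, mode="valid")
    # Seasonal component: average the detrended series over each phase.
    detr = x - trend
    seasonal = np.array([detr[p::period].mean() for p in range(period)])
    seasonal -= seasonal.mean()          # center so the trend keeps the level
    seasonal = np.tile(seasonal, n // period + 1)[:n]
    remainder = x - trend - seasonal
    return trend, seasonal, remainder
```

By construction, the remainder is whatever the trend and seasonal estimates fail to explain, matching the informal description of it as random fluctuations.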
Currently, a broader understanding of morphological analysis as the study of the manifestation of one or another geometric property in a graphical representation of the dynamics of a time series is gaining momentum [12]. A morphological analysis of time series is useful for a better understanding of their dynamics and more accurate forecasting.
There are several methods for constructing and identifying time series trends. Here are the main ones [8,11,13,14,15,16,17,18]: smoothing with a kernel (in particular, the moving average method and exponential smoothing), regression and autoregressive (AR) methods, wavelet analysis, and nonlinear methods (in particular, machine learning and neural networks).
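Of the listed smoothing methods, exponential smoothing is the simplest to sketch; the recursion below is the standard one (s_i = alpha * x_i + (1 - alpha) * s_{i-1}), and the function name is ours.

```python
def exp_smooth(x, alpha):
    """Simple exponential smoothing:
    s[0] = x[0], s[i] = alpha * x[i] + (1 - alpha) * s[i - 1].
    Smaller alpha means heavier smoothing (longer memory)."""
    s = [x[0]]
    for v in x[1:]:
        s.append(alpha * v + (1 - alpha) * s[-1])
    return s
```

A constant series is a fixed point of the recursion, which is a quick sanity check for any smoother used as a trend estimator.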
Real trends are stochastic and differ from ideal mathematical ones in that they contain violations. This does not confuse the researcher, who perceives the trend adaptively and understands when a violation is insignificant and the trend continues, and when a violation interrupts the trend.
Thus, whereas mathematical trends are strict and unambiguous (segments in which the value of the record at each subsequent node is greater than or equal to, or less than or equal to, the value at the present node), stochastic trends depend on the point of view of the researcher and can therefore differ.
Let us call the formalization and search for trends and extrema in a function the trend problem. Its solution, according to the authors, consists of a sequence of answers to the following questions:
What is the trend of a function at a point?
Which parts of the function should be considered definitely trendy?
How do these fragments form a general trend?
What is an extremum of a function?
The solution to the trend problem, according to the authors, should be fuzzy, multiparameter and multiscale in the spirit of wavelets and fractals. By changing the parameters and scale, the researcher gets a complete picture of the trends and selects the ones they need. In addition, a multiscale trend analysis is very useful, objective and can tell a lot about the function as a whole.
The above is fully consistent with the principles of fuzzy modeling, on the basis of which it is supposed to approach stochastic trends. In this regard, we quote Zadeh [19]: “All we need to solve most practical problems is a parameterized family of definitions that, if necessary, would allow a non-standard choice of operators that reflect the characteristic features of a particular application. The advantage of this approach is that by avoiding fixed, concrete-independent definitions, fuzzy set theory and fuzzy logic achieve a pluralism that increases their flexibility and expressive capabilities”.
In this work, such operators will be regression differentiation, regression smoothing, fuzzy trend measure and fuzzy extremum measure.
It should be noted that regression derivatives were used earlier, in a simpler form than in this work, for the classification of time series, which made it possible to determine groups of series similar in morphology using various similarity measures [20,21,22,23,24]. In such problems, the choice of similarity measure affects the classification accuracy to a greater extent than the choice of classification method.
The advantage of similarity measures constructed using regression derivatives is the ability to take into account both positive dependencies, when two time series increase or decrease simultaneously, and negative dependencies, when the values of one time series decrease while those of the other increase, and vice versa [23]. Similar results based on the fuzzy correlation measure constructed by the authors are given in the conclusion.
1.2. Solution of the Trend Problem on the Basis of Discrete Mathematical Analysis
The problem of trends (see Section 1.1) is solved in this work within the framework of discrete mathematical analysis (DMA), a new approach to data analysis that is researcher-oriented and occupies an intermediate position between hard mathematical methods and soft intellectual ones [25,26,27,28,29].
The solution to the problem within the framework of DMA consists of two parts. The first is informal: it explains the researcher’s logic, introduces the necessary concepts, and explains the scheme and principles of the solution. The second is of a formal nature: with the help of the DMA apparatus, all concepts receive strict definitions within the framework of fuzzy mathematics and fuzzy logic, and the scheme and principles become algorithms.
We call the first, informal part of solving the trend problem within the framework of DMA the logic of the researcher’s trends (RTL) and formulate it in the form of the following provisions:
There is a record f on a finite uniform set of nodes T. At each node, the researcher vaguely but unambiguously sees a positive, negative or neutral trend of f.
The researcher considers positive (negative) trends for f to be segments in T consisting of positive and neutral (negative and neutral) nodes from T.
Opposite trends intersect at neutral nodes, among which the researcher can choose an extremum for f.
The further, main part of the work is devoted to the transformation of RTL into algorithms (the second part of solving the problem of trends within the framework of DMA): fuzzy measures of the trend and extremum are constructed, expressing the researcher’s opinion about the presence of a trend and extremum in a record in a particular node. The combined use of these measures makes it possible in a discrete situation to repeat the classical results of mathematical analysis regarding trends and extrema for piecewise smooth functions.
The measures are based on discrete regression derivatives. Their definition, study and rationale for use are given below. Having a discrete derivative, there is a natural desire to repeat, on its basis and in a discrete situation, the scheme of the classical mathematical analysis approach to trends and extrema. This determines both the motivation and the goals of this work.
1.3. Regression Approach to Derivatives (Continuous Case)
Let the function f be integrable on an interval I containing zero in its interior. Then, for a sufficiently small ε > 0, the segment [−ε, ε] is contained in I. Let us denote by f_ε the restriction of f to the segment [−ε, ε] and calculate the projection P_ε f of the function f_ε in the space L²[−ε, ε] onto the two-dimensional subspace of linear functions.
Statement 1. If a function f has a tangent at zero, then, as ε → 0, the linear projection tends to it.
Proof. Let
e₁(x) = 1/√(2ε), e₂(x) = √(3/(2ε³)) x
be an orthonormal basis in L²[−ε, ε], obtained from the natural basis 1, x by a Gram–Schmidt orthogonalization [30]. Then the projection is
P_ε f = ⟨f_ε, e₁⟩ e₁ + ⟨f_ε, e₂⟩ e₂ = (1/(2ε)) ∫_{−ε}^{ε} f(x) dx + (3/(2ε³)) ( ∫_{−ε}^{ε} x f(x) dx ) x.
Additionally, the function f is differentiable at zero:
f(x) = f(0) + f′(0) x + α(x) x, where α(x) → 0 when x → 0.
The limit
(1/(2ε)) ∫_{−ε}^{ε} f(x) dx → f(0) as ε → 0
in the free term of the projection is explained by the mean value theorem [31].
Let us analyze the expansion coefficient at x:
(3/(2ε³)) ∫_{−ε}^{ε} x f(x) dx = (3/(2ε³)) [ f(0) ∫_{−ε}^{ε} x dx + f′(0) ∫_{−ε}^{ε} x² dx + ∫_{−ε}^{ε} α(x) x² dx ] = f′(0) + (3/(2ε³)) ∫_{−ε}^{ε} α(x) x² dx.
The last integral tends to zero as ε → 0:
| (3/(2ε³)) ∫_{−ε}^{ε} α(x) x² dx | ≤ sup_{|x| ≤ ε} |α(x)| → 0.
□
1.4. Regression Approach to Derivatives (Discrete Case)
We postpone the consequences of the proven statement and its further development in the continuous case until Appendix A and now discuss its significance mainly for the analysis of data in a discrete situation.
Replacing the tangent to f with the projection for small ε makes it possible to define the tangent for discrete functions, since the projection is nothing more than a linear regression for f on [−ε, ε] and can be generalized to the discrete case.
The limit transition ε → 0 is replaced in the discrete case by a fuzzy weight structure of proximity to node t in a finite set of nodes T, the domain of definition of the function f.
The proven statement gives grounds to consider the linear regression of the function f with respect to the weight structure on T as a tangent for f at t, and its slope as the derivative of f at t.
Having a derivative for f, there is a natural desire to repeat on its basis in a discrete situation the classical approach to trends and extrema from mathematical analysis.
2. Discrete Regression Derivatives
Statement 1 proved above allows us to conclude that for a function f that is differentiable at zero, its linear continuous regressions on the segments [−ε, ε] tend to the tangent as ε → 0.
This approach to differentiation in the continuous case allows a continuation to the discrete case, since discrete regressions are just as efficient and fundamental as continuous ones.
Let T be a finite discrete segment with equally spaced nodes.
Let us call a segment in T a piece of T without gaps. In addition, we call its first and last nodes the beginning and end of the segment, respectively.
We consider any real function f on T to be a time series (record) and work below in the space of such functions.
The analysis by a researcher of the behavior of a time series involves considering its values not only at a separate node but also simultaneously taking into account the values in some vicinity of it. This is precisely why the segment T needs to be localized at each of its nodes. The localization can be implemented using a fuzzy structure on T that plays the role of a neighborhood of node t and expresses the proximity to it of the individual nodes, normalized at t: the value of the structure at a node is a measure of the proximity of that node to t.
We consider a proximity measure on T to be a set of such fuzzy structures, one for each node of T.
The measure is the only parameter in the theory of trends and extrema constructed below and is therefore very important. Its choice is entirely determined by the researcher. The authors’ choice is the family in Definition 1.
Definition 1. The authors’ family of proximity measures depends on two parameters: p, the scale parameter, and r, the viewing radius (Figure 1).
The family expresses the authors’ point of view on localization: a researcher analyzing a record f at node t first selects the boundary of the view (parameter r) and then its thoroughness (scale, parameter p). The required localization can be achieved using the family in two ways: either by the parameter r tending to zero, or by the parameter p tending to infinity. In this paper, the authors chose the second path: the measure has an interesting dependence on the scale parameter p, which allows one to “look at the record from a different height”.
The parameters p and r are chosen by the researcher. In this work, the measure is used for trend analysis, which can be simple (p and r fixed) or multiscale (p changes, r fixed). The work focuses on multiscale analysis. For its objectivity and completeness, the radius r is assumed to be equal to a quarter of the length of the segment T.
Figure 1 shows the dependence of the proximity to node t on p for r equal to a quarter of the length of the segment T.
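The exact formula of Definition 1 did not survive in this copy of the text, so the sketch below uses one plausible family consistent with the description: weight 1 at the node itself, decay to 0 at distance r, and sharper concentration around t as p grows. Both the formula and the name `proximity` are assumptions for illustration only.

```python
import numpy as np

def proximity(t_bar, t, p, r):
    """Hypothetical instance of the localization family from Definition 1:
    weight 1 at t_bar == t, weight 0 for |t_bar - t| >= r, and larger p
    concentrating the weight near t (a 'view from a greater height' for
    small p, a sharper local view for large p)."""
    d = np.abs(np.asarray(t_bar, dtype=float) - t)
    return np.clip(1.0 - d / r, 0.0, None) ** p
```

With this family, letting p grow at fixed r indeed localizes the structure at t, which is the second of the two localization paths described in the text.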
The limit transition ε → 0 is replaced on T by a proximity measure distributing weights δ_t(t̄) over T. With that said, we should consider the linear regression of f based on the fuzzy structure δ_t as the tangent to the function f at node t. Associated with δ_t is the weighted least-squares functional
Φ_t(a, b) = Σ_{t̄ ∈ T} δ_t(t̄) (f(t̄) − (a t̄ + b))².
The values a_t and b_t of the parameters of the tangent are the minimum point of Φ_t; therefore, a_t and b_t satisfy the corresponding system of normal equations. To build trends, the formulas in (3) are used. A simpler expression for a_t and b_t is given in Appendix A.1.
Definition 2. The slope coefficient of the regression tangent is called the regression derivative of f at t. The corresponding function on T is called the regression derivative of f, and the functional correspondence taking f to it is a linear operator on the space of records, called regression differentiation.
Definition 3. The value of the regression tangent of the function f at t is called the regression value of f at t. The corresponding function on T is called the regression smoothing of f, and the functional correspondence taking f to it is a linear operator on the space of records, called regression smoothing.
A special notation is used below for differentiation and smoothing in the case of the measure from Definition 1.
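Both operators reduce to one weighted linear regression with the localization weights. The following sketch (our naming) computes the regression derivative (slope) and the regression value (fitted value at the node) via the standard weighted least-squares formulas.

```python
import numpy as np

def regression_derivative(f, w, t_idx, nodes=None):
    """Weighted linear regression of the record f against node positions,
    with localization weights w centered at node t_idx. Returns the pair
    (slope, fitted value at t_idx): the regression derivative and the
    regression smoothing of f at that node."""
    f = np.asarray(f, dtype=float)
    x = np.arange(len(f), dtype=float) if nodes is None else np.asarray(nodes, float)
    w = np.asarray(w, dtype=float)
    W = w.sum()
    mx = (w * x).sum() / W                     # weighted mean of positions
    mf = (w * f).sum() / W                     # weighted mean of values
    slope = (w * (x - mx) * (f - mf)).sum() / (w * (x - mx) ** 2).sum()
    value = mf + slope * (x[t_idx] - mx)       # tangent evaluated at the node
    return slope, value
```

On an exactly linear record, the weighted regression recovers the line regardless of the weights, which is a convenient correctness check.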
The theoretical justification for the regression approach to differentiation presented in this work finds additional empirical confirmation in the good performance of regression smoothing: with the same review (parameter p) on smooth functions, regression smoothing works better than conventional averaging. In Figure 2, regression smoothing is shown with a solid line and conventional averaging with a dotted line. The visual comparison is supported by the quadratic discrepancy with the ideal. The advantage of regression smoothing over conventional smoothing is especially visible at the ends of both the synthetic smooth record (Figure 2) and the real one (Figure 3). Until the end of this paper, these records remain in play and serve as a testing ground for the trends and extrema proposed in this work.
Figure 3 shows the performance of the regression smoothing on a real magnetic storm record with the same p-scale parameters as in Figure 2 for the synthetic one. The above figures confirm the convergence, proved in Appendix A.1, of the regression smoothing to the record f as p → ∞.
3. Trend Measure: Preliminary Solution to the Trend Problem
The assumption that a researcher looking at a record f can determine its trend at any node is central to the researcher’s trend logic. Based on it, we construct its implementation using a fuzzy trend measure.
The researcher’s view of the record f is formalized by its regression smoothing based on the proximity (localization) measure on T chosen by the researcher. Next, the researcher is interested not in the smoothing itself but in the result of its differentiation by the regression differentiation operator (4). This value is called the elementary dynamics of the record f at node t based on the chosen localization. The totality of these values serves as the basis for constructing a fuzzy trend measure. Its value in the fuzzy scale [0, 1] expresses the degree of confidence of the researcher (the measure of their grounds) in considering the trend of the record f at node t positive.
It is constructed as follows: the researcher assigns a weight to the elementary dynamics at each node near t. The argument for a positive trend of f at node t consists of all positive dynamics, and the argument against consists of all negative dynamics, taken with their weights.
The trend measure is taken to be the ratio of the sum of the weights of positive dynamics (the argument “for” the positive trend of f at node t) to the total sum of weights.
If the measure at t exceeds 1/2, then the total weight of increasing dynamics is greater than the total weight of decreasing dynamics; therefore, the researcher considers node t positive according to the trend for f, and the value of the measure expresses the degree of justification of this decision.
Similarly, if the measure at t is less than 1/2, then node t is considered negative according to the trend for f, and neutral in the case of equality.
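As we read the construction, the trend measure at a node is the weighted vote of positive elementary dynamics against negative ones; the sketch below implements that reading (the treatment of exactly zero dynamics, which we leave out of both sums, is our assumption).

```python
import numpy as np

def trend_measure(dyn, w):
    """Fuzzy trend measure at a node: the weighted vote for a positive trend.
    dyn -- elementary dynamics (regression derivatives of the smoothing)
           at the nodes around the node of interest,
    w   -- localization weights of the same nodes.
    Returns a value in [0, 1]; > 1/2 means the node is positive by trend,
    < 1/2 negative, and exactly 1/2 neutral."""
    dyn, w = np.asarray(dyn, float), np.asarray(w, float)
    pro = w[dyn > 0].sum()       # argument "for" a positive trend
    con = w[dyn < 0].sum()       # argument "against"
    total = pro + con
    return 0.5 if total == 0 else pro / total
```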
Let us summarize the intermediate result: based on the trend measure, the answer to the first question formulated in the introduction has been obtained: “What is a trend at a point?”.
Next, partitioning T into positive, negative, and trend-neutral nodes allows one to simultaneously answer the following two questions of the trend problem: “Which fragments of the record should be considered unconditionally trendy?” and “How do these add up to overall final trends?”
The fact is that in real conditions, there are very few neutral nodes, or none at all. Therefore, it seems natural to consider segments of the record f consisting entirely of positive and neutral (negative and neutral) nodes as positive (negative) trends for f.
Definition 4. Trends replace each other and can intersect only at neutral nodes, forming an almost disjoint covering of T.
We call the partition a preliminary solution to the trend problem for the record f based on the chosen proximity measure. An explanation of its preliminary nature is given below; for now, we note that, while strongly depending on the measure, the solution turns out to be very effective and gives good results at different scales p on difficult real records with, in our opinion, a large viewing radius r. It was this circumstance that served as the reason for writing this work.
The proof is presented in the form of a complete display of the solution to trends: record → smoothing → trend measure with a partition applied to it → partition on the smoothing → partition on the record f. The presence of the scale p requires additional notation. Continuing (4) and omitting the viewing radius r, we introduce scale-indexed notation for the smoothing, the elementary dynamics, the trend measure and the partition.
In order not to confuse the trend measure with the trend segments obtained on its basis, in the latter we agree to indicate the dependence on the scale p in the form of an argument.
The effectiveness of working in difficult real-world conditions is the main criterion in data analysis, a largely empirical discipline. According to the authors, the success of the trend measure in the trend problem rests on two properties: stability and adequacy.
Stability is a general property of the construction of the measure. Figure 9 illustrates this; Figure 9b,c shows the trend solution at one scale for a smooth record and its perturbation, indicated in Figure 9a in black and green, respectively.
Adequacy: trends obtained on the basis of the measure are consistent with the scale p: among them, there are no dynamics that are small in modulus on the smoothing. As noted above, it was precisely this circumstance that served as the reason for this work. The explanation of adequacy is at the moment semiempirical: according to the justification of regression differential calculus given at the beginning of the work and in Appendix A.1, regression derivatives and values inherit the fundamental properties of linear regression, and the trend measure depends on them very naturally. Therefore, if the effect for trends through regression derivatives exists, then it must necessarily manifest itself through the trend measure. This is illustrated in Figure 10, which is discussed in detail below.
The stability and adequacy of the solution to the trend problem made it possible to answer the second and third questions relatively simply, that is, to construct the (currently) final versions of the trend sections of the record f at scale p.
This does not always happen. The traditional solution to the trend problem based on smoothing, for example, polynomial smoothing, uses the standard mathematical understanding of trends: trends in a record are taken to be the mathematical trends of its smoothing. In this solution, the problem of small dynamics remains: on the one hand, the smoothing must scan the record sufficiently; on the other hand, the stochastic nature of the record leads to the appearance of small dynamics in the smoothing (short segments of increase/decrease), which the mathematical understanding of the trend will highlight as separate trends on the record.
Let us turn to Figure 10: the classic solution to trends for the record f based on smoothing is shown in Figure 10b, and the solution currently proposed by the authors is in Figure 10c. The fragment selected in Figure 10d,e illustrates the above and shows the greater stability of the proposed solution compared to the classical one. The solution is also better than the authors’ previous one, where the trend was obtained in several stages, which required solving the difficult problem of combining fragments of the record f into a single trend.
However, the solution, despite all the advantages mentioned above, has some inaccuracy that does not allow it to be considered the final solution to the trend problem (Figure 11). To remove it, we need an extremum measure that eliminates the inaccuracy and adds stability and adequacy to the solution.
4. Extremum Measure: The Final Solution to the Trend Problem
In the trend problem, there is one last question, about extrema. Of course, the first answer to this question is similar to the classical one: extrema are the boundaries between opposite trends. On this path, the problem of their existence arises: as noted above, due to the stochasticity of f and the discreteness of T, there are few or no neutral nodes, and it is within them that the extrema should lie. The second option, the most natural of the nonempty ones, is as follows: if a positive trend is replaced by a negative one, then the maximum should be taken as whichever of the end of the former and the beginning of the latter carries the larger value of the record f; conversely, if a negative trend is replaced by a positive one, then the minimum should be taken as whichever of the end and the beginning carries the smaller value of f.
But even after this, some problems remain: the global nature of the trend measure makes the partition stable and quite satisfactory on the one hand, but on the other hand, it entails some inaccuracy.
We construct a fuzzy extremum measure, similar to the trend measure: its value in the fuzzy scale [0, 1] expresses the degree of confidence of the researcher (the measure of their grounds) in considering node t the maximum of the function f. Together, the two measures solve the problem of trends: they finally determine the trends and extrema of the record f.
The construction of the extremum measure begins in the same way as that of the trend measure: the researcher assigns a weight to the elementary dynamics at each node near t. If a node lies to the left of t, then its weight speaks in favor of a maximum at t for f when the dynamics there are positive (climbing an imaginary mountain with a peak at t), and against when they are negative. To the right of t, everything is the other way around: the weights of negative dynamics speak in favor (descent from an imaginary mountain with its top at t), and the weights of positive dynamics speak against. The measure of the extremum is taken to be the ratio of the sum of the arguments “for” to the total sum of weights.
By analogy with the trend partition, we introduce the partition of T into alternating segments obtained by switching the extremum measure (Figure 12, Figure 13, Figure 14, Figure 15 and Figure 16). For a node t, we will also refer to the segment of this partition containing it.
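The maximum measure described above (rising dynamics on the left of the node and falling dynamics on the right vote “for”) can be sketched as follows; the name and the handling of the node itself are ours.

```python
import numpy as np

def max_measure(dyn, w, t_idx):
    """Fuzzy maximum measure at node t_idx over a window of elementary
    dynamics dyn with localization weights w. To the left of t_idx, positive
    dynamics argue for a peak (climbing the mountain); to the right,
    negative dynamics do (descending it). Returns a value in [0, 1]."""
    dyn, w = np.asarray(dyn, float), np.asarray(w, float)
    idx = np.arange(len(dyn))
    pro = w[(idx < t_idx) & (dyn > 0)].sum() + w[(idx > t_idx) & (dyn < 0)].sum()
    con = w[(idx < t_idx) & (dyn < 0)].sum() + w[(idx > t_idx) & (dyn > 0)].sum()
    total = pro + con
    return 0.5 if total == 0 else pro / total
```

For a clean rise-then-fall profile, the measure equals 1 exactly at the peak, which matches the intended reading.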
The scheme for displaying this partition is exactly the same as for the trend partition: record → smoothing → extremum measure with the partition applied to it → partition on the smoothing → partition on the record f. In Figure 12, Figure 13 and Figure 14, the full scenario is shown for a smooth function at three scales, and in Figure 15 and Figure 16, for a real record at one scale.
Let e be the version of the maximum obtained above on the basis of the trend measure. We say that it allows a correction if the extremum measure indicates a better position, and the correction itself consists in moving e to the nearest maximum of the extremum measure on the segment of the partition containing it. Similarly, if e is a version of the minimum obtained above, then its correction consists in moving e to the nearest minimum of the extremum measure on that segment. Extrema that do not allow corrections are preserved. This can happen in the following situations.
First, the extremum e may already be in the correct position, so that the required correction is zero; this happens often and confirms the high efficiency of the extremum measure, as well as of the solution to the trend problem based on it.
Second, the extremum e may be inconsistent with the extremum measure: the measure at e shows the opposite of its essence, so that the maximum seems to the researcher to lie in a lowland, and the minimum on a hill.
Let us look at this in more detail, assuming that the extremum is a maximum. In the notation of (5) and (6), consider the arguments for and against the maximum of f at this node to the left of it, and, similarly, the arguments for and against the maximum to the right of it.
At such a node, there is an equilibrium between the left and right arguments, which allows us to conclude that the one-sided extremalities are equivalent: a maximum on the left for f is also a maximum on the right. Hence, if the maximum does not allow any correction due to an inconsistency with the extremum measure, then it is not a maximum on either side. It is probably possible to construct an artificial example of this situation; however, the authors have never encountered it in real records. They are not worried about the possible appearance of extrema of this kind, since they consider them unstable: with increasing scale p, such extrema either disappear or turn into normal extrema.
Third, the extremum e can be consistent with the extremum measure but not unique on its segment. In this case, its trace will necessarily contain an extremum that does not allow any correction for the second reason.
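The correction step itself can be sketched as follows; since the precise admissibility condition was lost in this copy of the text, the 1/2 threshold below, like the function name, is our assumption.

```python
import numpy as np

def correct_maximum(e, lo, hi, mu_max):
    """Sketch of the correction of a preliminary maximum e lying in the
    segment [lo, hi] of the extremum-measure partition: move e to the
    nearest point where mu_max attains its maximum on the segment,
    provided that maximum exceeds 1/2 (our assumed condition);
    otherwise, e is preserved as-is."""
    seg = np.asarray(mu_max[lo:hi + 1], dtype=float)
    if seg.max() <= 0.5:
        return e                              # inconsistent with the measure
    best = np.flatnonzero(seg == seg.max()) + lo
    return int(best[np.abs(best - e).argmin()])  # nearest maximizer
```

The minimum case is symmetric: apply the same procedure to the negation of the measure.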
Let us summarize: the extrema obtained after correction are considered final, and the segments between them are considered the final trends of the record f. We retain their previous designations, noting that after correction, they are the result of the joint work of the trend and extremum measures (Figure 17).
The correction of extrema for a smooth record is shown in Figure 17, and for a real record in Figure 18, according to the scheme: record → smoothing → trend measure with preliminary extrema in strokes → extremum measure with preliminary extrema in strokes and their correction → final solution to the trend problem on the smoothing → preliminary solution to the trend problem on the smoothing for comparison → final solution to the trend problem on the record f.
5. Various Scales
As mentioned above, there are two dynamic scenarios for tending to node t from the position of the family in Definition 1: the first is r → 0 for a fixed p, and the second is p → ∞ for a fixed r. In this article, the authors chose the second path, considering that the behavior at a large radius r gives a more objective dynamic picture of localization at t, since a large number of nodes take a nontrivial part in it (see Definition 1 and the text after Figure 1).
The stability and adequacy of the solution to the problem of trends, together with the convergence of the smoothings to f as p → ∞ established in Appendix A.1, give reason to believe that a simultaneous analysis of the partitions and of the trend and extremum measures for different p’s can be useful and allow us to gain knowledge about f at a new level.
The scale parameter p is assumed to belong to some discrete uniform segment P. The initial scale is usually equal to zero, and the final scale plays the role of infinity. The choice of P is up to the researcher.
The parametric families of trend and extremum measures, like the wavelet spectrum, characterize the trendiness and extremality of f on a two-dimensional grid at different nodes and scales. Let us use them to determine the hierarchy of extrema of f. The very ability to see the hierarchy of extrema presupposes a multiscale view of the record by the researcher. First, one looks at the record from the greatest height, that is, at the largest scale. Then, one gradually descends lower, making the viewing scale smaller. Along this path, extrema appear, forming chains. The latter express the migration dependence of an extremum on the scale and generate a hierarchy of extrema: the earlier a chain appears, the more significant the corresponding extremum for the record f.
What was said above, according to the scheme “record → different-scale partitioning → migration of extrema → hierarchy of extrema on the record f”, is illustrated for a noisy smooth record in Figure 19 and for a real record in Figure 20.
Definition 5. Let a segment in the final solution of trends at the next scale level contain the extremum. We call the migration of the extremum the same-oriented end of that segment.
The maximal chains are migration scenarios of extrema on the scale grid for the record f. For any extremum, consider the chain passing through it; note that the extremum can be internal in it.
Definition 6. The weight of extremum e is the exponent of the chain containing it.
Next, we take the last level of the scale and all its extrema for f, and order them by weights; thus, the most fundamental extremum for f is the one with the minimum weight.
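Definitions 5 and 6 suggest the following chain-tracing sketch. The data layout (a per-level map from an extremum to its migrated position one level up) and the reading of the “weight” as the index of the level where the chain begins are our hypothetical interpretation of the text, offered for illustration only.

```python
def extremum_weight(e, migrations):
    """Hypothetical reading of Definitions 5-6. 'migrations' is a list of
    dicts: migrations[k] maps an extremum position at level k to its
    position at the coarser level k - 1; absence means the chain starts
    at level k. The extremum e lives at the finest level
    (index len(migrations)). The returned weight is the index of the
    level where the chain through e begins; a smaller weight marks a
    more fundamental extremum."""
    level = len(migrations)
    pos = e
    for k in range(len(migrations) - 1, -1, -1):
        if pos not in migrations[k]:
            break                 # the chain does not reach level k - 1
        pos = migrations[k][pos]  # migrate one level coarser
        level = k
    return level
```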
The identification of trends using the trend and extremum measures is stable, and therefore a multiscale analysis based on these measures is stable and informative. The algorithm for migrating extrema (constructing their chains) proposed in this work is effective only if the quality of their determination is high. The classical approach to trends based on smoothing (for example, polynomial) with the standard mathematical understanding of trends is unstable and not suitable for such an algorithm: the continuation of a really important extremum at one scale level can become a weak (unjustified) extremum at the next level, which will lead the migration (chain) of extrema in the wrong direction. As confirmation, Figure 21 and Figure 22 present a multiscale solution of trends based on the strict mathematical understanding for the same records f and at the same scales p as the solutions in Figure 19 and Figure 20. Omitting the details of their comparison, let us pay attention only to the narrow red wedge in Figure 21. It is associated with the appearance of unjustified maxima of high rank, while in fact there should be only one significant minimum; it is this minimum that is shown in Figure 19d, and the corresponding chain of migrations is shown in yellow in Figure 19c.
Note that swapping the roles of the parameters p and r leads to another dynamic implementation of the above scenario with partitioning by the same measures.
6. Trends and Fuzzy Logic
The trend and extremum measures make it possible to use fuzzy logic in a further study of the record f. The authors plan this for the future, and in this work, we provide two previews of that research.
In addition to the trend and extremum measures, we take into consideration their fuzzy negations. According to (5) and (6), the former measures are responsible for the increase and the maximum of f; therefore, their negations are responsible for the decrease and the minimum of f, respectively. We display the manifestation of their fuzzy disjunction on the record f in a color scale (Figure 23):
Cyan ↔ manifestation through an increase: ;
Violet ↔ manifestation through a decrease: ;
Red ↔ manifestation through a maximum: ;
Blue ↔ manifestation through minimality: .
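This colour coding can be sketched as follows, assuming the four measure values are already computed at every node; the array names and the sample values are placeholders of ours, not the authors' data.

```python
# Illustrative sketch: classifying each node of a record by whichever of the
# four fuzzy measures dominates there, mirroring the colour scale of Figure 23.
COLORS = ["cyan", "violet", "red", "blue"]  # increase, decrease, maximum, minimality

def color_code(mu_inc, mu_dec, mu_max, mu_min):
    """Return, for every node, the colour of the strongest measure."""
    coded = []
    for vals in zip(mu_inc, mu_dec, mu_max, mu_min):
        coded.append(COLORS[max(range(4), key=lambda i: vals[i])])
    return coded

print(color_code([0.9, 0.1], [0.0, 0.8], [0.05, 0.1], [0.05, 0.0]))
# ['cyan', 'violet']
```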
Such an encoding of the record by the measure , together with the final solution to the problem of trends for f in the form of a partition , allows us to move further in understanding the behavior of the record through trends.
A few first observations: to be specific, let the trend be . In the regular case, an increasing trend is a sequential alternation of blue, cyan and red sections (minimality, increase and maximum). Similarly, a decreasing trend is an alternation of red, violet and blue sections (maximum, decrease and minimum). The proportions between these parts indicate both the nature of the extrema (the trend boundaries) and the trend itself: the larger the central part relative to the others, the more singular the extrema and the more pronounced the trend (Figure 24, , an increasing trend containing node 3000 and a decreasing trend containing node 3500).
In addition, red or blue inclusions may appear in the central phase: they are outliers within the trend and indicate its stochastic nature (Figure 25, , an increasing trend containing node 3000).
Considering Boolean logic to be part of fuzzy logic, we present a second direction of further research related to it. It concerns the dynamic correlation of the records f and g on T in the form of a fuzzy measure . It is constructed similarly to the measures and : the researcher selects a node t and a point of view on T, and each joint elementary dynamics is then assigned a weight . All equally oriented elementary dynamics , taken with their weights, argue for the correlation of f and g at t, while all oppositely oriented elementary dynamics argue against it. The correlation measure is defined as the ratio of the sum of the weights "for" to the total sum of weights.
Fuzzy negation is a measure of the anticorrelation (multidirectionality) of the records f and g. The correlation of the functions f (Figure 26a) and g (Figure 26b) for proximity on three scales is shown in Figure 27, Figure 28 and Figure 29: the areas where ( ) are shown on the regression smoothings and in red and blue, respectively.
7. Conclusions
In classical mathematical analysis, the concept of locality is based on passage to the limit and thus has an infinitesimal character. For this reason, the problem of finding trends for piecewise smooth functions reduces to determining the intervals on which the derivative has constant sign.
In the discrete case, within the framework of DMA, a comparative, fuzzy, multiscale perception of locality is natural. It is precisely this perception of locality that matters for analyzing discrete data and understanding the dynamics of the processes that these data express.
Solving the problem of determining trends in a discrete time series provides only a preliminary fragmentation of the process. Without identifying the relationships between trends, a deep understanding of the process dynamics, which is achieved by constructing a hierarchy of trends and extrema, is impossible.
The theoretical justification of the regression approach to differentiation presented in this work, first, answers the question "What is discrete differentiation?" and, second, outlines a path toward solving the trend problem at different scales within the classical framework. This path consists in transferring the discrete solution of the trend problem proposed here, based on the measures of trend and extremum, to the continuous case by replacing the sum in constructions (5) and (6) with an integral. The efficiency of the discrete solution allows us to hope for success in the continuous case.
Regarding the future research plans announced in Section 6, we add the following:
A comparative analysis of the solution to the trend problem based on the scale parameter p at a fixed viewing radius with the solution to the trend problem based on the viewing radius r at a fixed scale parameter .
The trend measures and are very convenient for comparing records f and on scales p and : such a comparison can be any functional distance between the fuzzy measures and on the common domain of their definition T. The fuzzy weight of the comparison is chosen by the researcher. The overall conclusion for the set will give a final comparison of a new type between the records f and : a measure of similarity that can serve as the basis for clustering records.
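One possible functional distance of the kind just mentioned can be sketched as follows; the L1 mean distance and the resulting similarity score are illustrative choices of ours, not a prescription from the paper.

```python
# Illustrative sketch: comparing two records via a functional distance between
# their fuzzy trend measures on the common domain T. An L1 mean distance is
# one simple choice; 1 - distance then serves as a similarity in [0, 1].
def measure_distance(mu_f, mu_g):
    """Mean absolute difference of two fuzzy measures on the same nodes."""
    assert len(mu_f) == len(mu_g)
    return sum(abs(a - b) for a, b in zip(mu_f, mu_g)) / len(mu_f)

def similarity(mu_f, mu_g):
    return 1.0 - measure_distance(mu_f, mu_g)

print(round(similarity([0.2, 0.8, 1.0], [0.2, 0.6, 0.9]), 6))  # 0.9
```

Such pairwise similarities can be assembled into a matrix and fed to any standard clustering procedure.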
The last direction of the authors' further research, similar to the study of wavelet spectra, is related to the migration of extrema [18,32,33]. It involves two stages: the construction of migration chains of extrema and their subsequent multifractal analysis (Gibbs sums, scaling exponent, Hölder index). The stage of constructing the migration chains of extrema is described in the present article.
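The first step of the multifractal stage, the Gibbs (partition) sums, can be sketched under an illustrative assumption about how a migration chain is turned into a box measure; here we take the measure of a box to be the total displacement of chain nodes inside it, which is our stand-in, not the authors' construction.

```python
# Hypothetical sketch of a Gibbs (partition) sum Z(q, box): the domain is cut
# into boxes of a given size, each box gets a measure (here, the summed
# absolute displacements falling into it), and Z is the sum of measures**q.
def gibbs_sum(displacements, box, q):
    """Z(q, box) over non-empty boxes of width `box`."""
    z = 0.0
    for start in range(0, len(displacements), box):
        m = sum(abs(d) for d in displacements[start:start + box])
        if m > 0:  # empty boxes are conventionally skipped
            z += m ** q
    return z

disp = [1, 0, 2, 1, 0, 0, 3, 1]
print(gibbs_sum(disp, 2, 2.0))  # box measures 1, 3, 0, 4 -> 1 + 9 + 16 = 26.0
```

The scaling exponent is then estimated from the slope of log Z against the log of the box size, in the standard multifractal manner.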
In conclusion, we note the following. Regression motifs in the analysis of discrete series are present, in particular, in the form of F-transformations (more precisely, -transformations for differentiating a series). Following Zadeh's principle of incompatibility, they are focused on data analysis for the purpose of decision making. Thus, during localization, F-transformations do not deal with the entire family of proximity measures but only with a certain sample , where , in order to simplify calculations effectively [34].