Article

Discrete Integral and Discrete Derivative on Graphs and Switch Problem of Trees

by
M. H. Khalifeh
* and
Abdol-Hossein Esfahanian
Department of Computer Science and Engineering, Michigan State University, East Lansing, MI 48824, USA
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(7), 1678; https://doi.org/10.3390/math11071678
Submission received: 3 February 2023 / Revised: 23 March 2023 / Accepted: 27 March 2023 / Published: 31 March 2023
(This article belongs to the Special Issue Advances in Graph Theory: Algorithms and Applications)

Abstract

For a vertex and edge weighted (VEW) graph $G$ with a vertex weight function $f_G$, let $W_{\alpha,\beta}(G)=\sum_{\{u,v\}\subseteq V(G)}[\alpha f_G(u)\times f_G(v)+\beta(f_G(u)+f_G(v))]\,d_G(u,v)$, where $\alpha,\beta\in\mathbb{R}$ and $d_G(u,v)$ denotes the distance, i.e., the minimum sum of edge weights over all the paths connecting $u,v\in V(G)$. Assume $T$ is a VEW tree and $e\in E(T)$ fails. If we reconnect the two components of $T-e$ with a new edge $\epsilon$ such that $W_{\alpha,\beta}(T_{\epsilon\backslash e}=T-e+\epsilon)$ is minimum, then $\epsilon$ is called a best switch (BS) of $e$ w.r.t. $W_{\alpha,\beta}$. We define three notions for VEW graphs: convexity, discrete derivative, and discrete integral. As an application of these notions, we solve some BS problems for positively VEW trees. For example, assume $T$ is an $n$-vertex VEW tree. Then, for the inputs $e\in E(T)$ and $w,\alpha,\beta\in\mathbb{R}^+$, we return $\epsilon$, $T_{\epsilon\backslash e}$, and $W_{\alpha,\beta}(T_{\epsilon\backslash e})$ with the worst average time of $O(\log n)$ and the best time of $O(1)$, where $\epsilon$ is a BS of $e$ w.r.t. $W_{\alpha,\beta}$ and the weight of $\epsilon$ is $w$.

1. Introduction and Notations

Unlike Euclidean space, which is equipped with coordinate systems, a network offers no built-in frame of reference: standing at a node, we cannot directly see where we are or how to reach our goal. On the other hand, analyzing a network using local information can lower the cost of tasks [1,2,3,4,5,6]. Preprocessing may help to establish a network coordinate for local tasks, so that the destination can be reached step by step.
This paper defines discrete derivative, discrete integral, and convexity notions for vertex and edge-weighted graphs, which will help with local tasks. To do that, we choose the common definition of distance for edge-weighted graphs in the literature, which can be generalized or modified to satisfy metric properties. Applying the notions above, we design and solve some tree-related problems.
Unless otherwise stated, we assume a graph is weighted (on vertices and edges) and connected, without loops or multiple edges. By saying that a graph $G$ is $(f_G,w_G)$-weighted, we mean that the vertex weight function is $f_G$ and the edge weight function is $w_G$. The weight of a path $P_G(v_1,v_n)=v_1v_2\cdots v_n$ in $G$ is $\sum_{i=1}^{n-1}w_G(v_iv_{i+1})$, and $d_G(u,v)$ denotes the distance between $u,v\in V(G)$, that is, the minimum weight over all the paths connecting $u$ and $v$. Note that the distance of a vertex to itself is zero, and the distance between two vertices is infinite if they are not connected by a path. By saying a graph is positively weighted, we mean both vertex and edge weights are positive. We define the following numerical invariant for a $(f_G,w_G)$-weighted graph $G$:

$W_{\alpha,\beta}(G)=\sum_{\{u,v\}\subseteq V(G)}[\alpha f_G(u)\times f_G(v)+\beta(f_G(u)+f_G(v))]\,d_G(u,v),\qquad \alpha,\beta\in\mathbb{R}.$ (1)
For $A,B\subseteq V(G)$ and $v\in V(G)$, we define

$f_G(A)=\sum_{x\in A}f_G(x),\qquad \sigma_G(v)=\sum_{x\in V(G)}d_G(v,x),\qquad h_G(v)=\sum_{x\in V(G)}f_G(x)\,d_G(v,x),$

$d_G(A,B)=\sum_{a\in A}\sum_{b\in B}d_G(a,b).$
And for given subgraphs $H$ and $K$ of $G$, let

$d_G(H,K)=\sum_{u\in V(H)}\sum_{v\in V(K)}d_G(u,v),\qquad f_G(H)=\sum_{a\in V(H)}f_G(a).$
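To make the induced functions concrete, here is a minimal Python sketch (ours, not part of the paper) that evaluates $\sigma_G$ and $h_G$ by brute force on a small hypothetical weighted tree; the vertex names, weights, and helper functions are illustrative assumptions only.

# Brute-force sketch of sigma_G and h_G on a small hypothetical weighted tree.
from collections import defaultdict

edges = [("a", "b", 2.0), ("b", "c", 1.0), ("b", "d", 3.0)]   # (u, v, w_G(uv)), hypothetical
f = {"a": 1.0, "b": 4.0, "c": 2.0, "d": 1.0}                  # f_G, hypothetical

adj = defaultdict(list)
for u, v, w in edges:
    adj[u].append((v, w))
    adj[v].append((u, w))

def distances_from(source):
    """Distances from source to every vertex; a DFS suffices since the graph is a tree."""
    dist, stack = {source: 0.0}, [source]
    while stack:
        u = stack.pop()
        for v, w in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + w
                stack.append(v)
    return dist

def sigma(v):
    return sum(distances_from(v).values())                # sigma_G(v) = sum_x d_G(v, x)

def h(v):
    d = distances_from(v)
    return sum(f[x] * d[x] for x in d)                    # h_G(v) = sum_x f_G(x) d_G(v, x)

for v in sorted(f):
    print(v, sigma(v), h(v))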
Definition 1.
For a tree $T$ and $uv\in E(T)$, $T-\{uv\}$ has two components. We name the components $T_{uv}$ and $T_{vu}$, with $u\in V(T_{uv})$ and $v\in V(T_{vu})$. As an extension of the latter notation to subtrees, if $xy\in E(T_{uv})$, then $[T_{uv}]_{xy}$ and $[T_{uv}]_{yx}$ denote the components of $T_{uv}-\{xy\}$, and so on. Also, set $n_T(T_{uv})=|V(T_{uv})|$ and $n_T(T_{vu})=|V(T_{vu})|$.
By Definition 1, $V(T)=V(T_{uv})\cup V(T_{vu})$ and $n_T(T_{uv})+n_T(T_{vu})=|V(T)|$. We remind the reader that $N_G(x)$ denotes the set of neighbors of a vertex $x$.
Using the above notation, for a tree $T$ let $N_T=\{(n_T(T_{uv}),n_T(T_{vu}))\}_{uv\in E(T)}$ and $F_T=\{(f_T(T_{uv}),f_T(T_{vu}))\}_{uv\in E(T)}$. In more detail, $n_T(T_{uv})$ and $n_T(T_{vu})$ are the numbers of vertices in $T_{uv}$ and $T_{vu}$, respectively, and $f_T(T_{uv})$ and $f_T(T_{vu})$ are the total weights of the vertices in $T_{uv}$ and $T_{vu}$, respectively.
For ease, and based on the definition of $W_{\alpha,\beta}$, we define two more distance-based numerical graph invariants for a $(f_G,w_G)$-weighted graph $G$ as follows:

$W_\times(G)=\sum_{\{u,v\}\subseteq V(G)}[f_G(u)\times f_G(v)]\,d_G(u,v),$

$W_+(G)=\sum_{\{u,v\}\subseteq V(G)}[f_G(u)+f_G(v)]\,d_G(u,v).$

Indeed, $W_{1,0}(G)=W_\times(G)$, $W_{0,1}(G)=W_+(G)$, and $W_{\alpha,\beta}(G)=\alpha W_\times(G)+\beta W_+(G)$.
Using these notations, the coordinate of $T$ is denoted and defined by

$Q_T=\{F_T,N_T,W_\times(T),W_+(T)\}.$
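The coordinate entries $W_\times(T)$ and $W_+(T)$ can be checked directly from their definitions. The following Python sketch (ours; the tree and weights are hypothetical) computes $W_\times$, $W_+$, and $W_{\alpha,\beta}=\alpha W_\times+\beta W_+$ by brute force over all vertex pairs; it is the quadratic baseline against which the later $O(n)$ and $O(\log n)$ procedures can be compared.

# Brute-force W_x, W_+ and W_{alpha,beta} on a small hypothetical weighted tree.
from collections import defaultdict
from itertools import combinations

edges = [("a", "b", 2.0), ("b", "c", 1.0), ("b", "d", 3.0)]   # hypothetical tree
f = {"a": 1.0, "b": 4.0, "c": 2.0, "d": 1.0}

adj = defaultdict(list)
for u, v, w in edges:
    adj[u].append((v, w))
    adj[v].append((u, w))

def dist(s, t):
    """d_G(s, t): weight of the unique s-t path in the tree."""
    d, stack = {s: 0.0}, [s]
    while stack:
        u = stack.pop()
        for v, w in adj[u]:
            if v not in d:
                d[v] = d[u] + w
                stack.append(v)
    return d[t]

W_times = sum(f[u] * f[v] * dist(u, v) for u, v in combinations(f, 2))
W_plus = sum((f[u] + f[v]) * dist(u, v) for u, v in combinations(f, 2))

def W(alpha, beta):
    return alpha * W_times + beta * W_plus    # W_{alpha,beta} = alpha*W_x + beta*W_+

print(W_times, W_plus, W(1.0, 0.0), W(0.0, 1.0))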
Assume $T$ is a $(f_T,w_T)$-weighted tree and $e\in E(T)$. If $e$ fails (is removed from the edge set), we obtain $T-e$, which has two components. If we reconnect the two components of $T-e$ with an edge $\epsilon$, we obtain $T-e+\epsilon$, which for ease we denote by $T_{\epsilon\backslash e}$. Then $\epsilon$ is called a switch of $e$, and the tree $T_{\epsilon\backslash e}$ is called a switch. Moreover, if $e\ne\epsilon$ and $W_{\alpha,\beta}(T_{\epsilon\backslash e})$ is minimum, $\epsilon$ is called a best switch (BS) of $e$ w.r.t. $W_{\alpha,\beta}$. Note that $T_{\epsilon\backslash e}$ is a tree, $e\in E(T)$, and $\epsilon\in E(T^c)$.
Now suppose $T$ is an $n$-vertex, positively weighted tree with coordinate $Q_T$. We are interested in solving the following problem: get $T$, $Q_T$, $e\in E(T)$, and $w,\alpha,\beta\ge 0$, and return $T_{\epsilon\backslash e}$, $Q_{T_{\epsilon\backslash e}}$, $\epsilon$, and $W_{\alpha,\beta}(T_{\epsilon\backslash e})$, where $\epsilon$ is a BS of $e$ w.r.t. $W_{\alpha,\beta}$ and $w=w_{T_{\epsilon\backslash e}}(\epsilon)$ (by $w,\alpha,\beta\ge 0$ we mean $w>0$, $\alpha+\beta>0$, $\alpha\ge 0$, and $\beta\ge 0$). We solve the problem with the worst average time of $O(\log n)$ and the best average time of $O(1)$. Clearly, using the output, once we have solved the problem with these complexities we can keep receiving $e_i$ and $w_{i+1},\alpha_{i+1},\beta_{i+1}\ge 0$ and solve the problem on $((T_{\epsilon_1\backslash e_1})_{\epsilon_2\backslash e_2})\cdots{}_{\epsilon_i\backslash e_i}$ with the same complexities as on $T$, $i>1$. Changing edge weights within the positive range is also supported, in $O(1)$-time per change, to update the coordinate $Q_{T_{\epsilon_i\backslash e_i}}$ at any meaningful stage. This problem is the primary example that we will solve in the application section as an application of the tools of the following section. The problem can be seen as a generalization of [7,8,9] that has found some applications [6,10,11,12]. For some relevant optimization problems over spanning trees, see [13,14,15,16,17,18,19]. This paper uses extensive notation for more precise interaction, which may take some time to get used to, but our solutions are natural, utilizing the calculus intuition behind the notations.
We can partially compare our solution for the above problem with top tree methods [7,8,9] (see the final section). A generalization of the top tree method solves the problem above for some specified edges and fixed $\alpha,\beta$ with the same average complexity as our solution. However, after changing $\alpha,\beta$, the top tree method needs to repeat its preprocessing, which is expensive. We also compare our method to the swap edge problem of spanning trees [20,21], which requires a conversion. See the last section for more details.
The remainder of the paper is organized as follows. We first define the concepts above and establish some results about them. Applying these results, we obtain an efficient solution to the BS problem for positively weighted trees described above. Our method is general, and we have applied it to other problems [22].

2. Discrete Derivative and Integral

Discrete derivative, discrete integral, and convexity are three intuitive concepts for weighted graphs that we define in this section. We have some results and general examples specifically for trees.
Definition 2.
Suppose $G$ is a $(f_G,w_G)$-weighted graph. The discrete derivative of a vertex $a\in V(G)$ toward $x\in N_G(a)$ is defined and denoted as follows:

$f_G'(a\to x)=\begin{cases}\dfrac{f_G(x)-f_G(a)}{d_G(x,a)} & d_G(x,a)\ne 0,\\ f_G(x)-f_G(a) & d_G(x,a)=0.\end{cases}$

We say $f_G$ is consistent if $d_G(x,a)=0$ implies $f_G'(a\to x)=0$, for all $ax\in E(G)$.
Using Figure 1 as graph $G$, $f_G'(a\to x)=\frac{4-3}{3}=\frac13$, where $d_G(x,a)=3$; and $f_G'(x\to a)=\frac{3-4}{3}=-\frac13$.
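The derivative of Definition 2 is a one-line computation; the sketch below (ours) reproduces the two values quoted from Figure 1, with the function name chosen for illustration.

# Discrete derivative f'_G(a -> x) of Definition 2.
def discrete_derivative(f_a, f_x, d_ax):
    # difference quotient when d_G(x, a) != 0, plain difference otherwise
    return (f_x - f_a) / d_ax if d_ax != 0 else f_x - f_a

print(discrete_derivative(3, 4, 3))   #  1/3, as in the Figure 1 example
print(discrete_derivative(4, 3, 3))   # -1/3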
Lemma 1.
Assume $f_1$ and $f_2$ are two vertex weight functions for a graph. If $f=\alpha f_1+\beta f_2$, then $f'=\alpha f_1'+\beta f_2'$, where $\alpha,\beta\in\mathbb{R}$. Moreover, if $f_1$ and $f_2$ are consistent, then so is $f$.
A path $v_1v_2\cdots v_n$ is called $f_G$-decreasing if

$f_G(v_{i+1})<f_G(v_i),\qquad 0<i<n.$

An increasing path is defined in the same sense. Clearly, if $v_1v_2\cdots v_n$ is an $f_G$-decreasing path and the edge weights are positive, then $f_G'(v_i\to v_{i+1})<0$, $0<i<n$.
Definition 3.
Suppose $G$ is a $(f_G,w_G)$-weighted graph. Then $v\in V(G)$ is called an $f_G$-Root or $f_G$-Roof of $G$ if $f_G(v)$ is minimum or maximum, respectively. Let $A\subseteq V(G)$ be the set of $f_G$-Roots ($f_G$-Roofs) of $G$. Then we say $G$ is $f_G$-convex (concave) if and only if for every $v\in V(G)$ there is a decreasing (increasing) path from $v$ to some $a\in A$.
Whenever there is no confusion, we may use Root instead of $f_G$-Root, etc. In the graph depicted in Figure 1, the vertex with weight 0 is a Root, and the vertex with weight 4 is a Roof.
To employ discrete derivatives as a tool, we first create some induced weight functions, built from the current weights of an $(f_G,w_G)$-weighted graph $G$, for its vertices. The first one is as follows:

$\sigma_G:V(G)\to\mathbb{R}\quad\text{s.t.}\quad \sigma_G(v)=\sum_{x\in V(G)}d_G(v,x),\qquad \forall v\in V(G).$
Let $T$ be a $(\sigma_T,w_T)$-weighted tree (the vertex-weight function is $\sigma_T$). If $ax\in E(T)$ and $w_T(ax)=0$, then $\sigma_T'(a\to x)=0$; otherwise:

$\sigma_T'(a\to x)=\dfrac{\sigma_T(x)-\sigma_T(a)}{d_T(x,a)}=\dfrac{\sigma_T(x)-\sigma_T(a)}{w_T(ax)}=n_T(T_{ax})-n_T(T_{xa}),$ (2)

because

$\sigma_T(x)-\sigma_T(a)=w_T(ax)\,[\,n_T(T_{ax})-n_T(T_{xa})\,].$ (3)
Another vertex-weight function for $G$ is induced using the primary weight functions as follows:

$h_G:V(G)\to\mathbb{R}\quad\text{s.t.}\quad h_G(v)=\sum_{x\in V(G)}f_G(x)\,d_G(v,x),\qquad \forall v\in V(G).$
To help with the computation of $h_T'$ for a $(h_T,w_T)$-weighted tree $T$, let $a\in V(T)$ and let $x$ be a neighbor of $a$. Thus,

$h_T(a)=\sum_{v\in V(T_{ax})}f_T(v)\,d_T(a,v)+\sum_{v\in V(T_{xa})}f_T(v)\,[\,d_T(x,v)+w_T(ax)\,],$

$h_T(x)=\sum_{v\in V(T_{ax})}f_T(v)\,[\,d_T(a,v)+w_T(ax)\,]+\sum_{v\in V(T_{xa})}f_T(v)\,d_T(x,v).$

Using the above equations,

$h_T(x)-h_T(a)=w_T(ax)\,[\,f_T(T_{ax})-f_T(T_{xa})\,].$ (4)

Therefore, if $w_T(ax)=0$, then $h_T'(a\to x)=0$; otherwise:

$h_T'(a\to x)=\dfrac{h_T(x)-h_T(a)}{d_T(x,a)}=\dfrac{w_T(ax)\,[\,f_T(T_{ax})-f_T(T_{xa})\,]}{w_T(ax)}=f_T(T_{ax})-f_T(T_{xa}).$ (5)
Using Equations (3) and (4), $h_T$ and $\sigma_T$ are consistent for any tree $T$.
Let $N_T=\{(n_T(T_{uv}),n_T(T_{vu}))\}_{uv\in E(T)}$ and $F_T=\{(f_T(T_{uv}),f_T(T_{vu}))\}_{uv\in E(T)}$. Using Equations (2) and (5), $F_T$, and $N_T$, one can compute $\sigma_T'$ and $h_T'$ easily. If $f_T(v)=1$ for every $v\in V(T)$, then $(f_T(T_{uv}),f_T(T_{vu}))=(n_T(T_{uv}),n_T(T_{vu}))$ for every $uv\in E(T)$. The following algorithm extracts $F_T$ for a tree $T$; it starts with the edges adjacent to a leaf and then deletes the leaves at each step until the edge set is empty. It is linear in the number of vertices and uses the fact that, for every $uv\in E(T)$ (Algorithm 1):

$f_T(T_{uv})+f_T(T_{vu})=f_T(T),\qquad f_T(T_{vu})=f_T(v)+\sum_{x\in N_T(v)\backslash u}f_T(T_{xv}).$
Algorithm 1 Computing F_T or N_T
Input: Adjacency list of an n-vertex, nontrivial (f_T, w_T)-weighted tree T, and f_T(T)
Output: F_T = {(f_T(T_uv), f_T(T_vu))}_{uv ∈ E(T)}
Initiate with S = {a ∈ V(T) | deg(a) = 1}
while E(T) ≠ ∅
    Leaves ← S
    S ← ∅
    for v ∈ Leaves
        u ← N(v)                                   # the unique neighbor of the leaf v
        (f_T(T_uv), f_T(T_vu)) ← (f_T(T) − f_T(v), f_T(v))
        f_T(u) += f_T(v)
        if deg(u) − 1 = 1                          # u will become a leaf
            S ← S ∪ {u}
        T ← T − v                                  # remove v from the adjacency list
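A runnable Python sketch of the leaf-stripping idea of Algorithm 1 follows; the data structures (a dictionary adjacency list and an ordered-pair key for each side of an edge) and the example tree are our own illustrative choices.

# Python sketch of Algorithm 1: compute f_T(T_vu) for every edge uv by repeatedly
# stripping leaves and accumulating the removed weight onto the leaf's neighbor.

def compute_F(adj, f):
    """Return F with F[(v, u)] = f_T(T_vu), the total f-weight on v's side of edge uv."""
    f = dict(f)                                  # accumulated weights (modified locally)
    total = sum(f.values())                      # f_T(T)
    alive = {v: set(nbrs) for v, nbrs in adj.items()}
    S = [v for v in adj if len(adj[v]) == 1]     # current leaves
    F = {}
    while S:
        leaves, S = S, []
        for v in leaves:
            if not alive[v]:                     # the last remaining vertex
                continue
            u = next(iter(alive[v]))             # unique neighbor of the leaf v
            F[(v, u)] = f[v]                     # f_T(T_vu)
            F[(u, v)] = total - f[v]             # f_T(T_uv)
            f[u] += f[v]                         # move the leaf's accumulated weight to u
            alive[u].discard(v)
            alive[v].clear()
            if len(alive[u]) == 1:               # u has become a leaf
                S.append(u)
    return F

# Hypothetical example
adj = {"a": ["b"], "b": ["a", "c", "d"], "c": ["b"], "d": ["b"]}
f = {"a": 1.0, "b": 4.0, "c": 2.0, "d": 1.0}
print(compute_F(adj, f))

Each vertex is stripped exactly once, so the whole computation is linear in the number of vertices.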
Corollary 1.
For an $n$-vertex tree $T$, the sets $N_T$ and $F_T$ can be computed in $O(n)$.
Corollary 2.
Let $T$ be a $(f_T,w_T)$-weighted tree with $n$ vertices and $g_T=\alpha\sigma_T+\beta h_T$ for some $\alpha,\beta\in\mathbb{R}$. One can compute and sort $\{g_T'(a\to x)\mid x\in N_T(a)\}$ for all $a\in V(T)$ with time complexity $O(\sum_{v\in V(T)}\deg(v)\log\deg(v))$, that is, on average $O(n)$.
The following result gives us an important fact about the weighted trees.
Lemma 2.
Suppose $P=v_1v_2\cdots v_n$, $n>2$, is a path in a $(f_T,w_T)$-weighted tree $T$. If $w_T>0$ and $\sigma_T'(v_1\to v_2)\ge 0$, then $v_2\cdots v_n$ is $\sigma_T$-increasing. If $f_T>0$, $w_T>0$, and $h_T'(v_1\to v_2)\ge 0$, then $v_2\cdots v_n$ is $h_T$-increasing. If $g_T=\alpha\sigma_T+\beta h_T$ (for $\alpha,\beta\ge 0$, $\alpha+\beta\ne 0$), $f_T>0$, $w_T>0$, and $g_T'(v_1\to v_2)\ge 0$, then $v_2\cdots v_n$ is $g_T$-increasing.
Proof. 
Assume $T$ is a $(f_T,w_T)$-weighted tree and $xyz$ is a path in $T$. Without loss of generality, it suffices to prove the lemma for a three-vertex path $xyz$; the general case follows by applying the argument along the path. We know that $V(T_{uv})\cup V(T_{vu})=V(T)$ and $V(T_{uv})\cap V(T_{vu})=\emptyset$ for any $uv\in E(T)$. Thus, for some $A\subseteq V(T)$,

$V(T_{yz})=V(T_{xy})\cup\{y\}\cup A,$

and

$V(T_{zy})=V(T_{yx})\setminus(\{y\}\cup A).$

Therefore,

$V(T_{xy})\subsetneq V(T_{yz})\quad\text{and}\quad V(T_{yx})\supsetneq V(T_{zy}).$ (8)

We use Equations (2) and (5) for computing $\sigma_T'$ and $h_T'$. By the assumption, $\sigma_T'(x\to y)=n_T(T_{xy})-n_T(T_{yx})\ge 0$. Thus, by (8), $\sigma_T'(y\to z)=n_T(T_{yz})-n_T(T_{zy})>0$. Since $w_T>0$, $w_T(yz)\,\sigma_T'(y\to z)>0$; that is, $\sigma_T(z)>\sigma_T(y)$, which completes the proof for the $\sigma_T$ case. Similarly, assume $h_T'(x\to y)=f_T(T_{xy})-f_T(T_{yx})\ge 0$. If $f_T>0$, by (8), $h_T'(y\to z)=f_T(T_{yz})-f_T(T_{zy})>0$. If, in addition, $w_T>0$, then $w_T(yz)\,h_T'(y\to z)>0$. Thus, by Definition 2, $h_T(z)>h_T(y)$, which completes the proof for the $h_T$ case. The proof for the $g_T$ case is like the $h_T$ case. This completes the proof. □
Using Lemma 2, we obtain the following result regarding the convexity of a tree.
Theorem 1.
Any positively edge-weighted tree is $\sigma$-convex, and any positively weighted tree is $(\alpha\sigma+\beta h)$-convex, for $\alpha,\beta\ge 0$.
Proof. 
Let us take a positively edge-weighted tree with at least three vertices. Assume a vertex $v_1$ is not a $\sigma$-Root and $v_1v_2\cdots v_n$ is the shortest possible path (in terms of the number of edges) between $v_1$ and a $\sigma$-Root $v_n$. Assume toward a contradiction that $v_1v_2\cdots v_n$ is not decreasing, and let $v_iv_{i+1}$ be the first edge from $v_1$ toward $v_n$ such that $\sigma(v_i)\le\sigma(v_{i+1})$. Since the edge weights are positive, $\sigma'(v_i\to v_{i+1})\ge 0$. Then, by Lemma 2, the path $v_{i+1}v_{i+2}\cdots v_n$ is increasing. That is, $\sigma(v_i)\le\sigma(v_n)$ and $i<n$. If $\sigma(v_i)=\sigma(v_n)$, then $v_i$ also is a $\sigma$-Root, and $v_1v_2\cdots v_i$ is shorter than $v_1v_2\cdots v_n$ while having the same specification, contradicting $v_1v_2\cdots v_n$ being the shortest. And if $\sigma(v_i)<\sigma(v_n)$, this contradicts $v_n$ being a $\sigma$-Root. Therefore, $v_1v_2\cdots v_n$ is decreasing. Since $v_1$ was arbitrary, the tree is $\sigma$-convex. For the $\alpha\sigma+\beta h$ case, use the same strategy as for $\sigma$, along with the positivity of the vertex weights. □
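As a quick empirical illustration of Theorem 1 (ours, on a hypothetical random tree with positive edge weights), greedy descent on $\sigma$ from any start vertex keeps moving to the unique neighbor with smaller $\sigma$ and terminates at a $\sigma$-Root:

# Sketch: check sigma-convexity of a random positively edge-weighted tree (Theorem 1).
import random
from collections import defaultdict

random.seed(0)
n = 12
adj = defaultdict(dict)
for v in range(1, n):                                     # random tree on vertices 0..n-1
    p = random.randrange(v)
    adj[v][p] = adj[p][v] = random.uniform(0.5, 3.0)      # positive edge weight

def sigma(v):
    """sigma_T(v): sum of distances from v to all vertices."""
    d, stack = {v: 0.0}, [v]
    while stack:
        x = stack.pop()
        for y, w in adj[x].items():
            if y not in d:
                d[y] = d[x] + w
                stack.append(y)
    return sum(d.values())

sig = {v: sigma(v) for v in range(n)}
root_value = min(sig.values())

for start in range(n):
    v = start
    while True:
        smaller = [u for u in adj[v] if sig[u] < sig[v]]  # at most one such neighbor
        if not smaller:
            break                                         # no decreasing step: v is a Root
        v = smaller[0]
    assert abs(sig[v] - root_value) < 1e-6                # descent reached a sigma-Root

print("greedy sigma-descent reached a Root from every start vertex")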
Corollary 3.
Assume that $g=\alpha\sigma+\beta h$, $\alpha,\beta\ge 0$, $\alpha+\beta\ne 0$. Then a positively weighted tree has at most two $g$-Roots. If $x$ and $y$ are two $g$-Roots of a tree, then they are adjacent and $g'(x\to y)=g'(y\to x)=0$. Moreover, if $c$ is a tree's $g$-Root and $v$ is a non-Root neighbor of $c$, then $g'(c\to v)>0$ and $g'(v\to c)<0$. Usefully, if $P=v_1v_2\cdots v_m$ is a decreasing path between $v_1$ and a $g$-Root $v_m$, then $g'(v_i\to a)<0$ if and only if $a=v_{i+1}$, $i<m$. In addition, $P$ is the longest $g$-decreasing path starting from $v_1$. Finally, a leaf is not an $(\alpha\sigma+\beta h)$-Root of a positively weighted tree with more than 2 vertices.
Definition 4.
Suppose $G$ is a $(f_G,w_G)$-weighted graph. Regarding $f_G'$ and $d_G$, we define and denote the discrete integral of $f_G'$ along a path $P=v_1v_2\cdots v_n$ from $v_1$ to $v_n$ as follows:

$\int_P f_G'\,d_G=\sum_{i=1}^{n-1}f_G'(v_i\to v_{i+1})\cdot d_G(v_i,v_{i+1}).$

If $a,b\in V(G)$ and $P_G(a,b)$ is an arbitrary path between $a$ and $b$, we denote $\int_{P_G(a,b)}f_G'\,d_G$ by

$\int_a^b f_G'\,d_G.$
By Definitions 2 and 4, we have:
Theorem 2.
Suppose $G$ is a connected $(f_G,w_G)$-weighted graph and $a,b\in V(G)$. If $f_G$ is consistent, then for any path from $a$ to $b$,

$\int_a^b f_G'\,d_G=f_G(b)-f_G(a).$
Using Figure 1 as graph $G$ with the given weights and the above result (or Definitions 2 and 4 directly), one can see that

$\int_x^y f_G'\,d_G=\int_{P=xay}f_G'\,d_G=\int_{P=xaby}f_G'\,d_G=\int_{P=xacby}f_G'\,d_G=\int_{P=xcby}f_G'\,d_G=\int_{P=xcay}f_G'\,d_G=\int_{P=xcaby}f_G'\,d_G=2.$
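To see Theorem 2 in action, the following Python sketch (ours; the small cycle graph is hypothetical, not the graph of Figure 1) checks that the discrete integral of $f_G'$ along two different paths from $a$ to $c$ both equal $f_G(c)-f_G(a)$.

# A small check of Theorem 2: path independence of the discrete integral.
vertices = ["a", "b", "c", "d"]
f = {"a": 0.0, "b": 3.0, "c": 4.0, "d": 1.0}
w = {("a", "b"): 2.0, ("b", "c"): 1.0, ("c", "d"): 2.0, ("d", "a"): 3.0}  # a 4-cycle

def weight(u, v):
    return w.get((u, v), w.get((v, u)))

# all-pairs shortest-path distances (Floyd-Warshall on the tiny graph)
INF = float("inf")
d = {(u, v): 0.0 if u == v else (weight(u, v) or INF) for u in vertices for v in vertices}
for k in vertices:
    for i in vertices:
        for j in vertices:
            d[i, j] = min(d[i, j], d[i, k] + d[k, j])

def deriv(a, x):
    """Discrete derivative f'_G(a -> x) of Definition 2."""
    return (f[x] - f[a]) / d[a, x] if d[a, x] != 0 else f[x] - f[a]

def integral(path):
    """Discrete integral of f'_G along a path (Definition 4)."""
    return sum(deriv(p, q) * d[p, q] for p, q in zip(path, path[1:]))

for path in (["a", "b", "c"], ["a", "d", "c"]):       # two different a-to-c paths
    print(path, integral(path), f["c"] - f["a"])      # both integrals equal f(c) - f(a)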

3. Some Applications of Discrete Integral and Derivative and Convexity

In this section, we formally define three optimization problems on trees. As an application of the results of the previous section, we give efficient solutions to these problems, with some comparisons with existing solutions at the end. The problems treated in the literature are not defined in exactly the same way as ours; we convert them where possible to allow a partial comparison.
The routing cost of $G$ for a given set of sources $S\subseteq V(G)$ and the total distance of $G$ are defined, respectively, as follows:

$RC(G,S)=\sum_{v\in V(G)}\sum_{s\in S}d_G(v,s),\qquad D(G)=\frac12\sum_{u\in V(G)}\sum_{v\in V(G)}d_G(u,v).$
The Wiener index $W(G)$ of a simple graph $G$ ($f_G=1$, $w_G=1$) is equal to its total distance [23,24]. By setting $\alpha$, $\beta$, $w_G$, and $f_G$ appropriately, the invariant $W_{\alpha,\beta}$ defined in the introduction can produce any of $D$, $W$, $W_+$, $W_\times$, or $RC$ for a graph $G$. So $W_{\alpha,\beta}$ generalizes all the mentioned graph invariants.
Definition 5.
Let $T$ be a weighted tree and $T_{\epsilon\backslash e}=T-e+\epsilon$, where $e\in E(T)$ and $\epsilon\in E(T^c)$.
  • We say $\epsilon\in E(T^c)$ is a switch of $e\in E(T)$ if $T_{\epsilon\backslash e}$ is a tree. Then $T_{\epsilon\backslash e}$ is called a switch.
  • We say $\epsilon\in E(T^c)$ is a BS of $e\in E(T)$ w.r.t. $W_{\alpha,\beta}$ if $W_{\alpha,\beta}(T_{\epsilon\backslash e})$ is minimum, i.e., $W_{\alpha,\beta}(T_{\epsilon\backslash e})=\min_{s\in E(T^c)}W_{\alpha,\beta}(T_{s\backslash e})$.
  • The coordinate of $T$ is denoted and defined by $Q_T=\{F_T,N_T,W_\times(T),W_+(T)\}$.
Based on the BS definition, we define three problems; the second and third are created merely for comparison. Assume $T$ is a weighted tree with $E(T)=\{e_i\}_{i=1}^n$ and that we have its coordinate $Q_T$.
Problem 1:
Input:
1. $T$,
2. $Q_T$,
3. $e\in E(T)$,
4. $w,\alpha,\beta\ge 0$.
Output:
1. $\epsilon\in E(T^c)$, where $\epsilon$ is a BS of $e$ w.r.t. $W_{\alpha,\beta}$ with $w_{T_{\epsilon\backslash e}}(\epsilon)=w$,
2. $W_{\alpha,\beta}(T_{\epsilon\backslash e})$,
3. $T_{\epsilon\backslash e}$,
4. $Q_{T_{\epsilon\backslash e}}$.
Problem 2:
Input:
1. $T$ with $E(T)=\{e_i\}_{i=1}^n$,
2. $Q_T$,
3. $\{w_i,\alpha_i,\beta_i\ge 0\}_{i=1}^n$.
Output:
1. $r$,
2. $\epsilon_r\in E(T^c)$,
3. $T_{\epsilon_r\backslash e_r}$,
4. $Q_{T_{\epsilon_r\backslash e_r}}$,
5. $W_{\alpha_r,\beta_r}(T_{\epsilon_r\backslash e_r})$, where $W_{\alpha_r,\beta_r}(T_{\epsilon_r\backslash e_r})=\min\{W_{\alpha_i,\beta_i}(T_{\epsilon_i\backslash e_i})\}_{i=1}^n$ and $\epsilon_i$ is a BS of $e_i$ w.r.t. $W_{\alpha_i,\beta_i}$ with $w_i=w_{T_{\epsilon_i\backslash e_i}}(\epsilon_i)$. (Overall best switch.)
Problem 3:
Input:
1. $T$ with $E(T)=\{e_i\}_{i=1}^n$,
2. $Q_T$,
3. $\{w_i,\alpha_i,\beta_i\ge 0\}_{i=1}^n$.
Output:
1. $\{(\epsilon_i\in E(T^c),\,W_{\alpha_i,\beta_i}(T_{\epsilon_i\backslash e_i}))\}_{i=1}^n$,
where $\epsilon_i$ is a BS of $e_i$ w.r.t. $W_{\alpha_i,\beta_i}$ with $w_i=w_{T_{\epsilon_i\backslash e_i}}(\epsilon_i)$. (All best switches.)
Definition 6.
For a numerical graph invariant $W_*$ and a switch $T_{s\backslash t}$, let

$\Delta W_*(T_{s\backslash t})=W_*(T_{s\backslash t})-W_*(T).$
Lemma 3.
Let $T$ be a $(f_T,w_T)$-weighted tree, $xy\in E(T)$, and $g_T=\alpha\sigma_T+\beta h_T$ for some $\alpha,\beta\in\mathbb{R}$. Using $F_T$ and $N_T$, we can calculate $g_{T_{xy}}'$ on the path from $x$ toward any vertex of $T_{xy}$. A similar argument holds for a path from $y$ toward any vertex of $T_{yx}$.
Proof. 
Let $T$ be a $(f_T,w_T)$-weighted tree and $xy\in E(T)$ with $T-\{xy\}=T_{xy}\cup T_{yx}$, and let $g_T=\alpha\sigma_T+\beta h_T$, $\alpha,\beta\in\mathbb{R}$. Assume $v_1v_2\cdots v_n$ is a path in $T_{xy}$ with $v_1=x$. Using Equations (1), (2), (5), and $\{N_T,F_T\}=\{(n_T(T_{uv}),n_T(T_{vu})),(f_T(T_{uv}),f_T(T_{vu}))\}_{uv\in E(T)}$, we can extract $\sigma_T'$ and $h_T'$, and hence $g_T'=\alpha\sigma_T'+\beta h_T'$, on $v_1v_2\cdots v_n$. However, we need to get $g_{T_{xy}}'$ along $v_1v_2\cdots v_n$. To do so, one can check that for $v_1v_2\cdots v_n$,

$(n_{T_{xy}}([T_{xy}]_{v_iv_{i+1}}),\,n_{T_{xy}}([T_{xy}]_{v_{i+1}v_i}))=(n_T(T_{v_iv_{i+1}})-n_T(T_{yx}),\,n_T(T_{v_{i+1}v_i})),\qquad 0<i<n,$ (9)

$(f_{T_{xy}}([T_{xy}]_{v_iv_{i+1}}),\,f_{T_{xy}}([T_{xy}]_{v_{i+1}v_i}))=(f_T(T_{v_iv_{i+1}})-f_T(T_{yx}),\,f_T(T_{v_{i+1}v_i})),\qquad 0<i<n.$ (10)

Thus, using Equations (1), (2) and (5), along with (9) and (10):

$\sigma_{T_{xy}}'(v_i\to v_{i+1})=\sigma_T'(v_i\to v_{i+1})-n_T(T_{yx}),\qquad h_{T_{xy}}'(v_i\to v_{i+1})=h_T'(v_i\to v_{i+1})-f_T(T_{yx}).$

That is, using $F_T$ and $N_T$, we can compute $g_{T_{xy}}'(v_i\to v_{i+1})$ as follows:

$g_{T_{xy}}'(v_i\to v_{i+1})=\alpha\sigma_T'(v_i\to v_{i+1})+\beta h_T'(v_i\to v_{i+1})-\alpha n_T(T_{yx})-\beta f_T(T_{yx}).$

Symmetrically, the argument holds for the $T_{yx}$ and $y$ case. □
It is straightforward, though it involves some lengthy computation, to verify the following.
Lemma 4.
Suppose $T$ is a $(f_T,w_T)$-weighted tree with $uv\in E(T)$ and $xy\in E(T^c)$. If $x\in V(T_{uv})$ and $y\in V(T_{vu})$, then

$\Delta W_{\alpha,\beta}(T_{xy\backslash uv})=\int_u^x p'\,d_{T_{uv}}+\int_v^y q'\,d_{T_{vu}}+FN_{\alpha,\beta}(uv)\,[\,w_{T_{xy\backslash uv}}(xy)-w_T(uv)\,],$ (11)

where

$FN_{\alpha,\beta}(uv)=\alpha f_T(T_{uv})f_T(T_{vu})+\beta\,[\,n_T(T_{uv})f_T(T_{vu})+f_T(T_{uv})n_T(T_{vu})\,],\qquad \alpha,\beta\in\mathbb{R},$

and

$p(a)=[\,\alpha f_T(T_{vu})+\beta n_T(T_{vu})\,]\,h_{T_{uv}}(a)+\beta f_T(T_{vu})\,\sigma_{T_{uv}}(a),\qquad a\in V(T_{uv}),$

$q(b)=[\,\alpha f_T(T_{uv})+\beta n_T(T_{uv})\,]\,h_{T_{vu}}(b)+\beta f_T(T_{uv})\,\sigma_{T_{vu}}(b),\qquad b\in V(T_{vu}).$
Proof. 
Assume $T$ is a tree. If we remove $uv\in E(T)$ and connect $T_{uv}$ and $T_{vu}$ with $xy\in E(T^c)$, where $x\in V(T_{uv})$ and $y\in V(T_{vu})$, then

$W_{\alpha,\beta}(T_{xy\backslash uv})=W_{\alpha,\beta}(T_{uv})+W_{\alpha,\beta}(T_{vu})+\sum_{a\in V(T_{uv})}\sum_{b\in V(T_{vu})}[\,\alpha f_T(a)f_T(b)+\beta(f_T(a)+f_T(b))\,]\cdot[\,d_T(x,a)+w_{T_{xy\backslash uv}}(xy)+d_T(b,y)\,]$

$=W_{\alpha,\beta}(T_{uv})+W_{\alpha,\beta}(T_{vu})+[\,\alpha f_T(T_{vu})+\beta n_T(T_{vu})\,]\,h_{T_{uv}}(x)+[\,\alpha f_T(T_{uv})+\beta n_T(T_{uv})\,]\,h_{T_{vu}}(y)+\beta\,[\,f_T(T_{vu})\sigma_{T_{uv}}(x)+f_T(T_{uv})\sigma_{T_{vu}}(y)\,]+w_{T_{xy\backslash uv}}(xy)\,[\,\alpha f_T(T_{uv})f_T(T_{vu})+\beta(n_T(T_{uv})f_T(T_{vu})+n_T(T_{vu})f_T(T_{uv}))\,].$

Using the fact that $W_{\alpha,\beta}(T_{uv\backslash uv})=W_{\alpha,\beta}(T)$ and the above formula,

$\Delta W_{\alpha,\beta}(T_{xy\backslash uv})=W_{\alpha,\beta}(T_{xy\backslash uv})-W_{\alpha,\beta}(T)=[\,p(x)-p(u)\,]+[\,q(y)-q(v)\,]+FN_{\alpha,\beta}(uv)\,[\,w_{T_{xy\backslash uv}}(xy)-w_T(uv)\,],$

where $FN_{\alpha,\beta}(uv)=\alpha f_T(T_{uv})f_T(T_{vu})+\beta(n_T(T_{uv})f_T(T_{vu})+f_T(T_{uv})n_T(T_{vu}))$ and

$p(a)=[\,\alpha f_T(T_{vu})+\beta n_T(T_{vu})\,]\,h_{T_{uv}}(a)+\beta f_T(T_{vu})\,\sigma_{T_{uv}}(a),\qquad a\in V(T_{uv}),$

$q(b)=[\,\alpha f_T(T_{uv})+\beta n_T(T_{uv})\,]\,h_{T_{vu}}(b)+\beta f_T(T_{uv})\,\sigma_{T_{vu}}(b),\qquad b\in V(T_{vu}).$

Since $T_{uv}$ and $T_{vu}$ are connected, and $p$ and $q$ are consistent, by Lemma 1 and Theorem 2 we have:

$\Delta W_{\alpha,\beta}(T_{xy\backslash uv})=\int_u^x p'\,d_{T_{uv}}+\int_v^y q'\,d_{T_{vu}}+FN_{\alpha,\beta}(uv)\,[\,w_{T_{xy\backslash uv}}(xy)-w_T(uv)\,].$ □
The above lemma reveals the connection between a BS of $uv\in E(T)$ w.r.t. $W_{\alpha,\beta}$ and the $p$-Roots and $q$-Roots of $T_{uv}$ and $T_{vu}$. Often, finding a BS is equivalent to finding some Roots.
Lemma 5.
Suppose $T$ is a positively weighted tree with at least three vertices and $uv\in E(T)$. If $a$ and $b$ are a $p$-Root of $T_{uv}$ and a $q$-Root of $T_{vu}$, respectively, then a BS of $uv$ w.r.t. $W_{\alpha,\beta}$ is some $xy$ with $x\in N_T(a)\cup\{a\}$ and $y\in N_T(b)\cup\{b\}$. More precisely, let the following be vertex weight functions for the relevant vertices:

$p(c)=[\,\alpha f_T(T_{vu})+\beta n_T(T_{vu})\,]\,h_{T_{uv}}(c)+\beta f_T(T_{vu})\,\sigma_{T_{uv}}(c),\qquad c\in V(T_{uv}),$

$q(c)=[\,\alpha f_T(T_{uv})+\beta n_T(T_{uv})\,]\,h_{T_{vu}}(c)+\beta f_T(T_{uv})\,\sigma_{T_{vu}}(c),\qquad c\in V(T_{vu}).$

Then a choice of $xy$ as a BS of $uv$ w.r.t. $W_{\alpha,\beta}$ is as follows:
A. If $(u,v)=(p\text{-Root of }T_{uv},\,q\text{-Root of }T_{vu})$, then either
  • 1. $(x,y)=(u,z)$, where $q(z)=\min_{c\in N_{T_{vu}}(v)}q(c)$, or
  • 2. $(x,y)=(z,v)$, where $p(z)=\min_{c\in N_{T_{uv}}(u)}p(c)$.
If $q'(v\to z)\,w_T(vz)<p'(u\to z)\,w_T(uz)$, then 1 gives a choice for $xy$; otherwise, 2.
B. If $(u,v)\ne(p\text{-Root of }T_{uv},\,q\text{-Root of }T_{vu})$, then $(x,y)=(p\text{-Root of }T_{uv},\,q\text{-Root of }T_{vu})$.
Moreover, $F_T=F_{T_{xy\backslash uv}}$ and $N_T=N_{T_{xy\backslash uv}}$ except for the edges on $P_T(u,x)$ and $P_T(v,y)$. More precisely, if $P_{T_{uv}}(u,x)=u_1u_2\cdots u_m$ is the path between $u=u_1$ and $x=u_m$,

$(n_{T_{xy\backslash uv}}(T_{u_iu_{i+1}}),\,n_{T_{xy\backslash uv}}(T_{u_{i+1}u_i}))=(n_T(T_{u_iu_{i+1}})-n_T(T_{vu}),\,n_T(T_{u_{i+1}u_i})+n_T(T_{vu})),$

$(f_{T_{xy\backslash uv}}(T_{u_iu_{i+1}}),\,f_{T_{xy\backslash uv}}(T_{u_{i+1}u_i}))=(f_T(T_{u_iu_{i+1}})-f_T(T_{vu}),\,f_T(T_{u_{i+1}u_i})+f_T(T_{vu})),\qquad 0<i<m.$

And, if $P_{T_{vu}}(v,y)=v_1v_2\cdots v_n$ is the path between $v=v_1$ and $y=v_n$,

$(n_{T_{xy\backslash uv}}(T_{v_iv_{i+1}}),\,n_{T_{xy\backslash uv}}(T_{v_{i+1}v_i}))=(n_T(T_{v_iv_{i+1}})-n_T(T_{uv}),\,n_T(T_{v_{i+1}v_i})+n_T(T_{uv})),$

$(f_{T_{xy\backslash uv}}(T_{v_iv_{i+1}}),\,f_{T_{xy\backslash uv}}(T_{v_{i+1}v_i}))=(f_T(T_{v_iv_{i+1}})-f_T(T_{uv}),\,f_T(T_{v_{i+1}v_i})+f_T(T_{uv})),\qquad 0<i<n.$
Proof. 
Assume we have the lemma assumptions. By Equation (11),

$W_{\alpha,\beta}(T_{st\backslash uv})=p(s)+q(t)+r,\qquad s\in V(T_{uv})\ \text{and}\ t\in V(T_{vu}),$ (12)

where $r$ is independent of the choice of $s$ and $t$, and the values of $p(s)$ and $q(t)$ are independent of each other. By definition, if $a\in V(T_{uv})$ and $b\in V(T_{vu})$ minimize Equation (12) and $ab\ne uv$, then $ab$ is a BS of $uv$. One sees that by minimizing Equation (12) inclusively, we do not exclude $uv$ as a choice for $ab$, since:

$\min_{s\in V(T_{uv})}p(s)+\min_{t\in V(T_{vu})}q(t)+r=\min_{st\in E(T^c)\cup\{uv\}}W_{\alpha,\beta}(T_{st\backslash uv}).$

Accordingly, $(a,b)$ minimizes Equation (12) if and only if $(p(a),q(b))=(\min_{s\in V(T_{uv})}p(s),\,\min_{t\in V(T_{vu})}q(t))$. That is, $(a,b)=(p\text{-Root of }T_{uv},\,q\text{-Root of }T_{vu})$.

If case A happens, that is, $(u,v)=(p\text{-Root of }T_{uv},\,q\text{-Root of }T_{vu})$ and also $(x,y)=(p\text{-Root of }T_{uv},\,q\text{-Root of }T_{vu})$, then we might have $xy=uv$. To exclude $uv$ in the minimization of Equation (12), we avoid $u$ and $v$ appearing concurrently by letting $(p(x),q(y))$ equal either $(\min_{s\in V(T_{uv})\backslash u}p(s),\,\min_{t\in V(T_{vu})}q(t))$ or $(\min_{s\in V(T_{uv})}p(s),\,\min_{t\in V(T_{vu})\backslash v}q(t))$. By Theorem 1 and the assumptions, $T_{uv}$ and $T_{vu}$ are convex. That is,

$\min_{s\in V(T_{uv})\backslash u}p(s)=\min_{s\in N_{T_{uv}}(u)}p(s)\quad\text{and}\quad\min_{t\in V(T_{vu})\backslash v}q(t)=\min_{t\in N_{T_{vu}}(v)}q(t).$

As a result, either

1. $(x,y)=(u,z)$, where $q(z)=\min_{b\in N_{T_{vu}}(v)}q(b)$, or

2. $(x,y)=(z,v)$, where $p(z)=\min_{a\in N_{T_{uv}}(u)}p(a)$.

Per Lemma 4, in case 1, $\Delta W_{\alpha,\beta}(T_{xy\backslash uv})-r'=\int_u^{x=u}p'\,d_{T_{uv}}+\int_v^y q'\,d_{T_{vu}}=\int_v^y q'\,d_{T_{vu}}=q'(v\to z)\,w_T(vz)$, and in case 2, $\Delta W_{\alpha,\beta}(T_{xy\backslash uv})-r'=p'(u\to z)\,w_T(uz)$, where $r'$ collects the terms independent of the choice of $(x,y)$. Thus, if $p'(u\to z)\,w_T(uz)>q'(v\to z)\,w_T(vz)$, then 1 gives the choice of $(x,y)$; if $p'(u\to z)\,w_T(uz)<q'(v\to z)\,w_T(vz)$, then $(x,y)$ comes from 2; otherwise, either 1 or 2 is the choice.

Now assume $(u,v)\ne(p\text{-Root of }T_{uv},\,q\text{-Root of }T_{vu})$. If $(a,b)=(p\text{-Root of }T_{uv},\,q\text{-Root of }T_{vu})$, then $(u,v)\ne(a,b)$. Moreover, $(a,b)$ minimizes $W_{\alpha,\beta}$ in Equation (12). So $(x,y)=(a,b)$ is the right choice. This completes the proof of B.

To form $T_{xy\backslash uv}$, we remove $uv\in E(T)$ and reconnect the components of $T-uv$ using $x\in V(T_{uv})$ and $y\in V(T_{vu})$, $uv\ne xy$. Thus, $F_T\ne F_{T_{xy\backslash uv}}$ and $N_T\ne N_{T_{xy\backslash uv}}$ in general. Precisely, $(n_T(T_{ab}),n_T(T_{ba}))\ne(n_{T_{xy\backslash uv}}(T_{ab}),n_{T_{xy\backslash uv}}(T_{ba}))$ or $(f_T(T_{ab}),f_T(T_{ba}))\ne(f_{T_{xy\backslash uv}}(T_{ab}),f_{T_{xy\backslash uv}}(T_{ba}))$ happens only if $ab\in P_T(u,x)$ or $ab\in P_T(v,y)$. By Definition 1, $f_T(T_{ab})+f_T(T_{ba})=f_T(T)$ and $n_T(T_{ab})+n_T(T_{ba})=|V(T)|$. And, by Lemma 3 and Equations (9) and (10), if $P_{T_{uv}}(u,x)=u_1u_2\cdots u_m$ is the path between $u=u_1$ and $x=u_m$,

$(n_{T_{xy\backslash uv}}(T_{u_iu_{i+1}}),\,n_{T_{xy\backslash uv}}(T_{u_{i+1}u_i}))=(n_T(T_{u_iu_{i+1}})-n_T(T_{vu}),\,n_T(T_{u_{i+1}u_i})+n_T(T_{vu})),$

$(f_{T_{xy\backslash uv}}(T_{u_iu_{i+1}}),\,f_{T_{xy\backslash uv}}(T_{u_{i+1}u_i}))=(f_T(T_{u_iu_{i+1}})-f_T(T_{vu}),\,f_T(T_{u_{i+1}u_i})+f_T(T_{vu})),\qquad 0<i<m.$

And symmetrically, for $P_{T_{vu}}(v,y)=v_1v_2\cdots v_n$ with $v=v_1$ and $y=v_n$,

$(n_{T_{xy\backslash uv}}(T_{v_iv_{i+1}}),\,n_{T_{xy\backslash uv}}(T_{v_{i+1}v_i}))=(n_T(T_{v_iv_{i+1}})-n_T(T_{uv}),\,n_T(T_{v_{i+1}v_i})+n_T(T_{uv})),$

$(f_{T_{xy\backslash uv}}(T_{v_iv_{i+1}}),\,f_{T_{xy\backslash uv}}(T_{v_{i+1}v_i}))=(f_T(T_{v_iv_{i+1}})-f_T(T_{uv}),\,f_T(T_{v_{i+1}v_i})+f_T(T_{uv})),\qquad 0<i<n.$ □
Lemma 6.
For a $(f_T,w_T)$-weighted tree $T$,

$W_\times(T)=\sum_{uv\in E(T)}[\,f_T(T_{uv})\,f_T(T_{vu})\,]\,w_T(uv),$

and

$W_+(T)=\sum_{uv\in E(T)}[\,n_T(T_{uv})\,f_T(T_{vu})+n_T(T_{vu})\,f_T(T_{uv})\,]\,w_T(uv).$
Proof. 
We prove the first equation using the double-counting principle. Let $T'$ be a copy of $T$ whose edge weights are all set to 0 at the start. Let $u,v\in V(T)$ and let $P_T(u,v)$ be the path between $u$ and $v$. For every edge $ab\in P_T(u,v)$, add the number $w_T(ab)\,f_T(u)\,f_T(v)$ to the weight of the corresponding edge $ab$ in $T'$. Then the total amount added along $P_T(u,v)$ is $[\,f_T(u)\,f_T(v)\,]\,d_T(u,v)$. Now suppose we repeat the above process for every pair $u,v\in V(T)$. Then

$\sum_{uv\in E(T)}w_{T'}(uv)=\sum_{\{u,v\}\subseteq V(T)}[\,f_T(u)\times f_T(v)\,]\,d_T(u,v)=W_\times(T).$

On the other hand, the pairs whose path uses an edge $uv\in E(T)$ are exactly the pairs with one vertex in $T_{uv}$ and the other in $T_{vu}$, so $w_{T'}(uv)=w_T(uv)\,[\,f_T(T_{uv})\,f_T(T_{vu})\,]$ for $uv\in E(T)$. Thus, $W_\times(T)=\sum_{uv\in E(T)}w_{T'}(uv)=\sum_{uv\in E(T)}w_T(uv)\,[\,f_T(T_{uv})\,f_T(T_{vu})\,]$, which proves the formula for $W_\times(T)$. The proof for the $W_+$ case is quite similar. □
Corollary 4.
Assume $T$ is a $(f_T,w_T)$-weighted $n$-vertex tree. We can compute $W_\times(T)$ and $W_+(T)$ in $O(n)$-time. Moreover, if by changing the weight of $ab\in E(T)$ to $w$ we get a tree $T'$, then

$W_\times(T')=W_\times(T)+(w-w_T(ab))\,f_T(T_{ab})\,f_T(T_{ba}),$

$W_+(T')=W_+(T)+(w-w_T(ab))\,[\,n_T(T_{ab})\,f_T(T_{ba})+n_T(T_{ba})\,f_T(T_{ab})\,].$
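The edge-cut formula of Lemma 6 is easy to check against the pairwise definition. In the Python sketch below (ours, with a hypothetical tree), W_times_by_edges evaluates $\sum_{uv}f_T(T_{uv})f_T(T_{vu})w_T(uv)$; note that given $F_T$ from Algorithm 1 this sum is $O(n)$, although the sketch recomputes each split weight naively for clarity.

# Check Lemma 6: W_x(T) summed edge by edge equals the pairwise definition.
from collections import defaultdict
from itertools import combinations

edges = [("a", "b", 2.0), ("b", "c", 1.0), ("b", "d", 3.0)]   # hypothetical tree
f = {"a": 1.0, "b": 4.0, "c": 2.0, "d": 1.0}

adj = defaultdict(dict)
for u, v, w in edges:
    adj[u][v] = adj[v][u] = w

def side_weight(u, v):
    """f_T(T_uv): total vertex weight on u's side after deleting edge uv."""
    total, stack, seen = 0.0, [u], {v}
    while stack:
        x = stack.pop()
        if x in seen:
            continue
        seen.add(x)
        total += f[x]
        stack.extend(adj[x])
    return total

def dist(s, t):
    d, stack = {s: 0.0}, [s]
    while stack:
        x = stack.pop()
        for y, w in adj[x].items():
            if y not in d:
                d[y] = d[x] + w
                stack.append(y)
    return d[t]

W_times_by_edges = sum(side_weight(u, v) * side_weight(v, u) * w for u, v, w in edges)
W_times_by_pairs = sum(f[u] * f[v] * dist(u, v) for u, v in combinations(f, 2))
print(W_times_by_edges, W_times_by_pairs)   # both equal W_x(T)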
Remark 1.
Here, we give a simple intuition behind an algorithm for Problem 1 when the tree $T$ is positively weighted. By Lemma 5, to find a BS for $uv\in E(T)$ we need the $p$-Roots and $q$-Roots of $T_{uv}$ and $T_{vu}$, respectively. By definition, $u\in V(T_{uv})$ and $v\in V(T_{vu})$. By Corollary 3, the longest decreasing path from $u$ ends at a $p$-Root of $T_{uv}$, say $x$, and similarly, the longest decreasing path from $v$ ends at a $q$-Root of $T_{vu}$, say $y$. Moreover, by Corollary 3, those paths are the only decreasing paths in their respective components. So one can take some derivatives to walk from $u$ to $x$ or from $v$ to $y$, since the paths are decreasing and unique. Then, by Lemma 5, a BS of $uv$ is one of the following cases, as detailed in the lemma:
  • $xy$,
  • an edge between $x$ and a vertex of $N_T(y)$,
  • an edge between $y$ and a vertex of $N_T(x)$.
Lemma 5 details how to choose among these limited cases using derivatives. As we find the unique decreasing paths between $u$ and $x$ and between $v$ and $y$, we can update $Q_T$ as in Lemma 5. Having those paths also eases calculating $W_\times$, $W_+$, and $W_{\alpha,\beta}$ from Lemma 4. The following proposition details this intuition with a proof of the time complexity.
Proposition 1.
Let $T$ be an $n$-vertex positively weighted tree with a coordinate $Q_T$. If we get $e\in E(T)$ and $w,\alpha,\beta\ge 0$, then we can compute $\epsilon$, $T_{\epsilon\backslash e}$, $Q_{T_{\epsilon\backslash e}}$, and $W_{\alpha,\beta}(T_{\epsilon\backslash e})$ with the worst average time of $O(\log n)$ and the best average time of $O(1)$, where $\epsilon$ is a BS of $e$ w.r.t. $W_{\alpha,\beta}$ and $w_{T_{\epsilon\backslash e}}(\epsilon)=w$. Updating $Q_T$ after an edge-weight change to a positive number is supported in $O(1)$-time per change.
Proof. 
Assume we have an $n$-vertex positively weighted tree $T$, $Q_T=\{F_T,N_T,W_\times(T),W_+(T)\}$, and $uv\in E(T)$, $n>2$. Let $p$ and $q$ be the induced vertex weight functions for $T_{uv}$ and $T_{vu}$ as follows:

$p_{\alpha,\beta}(c)=[\,\alpha f_T(T_{vu})+\beta n_T(T_{vu})\,]\,h_{T_{uv}}(c)+\beta f_T(T_{vu})\,\sigma_{T_{uv}}(c),\qquad c\in V(T_{uv}),$

$q_{\alpha,\beta}(c)=[\,\alpha f_T(T_{uv})+\beta n_T(T_{uv})\,]\,h_{T_{vu}}(c)+\beta f_T(T_{uv})\,\sigma_{T_{vu}}(c),\qquad c\in V(T_{vu}).$
Then with the results so far, we have the following:
1. Computing a $p'$ or $q'$ in $O(1)$-time. Using $F_T$ and $N_T$, Lemma 3, and Definition 2, computing $p'(r\to s)$ costs $O(1)$-time for any $rs\in P_{T_{uv}}(u,x)$, where $x\in V(T_{uv})$ and we know the direction. A similar argument holds for computing $q'$ on $P_{T_{vu}}(v,y)$.
2. Checking whether a vertex is a Root in $O(1)$-time on average. By Corollary 3, there is at most one $x\in N_{T_{uv}}(u)$ with $p'(u\to x)<0$, and $u$ is not a $p$-Root exactly when such an $x$ exists. The following procedure verifies whether a vertex is a $p$- or $q$-Root (let $f=p$ or $q$).
# Takes a vertex a and f and says whether a is an f-Root or not
IS_Root(a, f)
    for s ∈ N(a)
        if f'(a→s) < 0
            return False
    return True
The average degree in a tree is $O(1)$; thus, IS_Root runs in $O(1)$-time on average.
3. Finding a neighbor with minimum $p$ or $q$ in $O(1)$-time on average. If $s\in N_{T_{uv}}(a)$, then by Theorem 2 and Definition 4, $p(s)=p'(a\to s)\,w(as)+p(a)$. By 1, computing $p(s)$ takes $O(1)$-time. Moreover, $O(|N_T(a)|)=O(1)$ on average. Thus, finding $x$ with $p(x)=\min_{s\in N_{T_{uv}}(a)}p(s)$ takes $O(1)$-time on average. Computing a minimum-$q$ neighbor follows the same rules. One implements the mentioned idea as follows.
# Takes a vertex a, finds x ∈ N(a) with minimum f, and returns x and f'(a→x)  (f = p or q)
FND_min(a, f)
    min ← ∞
    for s ∈ N(a)
        f(s) ← f'(a→s)·w(as) + f(a)
        if f(s) < min
            min ← f(s)
            x ← s
    return x, f'(a→x)
4. Finding $P_{T_{uv}}(u,x)$ or $P_{T_{vu}}(v,y)$ in $O(\log n)$ on average, where $x$ and $y$ are a $p$-Root and a $q$-Root. By Theorem 1, $T_{uv}$ is $p$-convex. Assume that $P=u_1u_2\cdots u_m$ is the decreasing path between $u=u_1$ and a $p$-Root $u_m$. By Corollary 3, $P$ is the longest decreasing path beginning at $u$, and $p'(u_i\to a)<0$ if and only if $a=u_{i+1}$, $a\in N(u_i)$, $0<i<m$. So, starting with $i=1$, we can find $u_{i+1}$ when we have $u_i$ and compute $P$ as follows:
# Takes a vertex a and returns c and P(a, c), where c is an f-Root  (f = p or q)
Root_Path(a, f)
    P(a, c) ← a_1 = a
    Label:
    for a_{i+1} ∈ N(a_i) \ {a_{i−1}}
        if f'(a_i→a_{i+1}) < 0
            P(a, c) ← P(a, c) + a_{i+1}
            a_i ← a_{i+1}
            break and goto Label
    return c = a_i, P(a, c)
For finding $u_{i+1}$ from $u_i$, we can exclude $u_{i-1}$ and jump to $u_{i+1}$, as the process Root_Path does; that reduces the average complexity. Moreover, when we are at vertex $u_i$, $i<m$, we take at most $|N(u_i)|$ derivatives to find $u_{i+1}$, since $u_{i+1}$ is the only vertex with a negative derivative toward it. So Root_Path$(u,p)$ returns $P_{T_{uv}}(u,x)$ by taking at most as many derivatives as the total degree of the vertices on $P_{T_{uv}}(u,x)$, where $x$ is a $p$-Root. By [25], the path length in a tree is $O(\log n)$ on average, and it can be $O(1)$ as well. In addition, the average degree in a tree is $O(1)$, and taking each derivative requires $O(1)$-time. Moreover, $x$ is a $p$-Root (has minimum $p$) and $T_{uv}$ is smaller than $T$. Thus, the worst average run time of Root_Path$(u,p)$ is $O(\log n)$, and the best average time is $O(1)$. Similarly, Root_Path$(v,q)$ has the same run time to return $P_{T_{vu}}(v,y)$, where $y$ is a $q$-Root.
5. Computing $\int_u^x p'\,d_{T_{uv}}$ and $\int_v^y q'\,d_{T_{vu}}$ in $O(|P(u,x)|)$-time and $O(|P(v,y)|)$-time, respectively. Per Definition 4, Theorem 2, and 1, if we have a path $P=a_1a_2\cdots a_m$, we can compute the integral along $P$ in $O(m)$-time as follows:
# Takes a path P and f and returns ∫_P f' d  (f = p or q)
Integ(P = a_1a_2⋯a_m, f)
    I ← 0
    for 0 < i < m
        I += f'(a_i→a_{i+1})·w(a_i a_{i+1})
    return I
6. Updating $F_T$ to $F_{T_{xy\backslash uv}}$ and $N_T$ to $N_{T_{xy\backslash uv}}$ in $O(|P(u,x)|)+O(|P(v,y)|)$ time. Per Lemma 5, Up_FN$(P(u,x),f_T(T_{vu}),n_T(T_{vu}))$ and Up_FN$(P(v,y),f_T(T_{uv}),n_T(T_{uv}))$ update $F_T$ to $F_{T_{xy\backslash uv}}$ and $N_T$ to $N_{T_{xy\backslash uv}}$, where Up_FN is the following process.
# Takes a path and updates F_T and N_T accordingly
Up_FN(P = v_1v_2⋯v_m, F, N)
    for 0 < i < m
        (n_T(T_{v_i v_{i+1}}), n_T(T_{v_{i+1} v_i})) ← (n_T(T_{v_i v_{i+1}}) − N, n_T(T_{v_{i+1} v_i}) + N)
        (f_T(T_{v_i v_{i+1}}), f_T(T_{v_{i+1} v_i})) ← (f_T(T_{v_i v_{i+1}}) − F, f_T(T_{v_{i+1} v_i}) + F)
Using the procedures devised in 1 to 6, we can implement Lemmas 4 and 5, which give us $\epsilon$, $T_{\epsilon\backslash e}$, $Q_{T_{\epsilon\backslash e}}$, and $W_{\alpha,\beta}(T_{\epsilon\backslash e})$ for the input $e\in E(T)$ and $w,\alpha,\beta\ge 0$, where $\epsilon$ is a BS of $e$ w.r.t. $W_{\alpha,\beta}$ and $w_{T_{\epsilon\backslash e}}(\epsilon)=w$. The full details are presented in the following BS algorithm (Algorithm 2); the average time complexity bound of each line is shown as a comment.
Preprocessing -----------------------------------------------------------------------------
Compute Q_T = {F_T, N_T, W_×(T), W_+(T)}                       # O(n), Corollary 1 & Lemma 6
--------------------------------------------------------------------------------------------
Algorithm 2 BS Algorithm
Input: Adjacency list of a (f_T, w_T)-weighted tree T, uv ∈ E(T), Q_T, w, α, β ∈ ℝ^+
Output: xy, T_{xy\uv}, W_{α,β}(T_{xy\uv}), Q_{T_{xy\uv}}, where xy is a BS of uv w.r.t. W_{α,β} and w_{T_{xy\uv}}(xy) = w
------ Finding x, y, P(u,x), and P(v,y), where xy is a BS of uv (Lemma 5) ------------------
if IS_Root(u, p) and IS_Root(v, q)                                              # O(1)
    a, p'(u→a) ← FND_min(u, p)                                                  # O(1)
    b, q'(v→b) ← FND_min(v, q)                                                  # O(1)
    if q'(v→b)·w_T(vb) < p'(u→a)·w_T(ua)                                        # O(1)
        x, P(u,x) ← u, u
        y, P(v,y) ← b, vb
    else
        x, P(u,x) ← a, ua
        y, P(v,y) ← v, v
else
    x, P(u,x) ← Root_Path(u, p)                                                 # O(log n) on average
    y, P(v,y) ← Root_Path(v, q)                                                 # O(log n) on average
------ Computing W_{α,β}(T_{xy\uv}) (Lemma 4) ----------------------------------------------
ΔW_{α,β}(T_{xy\uv}) = Integ(P(u,x), p_{α,β}) + Integ(P(v,y), q_{α,β}) + FN_{α,β}(uv)·(w − w_T(uv))
                                                           # O(|P_{T_uv}(u,x)|) + O(|P_{T_vu}(v,y)|)
W_{α,β}(T) = α·W_×(T) + β·W_+(T)                                                # O(1)
W_{α,β}(T_{xy\uv}) = ΔW_{α,β}(T_{xy\uv}) + W_{α,β}(T)                           # O(1)
------ Q_T → Q_{T_{xy\uv}}: updating Q_T (Lemma 5) -----------------------------------------
Up_FN(P(u,x), f_T(T_vu), n_T(T_vu)); Up_FN(P(v,y), f_T(T_uv), n_T(T_uv))
                                                           # O(|P_{T_uv}(u,x)| + |P_{T_vu}(v,y)|)
ΔW_×(T_{xy\uv}) = ΔW_{1,0}(T_{xy\uv}) = Integ(P(u,x), p_{1,0}) + Integ(P(v,y), q_{1,0}) + FN_{1,0}(uv)·(w − w_T(uv))
                                                           # O(|P_{T_uv}(u,x)| + |P_{T_vu}(v,y)|)
ΔW_+(T_{xy\uv}) = ΔW_{0,1}(T_{xy\uv}) = Integ(P(u,x), p_{0,1}) + Integ(P(v,y), q_{0,1}) + FN_{0,1}(uv)·(w − w_T(uv))
                                                           # O(|P_{T_uv}(u,x)| + |P_{T_vu}(v,y)|)
W_×(T_{xy\uv}) = ΔW_×(T_{xy\uv}) + W_×(T)                                       # O(1)
W_+(T_{xy\uv}) = ΔW_+(T_{xy\uv}) + W_+(T)                                       # O(1)
------ T → T_{xy\uv} -----------------------------------------------------------------------
T ← T + xy − uv                                                                 # O(1)
--------------------------------------------------------------------------------------------
Using 1 to 6 above, the average time complexity of every line of Algorithm 2 is bounded by $\max\{O(|P_{T_{uv}}(u,x)|),O(|P_{T_{vu}}(v,y)|)\}$. By [25], the average path length of $T$ is $O(\log n)$, which accounts for the worst average time complexity of the algorithm. The best average complexity can be $O(1)$, because $\max\{O(|P_{T_{uv}}(u,x)|),O(|P_{T_{vu}}(v,y)|)\}$ and the degrees on $P_{T_{uv}}(u,x)$ and $P_{T_{vu}}(v,y)$ can be $O(1)$. Finally, assume $T$ is a tree and we have the preprocessing $Q_T=\{F_T,N_T,W_\times(T),W_+(T)\}$. If we change the weight of one edge, then $F_T$ and $N_T$ remain unchanged, and by Corollary 4 we can update $W_\times$ and $W_+$ in $O(1)$-time. This completes the proof. □
The subsequent two propositions can be proved based on Proposition 1. They are indeed solutions to Problems 2 and 3, respectively.
Proposition 2.
Let $T$ be an $n$-vertex positively weighted tree with $E(T)=\{e_i\}_{i=1}^n$. If we get $\{w_i,\alpha_i,\beta_i\ge 0\}_{i=1}^n$, then we can find an $r$ such that $W_{\alpha_r,\beta_r}(T_{\epsilon_r\backslash e_r})=\min\{W_{\alpha_i,\beta_i}(T_{\epsilon_i\backslash e_i})\}_{i=1}^n$ and compute $W_{\alpha_r,\beta_r}(T_{\epsilon_r\backslash e_r})$ in an average time of $O(n\log n)$ and the best time of $O(n)$, where $\epsilon_i$ with $w_i=w(\epsilon_i)$ is a BS of $e_i$ w.r.t. $W_{\alpha_i,\beta_i}$, $0<i\le n$.
Proposition 3.
Let $T$ be an $n$-vertex positively weighted tree with $E(T)=\{e_i\}_{i=1}^n$. If we get $\{w_i,\alpha_i,\beta_i\}_{i=1}^n$, then we can find a BS $\epsilon_i$ with $w_i=w(\epsilon_i)$ w.r.t. $W_{\alpha_i,\beta_i}$ for every $e_i$ and compute $W_{\alpha_i,\beta_i}(T_{\epsilon_i\backslash e_i})$, $0<i\le n$, in an average time of $O(n\log n)$ and the best time of $O(n)$.
For simplicity, we avoided adding some facts that improve the practical performance of the BS algorithm. For example, by Corollary 3, a leaf is not a Root of a tree with more than 2 vertices. In the BS algorithm, we require either a Root or a neighbor of a Root. Thus, if $T_{uv}$ or $T_{vu}$ is not trivial (a single vertex), the version of $T$ without the leaves can be used. That improves the actual performance of the BS algorithm.
Proposition 4.
For a simple tree $T$, there are exactly $W(T)$ distinct switches. More precisely,

$|\{T_{\epsilon\backslash e}\mid(\epsilon,e)\in E(T^c\cup T)\times E(T),\ T_{\epsilon\backslash e}\ \text{is a switch}\}|=W(T),$

and if $xy\in E(T^c\cup T)$ and $uv\in E(T)$,

$|\{T_{xy\backslash e}\mid e\in E(T),\ T_{xy\backslash e}\ \text{is a switch}\}|=d_T(x,y),$

$|\{T_{\epsilon\backslash uv}\mid\epsilon\in E(T^c\cup T),\ T_{\epsilon\backslash uv}\ \text{is a switch}\}|=n_T(T_{vu})\cdot n_T(T_{uv}).$
Proof. 
Let $T$ be a simple tree. Using Definition 5, $xy\in E(T^c)\cup E(T)$ is a switch of $e\in E(T)$ if and only if $e\in P_T(x,y)$; otherwise, $T_{xy\backslash e}$ is disconnected and hence not a switch. Therefore, there are exactly $d_T(x,y)$ distinct switches regarding $xy\in E(T^c\cup T)$. Thus, the total number of distinct switches is as follows:

$|\{T_{xy\backslash e}\mid(xy,e)\in E(T^c\cup T)\times E(T),\ T_{xy\backslash e}\ \text{is a switch}\}|=\sum_{xy\in E(T^c\cup T)}d_T(x,y)=\sum_{\{x,y\}\subseteq V(T)}d_T(x,y)=W(T).$

And $|\{T_{xy\backslash e}\mid e\in E(T),\ T_{xy\backslash e}\ \text{is a switch}\}|=d_T(x,y)$. In addition, $T_{\epsilon\backslash uv}$ is a tree if and only if $\epsilon$ is an edge between $T_{uv}$ and $T_{vu}$. Thus, $|\{T_{\epsilon\backslash uv}\mid\epsilon\in E(T^c\cup T),\ T_{\epsilon\backslash uv}\ \text{is a switch}\}|=n_T(T_{vu})\cdot n_T(T_{uv})$. This completes the proof. □
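Under the stated setting, Proposition 4's count can be verified by brute force on a small simple tree: the Python sketch below (ours; the tree is hypothetical) counts the pairs $(\epsilon,e)$ for which $T_{\epsilon\backslash e}$ stays connected and compares the count with $W(T)$ computed as the sum of all pairwise distances.

# Count switch pairs (epsilon, e) of a small simple tree and compare with W(T).
from collections import defaultdict
from itertools import combinations

edges = [(0, 1), (1, 2), (1, 3), (3, 4)]          # hypothetical simple tree
vertices = sorted({v for e in edges for v in e})

def connected(edge_list):
    """Is the graph on `vertices` with these edges connected?"""
    g = defaultdict(list)
    for u, v in edge_list:
        g[u].append(v)
        g[v].append(u)
    seen, stack = {vertices[0]}, [vertices[0]]
    while stack:
        x = stack.pop()
        for y in g[x]:
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return len(seen) == len(vertices)

def dist(s, t):
    """Hop distance in the tree."""
    g = defaultdict(list)
    for u, v in edges:
        g[u].append(v)
        g[v].append(u)
    d, stack = {s: 0}, [s]
    while stack:
        x = stack.pop()
        for y in g[x]:
            if y not in d:
                d[y] = d[x] + 1
                stack.append(y)
    return d[t]

switch_pairs = sum(
    connected([ed for ed in edges if ed != e] + [eps])    # T - e + eps is a switch
    for e in edges
    for eps in combinations(vertices, 2)
)
wiener = sum(dist(x, y) for x, y in combinations(vertices, 2))
print(switch_pairs, wiener)    # both equal W(T) = 18 for this tree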
For $uv\in E(T)$ of a simple tree $T$ with $n$ vertices, by definition, $n-1\le n_T(T_{vu})\cdot n_T(T_{uv})\le n^2/4$. Also, by [26,27,28], $(n-1)^2\le W(T)\le\binom{n+1}{3}$.

4. Some Comparisons and Discussion

In this section, we compare our method with some existing methods. For brevity, by complexity we mean average time complexity. The compared methods do not solve exactly the same problems that we do, but it is possible to convert them with some effort.
The top tree method can find a positively weighted tree's median(s), which is almost equivalent to finding the Root of a tree in our method, disregarding $\alpha$ and $\beta$. Thus, for an $n$-vertex tree $T$ and $uv\in E(T)$, we can, in most cases, convert the top tree methods [7,8,9] to find a BS w.r.t. $W_{\alpha=1,\beta=0}$ in $O(\log n)$-time. However, if $uv$ connects a Root of $T_{uv}$ and a Root of $T_{vu}$, then the top tree method does not work. They also do not compute $W_{\alpha,\beta}$ of the switches regularly, which does not let us solve Problem 2 even for their specific case $\alpha=1,\beta=0$. Generalizing the top tree method, we may be able to solve Problem 1 for a fixed $\alpha,\beta$ and compute $W_{\alpha,\beta}$, but this is intuitively overcomplicated. Moreover, if we renew the initial values of $\alpha,\beta$, then their $O(n)$-time preprocessing has to be repeated, which is costly. As we have seen, our method in the BS algorithm solves Problem 1 in $O(\log n)$ for any $\alpha,\beta\ge 0$, computes the $W_{\alpha,\beta}$'s, and does not require a fresh preprocessing after a change of $\alpha,\beta$. Our method updates the preprocessing $Q_T$ in $O(1)$-time after an edge-weight change, whereas with top trees an edge-weight update takes up to $O(\log n)$-time.
For another comparison, we borrow the solutions of the best swap edge of multiple-source routing tree algorithms [15,19,20,21,29,30] for Problem 3. Suppose $T$ is a spanning tree of a positively edge-weighted graph $G$ with $m$ edges and $n$ vertices. In [15,19,20,21,29,30], researchers choose the BS from $E(G)\setminus E(T)$, whereas we choose from $E(T^c)$. Therefore, by choosing $G$ to be $K_n$, the mentioned algorithms can solve Problem 3. Also, they choose the BS w.r.t. the routing cost rather than its generalization $W_{\alpha,\beta}$. Let $S\subseteq V(G)$ be a set of sources with $|S|>1$. Setting $f_G(v)=1$ for all $v\in S$ and $f_G(v)=0$ for all $v\in V(G)\setminus S$ results in

$W_{1,1}(G)=RC(G,S).$

That is, by specifying the vertex weights as mentioned and letting $\alpha_i=1$, $\beta_i=1$ for $1\le i\le n$, and $G=K_n$, the algorithms of [20,21] will solve Problem 3. The mentioned algorithms are expensive, specifically for a single failed edge, and after updating edge weights or inserting a BS, their $O(n^2)$ preprocessing is no longer valid. They also do not compute the $W_{\alpha,\beta}$'s. More importantly, regarding the mentioned conversion, they solve Problem 3 in more than $O(n^2\log^2 n)$-time and up to $O(n^3)$, depending on the number of sources, where $\alpha_i=1$, $\beta_i=1$, $1\le i\le n$, and the sources are specified. In the terminology of [20,21,22], we do not limit the number of sources, and $\alpha_i,\beta_i\in\mathbb{R}^{\ge 0}$, $1\le i\le n$, and we attain an average time of $O(n\log n)$ and the best time of $O(n)$ for Problem 3.
As a general fact, note that with a brute-force strategy and the distance matrix of the positively edge-weighted graph, computing $W_{\alpha,\beta}(T)$ takes $O(n^3)$. Moreover, there are between $n-1$ and $n^2/4$ switches regarding an edge. Therefore, using the brute-force strategy, we would need between $O(n^4)$ and $O(n^5)$ time to compute a switch with the minimum $W_{\alpha,\beta}$, while our method executes that in $O(\log n)$ with the same flexibility. This difference is significant. Note that the existing methods discussed are comparable to ours under some constraints, but they have much less flexibility.
Our tools are general and can be applied to similar optimization problems. For instance, see [15], where we use our method for the best swap edge of spanning trees. Another interesting problem is solving the BS problems regarding $D^k(T)=\sum_{\{u,v\}\subseteq V(T)}(d(u,v))^k$ by considering the weight $\sigma^k_T(v)=\sum_{x\in V(T)}d_T^k(v,x)$ for the vertices of a tree $T$. See Appendix A, where we have some computations related to the derivative and integral of $\sigma^2_T$.
Finding the minimum $p$-source routing cost spanning tree is an NP-hard problem for any $p>1$ and a polynomial problem for $p=1$ [29,30,31].

Author Contributions

Methodology, M.H.K.; Resources, A.-H.E. All authors have read and agreed to the published version of the manuscript.

Funding

This material is based upon work supported by the NSF Program on Fairness in AI in collaboration with Amazon under grant IIS-1939368. Any opinion, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or Amazon.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Let $T$ be a tree and $ax\in E(T)$. Then, using the definition ($\sigma^k_T(v)=\sum_{u\in V(T)}d_T^k(v,u)$, $\forall v\in V(T)$):

$\sigma^2_T(a)=\sum_{v\in V(T_{ax})}d_T^2(a,v)+\sum_{v\in V(T_{xa})}[\,d_T^2(x,v)+w(ax)^2+2w(ax)\,d_T(x,v)\,],$

$\sigma^2_T(x)=\sum_{v\in V(T_{ax})}[\,d_T^2(a,v)+w(ax)^2+2w(ax)\,d_T(a,v)\,]+\sum_{v\in V(T_{xa})}d_T^2(x,v).$

$(\sigma^2_T)'(a\to x)=\dfrac{\sigma^2_T(x)-\sigma^2_T(a)}{w_T(ax)}=\dfrac{w(ax)^2\,[\,n_T(T_{ax})-n_T(T_{xa})\,]+2w(ax)\,[\,\sigma_{T_{ax}}(a)-\sigma_{T_{xa}}(x)\,]}{w_T(ax)}$

$=w(ax)\,\sigma_T'(a\to x)+2\,[\,\sigma_{T_{ax}}(a)-\sigma_{T_{xa}}(x)\,].$
Also, one can use the same idea as in Equation (11) and see that

$\Delta D^2(T_{xy\backslash uv})=D^2(T_{xy\backslash uv})-D^2(T)=2\,[\,\sigma_{T_{uv}}(x)\,\sigma_{T_{vu}}(y)-\sigma_{T_{uv}}(u)\,\sigma_{T_{vu}}(v)\,]+n_T(T_{uv})\cdot n_T(T_{vu})\,[\,w(xy)-w(uv)\,]+n_T(T_{vu})\,[\,\sigma^2_{T_{uv}}(x)-\sigma^2_{T_{uv}}(u)\,]+n_T(T_{uv})\,[\,\sigma^2_{T_{vu}}(y)-\sigma^2_{T_{vu}}(v)\,]+2n_T(T_{vu})\,[\,\sigma_{T_{uv}}(x)-\sigma_{T_{uv}}(u)\,]+2n_T(T_{uv})\,[\,\sigma_{T_{vu}}(y)-\sigma_{T_{vu}}(v)\,].$

We had that $\sigma_T$ is consistent; thus, by Theorem 2,

$\int_u^x \sigma_{T_{uv}}'\,d_{T_{uv}}=\sigma_{T_{uv}}(x)-\sigma_{T_{uv}}(u),\qquad\text{and}\qquad\int_v^y \sigma_{T_{vu}}'\,d_{T_{vu}}=\sigma_{T_{vu}}(y)-\sigma_{T_{vu}}(v).$

Therefore,

$\Delta D^2(T_{xy\backslash uv})=2\,\Big[\Big(\int_u^x \sigma_{T_{uv}}'\,d_{T_{uv}}+\sigma_{T_{uv}}(u)\Big)\Big(\int_v^y \sigma_{T_{vu}}'\,d_{T_{vu}}+\sigma_{T_{vu}}(v)\Big)-\sigma_{T_{uv}}(u)\,\sigma_{T_{vu}}(v)\Big]+n_T(T_{vu})\int_u^x (\sigma^2_{T_{uv}})'\,d_{T_{uv}}+n_T(T_{uv})\int_v^y (\sigma^2_{T_{vu}})'\,d_{T_{vu}}+2n_T(T_{vu})\int_u^x \sigma_{T_{uv}}'\,d_{T_{uv}}+2n_T(T_{uv})\int_v^y \sigma_{T_{vu}}'\,d_{T_{vu}}+n_T(T_{uv})\cdot n_T(T_{vu})\,[\,w(xy)-w(uv)\,].$

References

  1. Alexopoulos, C.; Jacobson, J.A. State space partition algorithms for stochastic systems with applications to minimum spanning trees. Networks 2000, 35, 118–138. [Google Scholar] [CrossRef]
  2. Aziz, F.; Gul, H.; Uddin, I.; Gkoutos, G.V. Path-based extensions of local link prediction methods for complex networks. Sci. Rep. 2020, 10, 19848. [Google Scholar] [CrossRef] [PubMed]
  3. Dong, Z.; Chen, Y.; Tricco, T.S.; Li, C.; Hu, T. Hunting for vital nodes in complex networks using local information. Sci. Rep. 2021, 11, 9190. [Google Scholar] [CrossRef] [PubMed]
  4. Hamilton, W.L.; Ying, R.; Leskovec, J. Inductive representation learning on large graphs. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, Red Hook, NY, USA, 4–9 December 2017; pp. 1025–1035. [Google Scholar]
  5. Zhou, T. Progresses and challenges in link prediction. iScience 2021, 24, 103217. [Google Scholar] [CrossRef] [PubMed]
  6. Kanevsky, A.; Tamassia, R.; Battista, G.D.; Chen, J. On-line maintenance of the four-connected components of a graph. In Proceedings of the 32nd FOCS, San Juan, Puerto Rico, 1–4 October 1991; pp. 793–801. [Google Scholar]
  7. Alstrup, S.; Holm, J.; de Lichtenberg, K.; Thorup, M. Maintaining information in fully dynamic trees with top trees. ACM Trans. Algorithm 2005, 1, 243–264. [Google Scholar] [CrossRef]
  8. Alstrup, S.; Holm, J.; Thorup, M. Maintaining center and median in dynamic trees. In Proceedings of the 8th Scandinavian Workshop on Algorithm Theory, Berlin, Germany, 3–5 July 2000; pp. 46–56. [Google Scholar]
  9. Wang, H.-L. Maintaining centdians in a fully dynamic forest with top trees. Discrete App. Math. 2015, 181, 310–315. [Google Scholar] [CrossRef]
  10. Gabow, H.N.; Kaplan, H.; Tarjan, R.E. Unique maximum matching algorithms. J. Algorithms 2001, 40, 159–183. [Google Scholar] [CrossRef]
  11. Holm, J.; de Lichtenberg, K.; Thorup, M. Poly-logarithmic deterministic fully-dynamic algorithms for connectivity, minimum spanning tree, 2-edge and biconnectivity. J. ACM 2001, 48, 723–760. [Google Scholar] [CrossRef]
  12. Nardelli, E.; Proietti, G.; Widmayer, P. Finding all the best swaps of a minimum diameter spanning tree under transient edge failures. J. Graph Algorithms Appl. 2001, 5, 39–57. [Google Scholar] [CrossRef]
  13. Baste, J.; Gözüpek, D.; Paul, C.; Sau, I.; Shalom, M.; Thilikos, D.M. Parameterized complexity of finding a spanning tree with minimum reload cost diameter. Networks 2020, 75, 259–277. [Google Scholar] [CrossRef]
  14. Biloò, D.; Colella, F.; Gualà, L.; Leucci, S.; Proietti, G. An Improved Algorithm for Computing All the Best Swap Edges of a Tree Spanner. Algorithmica 2020, 82, 279–299. [Google Scholar] [CrossRef]
  15. Fischetti, M.; Lancia, G.; Serafini, P. Exact algorithms for minimum routing cost trees. Networks 2002, 39, 161–173. [Google Scholar] [CrossRef]
  16. Gfeller, B.; Santoro, N.; Widmayer, P. A Distributed Algorithm for Finding All Best Swap Edges of a Minimum-Diameter Spanning Tree. IEEE Trans. Dependable Secur. Comput. 2011, 8, 1–12. [Google Scholar] [CrossRef]
  17. Kobayashi, M.; Okamoto, Y. Submodularity of minimum-cost spanning tree games. Networks 2014, 63, 231–238. [Google Scholar] [CrossRef]
  18. Seth, P. Sensitivity analysis of minimum spanning trees in sub-inverse-Ackermann time. In Proceedings of the 16th International Symposium on Algorithms and Computation, Sanya, China, 19–21 December 2005; pp. 964–973. [Google Scholar]
  19. Wu, B.Y.; Lancia, G.; Bafna, V.; Chao, K.-M.; Ravi, R.; Tang, C.Y. A polynomial time approximation scheme for minimum routing cost spanning trees. SIAM J. Comput. 1999, 29, 761–777. [Google Scholar] [CrossRef]
  20. Bilò, D.; Gualà, L.; Proietti, G. Finding Best Swap Edges Minimizing the Routing Cost of a Spanning Tree. Algorithmica 2014, 68, 337–357. [Google Scholar] [CrossRef]
  21. Wu, B.Y.; Hsiao, C.-Y.; Chao, K.-M. The swap edges of a multiple-sources routing tree. Algorithmica 2008, 50, 299–311. [Google Scholar] [CrossRef]
  22. Khalifeh, M.H.; Esfahanian, A.-H. A Faster Algorithm for Finding Best Swap Edges of multiple-source Routing Tree. Networks 2023. submitted. [Google Scholar]
  23. Meyerson, A.; Tagiku, B. Minimizing Average Shortest Path Distances via Shortcut Edge Addition. In Approximation, Randomization, and Combinatorial Optimization, Algorithms and Techniques: 12th International Workshop, APPROX 2009, and 13th International Workshop, RANDOM 2009, Berkeley, CA, USA, August 21–23, 2009, Proceedings; Springer: Berlin/Heidelberg, Germany, 2009; pp. 272–285. [Google Scholar]
  24. Wiener, H. Structural determination of the paraffin boiling points. J. Am. Chem. Soc. 1947, 69, 17–20. [Google Scholar] [CrossRef]
  25. Shen, Z. The average diameter of general tree structures. Comput. Math. Appl. 1998, 36, 111–130. [Google Scholar] [CrossRef]
  26. Dunklemann, P.; Entringer, R.C. Average distance, minimum degree and spanning trees. J. Graph Theory 2000, 33, 1–13. [Google Scholar] [CrossRef]
  27. Roger, C.E.; Douglas, E.J.; Snyder, D.A. Distance in graphs. Czech. Math. J. 1976, 26, 283–296. [Google Scholar]
  28. McKay, B.D. The expected eigenvalue distribution of a large regular graph. Linear Algebra Its Appl. 1981, 40, 203–216. [Google Scholar] [CrossRef]
  29. Bilò, D.; Gualà, L.; Proietti, G. A faster computation of all the best swap edges of a shortest paths tree. Algorithmica 2015, 73, 547–570. [Google Scholar] [CrossRef]
  30. Wu, B.Y. A polynomial time approximation scheme for the two-source minimum routing cost spanning trees. J. Algorithms 2002, 44, 359–378. [Google Scholar] [CrossRef]
  31. Wu, B.Y. Approximation algorithms for the optimal p-source communication spanning tree. Discrete App. Math. 2004, 143, 31–42. [Google Scholar] [CrossRef]
Figure 1. A vertex and edge-weighted graph on 5 vertices.