1. Introduction
In information theory, entropy measures the amount of uncertainty or randomness in a dataset. It is often used to quantify the amount of information in a message or signal. The entropy of a probability distribution is defined as the negative sum of the probabilities of each event multiplied by the logarithm of the probabilities. A high entropy indicates that the data are highly uncertain or random. In contrast, a low entropy suggests that the data are more predictable or certain.
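The definition above translates directly into a few lines of code. The following is a minimal illustrative sketch (the function name and the example distributions are ours, not from this paper):

```python
import math

def shannon_entropy(probs, base=2):
    """Shannon entropy H = -sum(p * log(p)) of a discrete distribution.

    Zero-probability events contribute nothing, by the usual
    convention 0 * log(0) = 0.
    """
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# A uniform distribution is maximally uncertain (high entropy) ...
print(shannon_entropy([0.25] * 4))                # 2.0 bits
# ... while a sharply peaked one is predictable (low entropy).
print(shannon_entropy([0.97, 0.01, 0.01, 0.01]))  # ~0.24 bits
```

The uniform four-event distribution attains the maximum of 2 bits, illustrating the high-entropy/low-entropy contrast described above.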
Graph entropy is a measure used to quantify the randomness or disorder of a graph. It is based on the concept of entropy from information theory and is used to analyze the structure of complex networks. The entropy of a graph measures the number of ways that the edges of the graph can be rearranged while preserving the number of vertices and edges. Modern information theory began with Shannon's seminal work [1] in the late 1940s, and information-theoretic measures were subsequently developed for computing the structural information content of organic and complex graphs [2,3]. Shannon's entropy formulas were applied to explore the structural information content of a system. The contributions of Rashevsky and Trucco [4,5] are closely associated with the network-based investigation of living systems, e.g., chemical and biological systems.
Furthermore, entropy measures have been broadly valuable in computer science, biology, and structural chemistry. Broadly speaking, entropic measures are applied to investigate the biological or chemical characteristics of molecular structures. Tan and Wu [6] introduced entropy measures that depend on vertex degree to reveal network heterogeneity. Graph entropy is used in fields such as network science, complex systems, computer science, social science, and physics to study the complexity and randomness of a network, classify it, compare it with other networks, and identify its most random parts [7,8,9,10,11,12,13].
There are presently two ways to employ information entropy in chemistry. First, it serves as a structural descriptor for quantifying the complexity of chemical compounds [14]. Entropy is helpful in this context for categorizing natural products and synthetic compounds [15,16], numerically identifying isomers of organic molecules [17], and correlating structural and physicochemical properties. The second application of information entropy is in physicochemical processes: Kobozev demonstrated its usefulness in analyzing physicochemical processes modeled as information transmission. Manzoor et al. [18] employed entropy values to study the chemical processes of molecular compounds. Abraham et al. [19] obtained entropy measures by computing systematic expressions of the self-powered degree-based topological indices for three reticular metal–organic frameworks with a zinc-based central unit and stringed binding linkers with varying benzene molecule counts. Shabbir and Nadeem [20] gave a comprehensive computational analysis of entropies based on degree-based topological indices for carbon nanotube Y-junctions. Shanmukha et al. [21] investigated the graph entropies of porous graphene using topological indices.
Fractal geometry is a fascinating field that explores self-similar shapes and patterns observed ubiquitously in nature, including trees, rivers, coastlines, mountains, shells, and hurricanes. The study of fractals not only contributes to our understanding of the inherent beauty and complexity of the natural world but also has far-reaching applications in various scientific disciplines, such as biology, computer networks, the physical sciences, and chemical graph theory. Sierpiński-type structures, in particular, have been extensively examined in both fractal theory and practice, with a wealth of research demonstrating their relevance and usefulness across diverse fields [22,23,24].
One significant aspect of fractal structures that warrants further investigation is their entropy, which measures the degree of disorder or randomness within a system. The entropy of fractals has important implications for understanding the thermodynamics, information content, and stability of these complex systems, which can, in turn, impact a wide range of practical applications, such as the design of more efficient data storage and compression techniques, the modeling of complex biological processes, and the optimization of computer networks. Given the ubiquity and significance of fractal structures in nature and their applications across diverse scientific fields, a comprehensive study of their entropy is timely and essential. Thus, a thorough exploration of the entropy of fractal structures not only addresses a critical gap in the current literature but also holds the potential to uncover novel insights and advance our understanding of these intricate systems. We hope this paper will serve as a valuable contribution to the ongoing discourse on fractals and their entropy, providing readers with a compelling reason to delve deeper into this intriguing and impactful topic.
Let $G$ be a graph of order $n$ and $t$ a positive integer. We denote by $V(G)^t$ the set of words of length $t$ on the alphabet $V(G)$. The letters of a word $u$ of length $t$ are denoted by $u_1 u_2 \cdots u_t$. The concatenation of two words $u$ and $w$ is denoted by $uw$. Klavžar and Milutinović [25] defined the graph $S(n,t)$ with vertex set $\{1, 2, \ldots, n\}^t$, where $\{u, w\}$ is an edge if and only if there exists $i \in \{1, \ldots, t\}$ such that (i) $u_j = w_j$ for $j < i$; (ii) $u_i \neq w_i$; and (iii) $u_j = w_i$ and $w_j = u_i$ for $j > i$.
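The adjacency condition above can be tested directly on a pair of words. The following is an illustrative sketch (the function name is ours); words are represented as equal-length tuples over $\{1, \ldots, n\}$:

```python
def is_edge_snt(u, w):
    """Test whether words u, w are adjacent in the Sierpinski
    graph S(n, t): equal prefixes up to the first differing
    position i, then u repeats w_i and w repeats u_i."""
    if len(u) != len(w) or u == w:
        return False
    # i = first position where the words differ
    i = next(k for k in range(len(u)) if u[k] != w[k])
    # after position i, u must repeat w_i and w must repeat u_i
    return all(u[j] == w[i] and w[j] == u[i] for j in range(i + 1, len(u)))

# In S(3, 2), vertex 12 is adjacent to 21 but not to 22
print(is_edge_snt((1, 2), (2, 1)))  # True
print(is_edge_snt((1, 2), (2, 2)))  # False
```

Note that for $t = 1$ the condition degenerates to "the single letters differ", so $S(n,1)$ is the complete graph $K_n$, consistent with the recursive picture below.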
Subsequently, the term Sierpiński graphs was coined in [26], and since then these graphs have been investigated from various perspectives. For example, Imran et al. [27] investigated the topological properties of Sierpiński graphs. Gravier et al. [28] investigated identifying codes, locating-dominating codes, and total-dominating codes in Sierpiński graphs. Hinz and Heide [29] defined an algorithm that uses three automata, together with the fact that there are at most two internally vertex-disjoint shortest paths between any two vertices, to determine all shortest paths in Sierpiński graphs. Klavžar et al. proved that for any $n$ and $t$, the Sierpiński graph $S(n,t)$ has a unique 1-perfect code if $t$ is even, and precisely $n$ 1-perfect codes if $t$ is odd. The concept of an almost-extreme vertex of $S(n,t)$ was introduced by Klavžar and Zemljič [30] as a vertex that is either adjacent to an extreme vertex of $S(n,t)$ or incident to an edge between two subgraphs of $S(n,t)$ isomorphic to $S(n,t-1)$. They also deduced explicit formulas for the distance in $S(n,t)$ between an arbitrary vertex and an almost-extreme vertex. Hinz and Parisse [31] studied the eccentricity of an arbitrary vertex of Sierpiński graphs, where the main result gave an expression for the average eccentricity of $S(n,t)$. Rodríguez-Velázquez et al. [32] obtained closed formulae for several parameters of generalized Sierpiński graphs $S(G,t)$ in terms of parameters of the base graph $G$; in particular, they studied the chromatic, vertex cover, clique, and domination numbers. Ishfaq et al. [33] computed the Zagreb and forgotten invariants for extended Sierpiński graphs and obtained sharp bounds by applying different parameters. For more details and general background on Sierpiński graphs, we refer to the comprehensive survey [34].
This construction of Sierpiński graphs was generalized by Gravier et al. [35] for any graph $G$ by defining the $t$-th generalized Sierpiński graph of $G$, denoted by $S(G,t)$, as the graph with vertex set $V(G)^t$ and edge set defined as follows: $\{u, w\}$ is an edge if and only if there exists $i \in \{1, \ldots, t\}$ such that (i) $u_j = w_j$ for $j < i$; (ii) $u_i \neq w_i$ and $\{u_i, w_i\} \in E(G)$; and (iii) $u_j = w_i$ and $w_j = u_i$ for $j > i$. Notice that if $\{u, w\}$ is an edge of $S(G,t)$, there is an edge $\{x, y\}$ of $G$ and a word $z$ such that $u = z x y y \cdots y$ and $w = z y x x \cdots x$. In general, $S(G,t)$ can be constructed recursively from $G$ with the following process: $S(G,1) = G$ and, for $t \geq 2$, we copy $S(G,t-1)$ $n$ times and add the letter $x$ at the beginning of each label of the vertices belonging to the copy of $S(G,t-1)$ corresponding to $x$. Then, for every edge $\{x, y\}$ of $G$, we add an edge between vertex $x y y \cdots y$ and vertex $y x x \cdots x$. Vertices of the form $x x \cdots x$ are called extreme vertices. For any graph $G$ of order $n$ and any integer $t \geq 2$, $S(G,t)$ has $n$ extreme vertices and, if $x$ has degree $d(x)$ in $G$, then the extreme vertex $x x \cdots x$ of $S(G,t)$ also has degree $d(x)$. Moreover, the degrees of the two vertices $x y y \cdots y$ and $y x x \cdots x$, which connect two copies of $S(G,t-1)$, are equal to $d(x)+1$ and $d(y)+1$, respectively. Finally, an extended Sierpiński graph is obtained by introducing a new vertex $x$ in the generalized Sierpiński graph $S(G,t)$ and connecting it with the extreme vertices.
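The recursive construction just described can be sketched in a few lines of code (an illustrative implementation under our own naming, not from the paper; for $G = K_n$ it reproduces the classical $S(n,t)$):

```python
def sierpinski_edges(vertices, base_edges, t):
    """Edge list of the generalized Sierpinski graph S(G, t).

    vertices:   list of G's vertices
    base_edges: list of pairs (x, y), the edges of G
    t:          number of iterations (t >= 1)
    Vertices of S(G, t) are represented as length-t tuples.
    """
    # S(G, 1) = G, with vertices written as 1-letter words
    edges = [((x,), (y,)) for x, y in base_edges]
    for level in range(2, t + 1):
        new_edges = []
        # one copy of the previous level per base vertex x,
        # prefixing x to every label in that copy
        for x in vertices:
            new_edges += [((x,) + u, (x,) + w) for u, w in edges]
        # connect the copies: for each edge {x, y} of G, join
        # the vertex x y y ... y with the vertex y x x ... x
        for x, y in base_edges:
            new_edges.append(((x,) + (y,) * (level - 1),
                              (y,) + (x,) * (level - 1)))
        edges = new_edges
    return edges

K3 = [(1, 2), (1, 3), (2, 3)]
print(len(sierpinski_edges([1, 2, 3], K3, 2)))  # 12 edges in S(3, 2)
```

Each iteration multiplies the edge count by $n$ and adds one connecting edge per base edge, so for $K_3$ the counts are 3, 12, 39, ... as expected.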
Topological descriptors are widely used in the field of chemical graph theory [36,37]. These descriptors can help estimate the physical and chemical properties of chemical networks. Over the past several decades, numerous topological invariants have been established and studied in the chemical literature [38,39] and have been applied to understand various properties of organic materials that depend on their molecular shape. The first topological descriptor was introduced by the chemist Wiener in 1947 to determine the boiling points of paraffins [40].
Gutman et al. [41,42] defined the first and second Zagreb indices:

$$M_1(G) = \sum_{uv \in E(G)} (d_u + d_v), \qquad (1)$$

$$M_2(G) = \sum_{uv \in E(G)} d_u d_v. \qquad (2)$$

Furtula et al. [43] proposed the forgotten topological index, defined as

$$F(G) = \sum_{uv \in E(G)} \left( d_u^2 + d_v^2 \right). \qquad (3)$$
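All three degree-based indices are straightforward to evaluate from an edge list. The following is a minimal sketch (function name is ours; the cycle $C_5$ is merely a convenient test case):

```python
from collections import Counter

def degree_indices(edges):
    """Return (M1, M2, F) for the simple graph given by its edge list."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    M1 = sum(deg[u] + deg[v] for u, v in edges)           # first Zagreb
    M2 = sum(deg[u] * deg[v] for u, v in edges)           # second Zagreb
    F  = sum(deg[u]**2 + deg[v]**2 for u, v in edges)     # forgotten
    return M1, M2, F

# Cycle C5: five edges, every vertex of degree 2
C5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(degree_indices(C5))  # (20, 20, 40)
```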
In this paper, we consider simple and finite graphs. A graph consists of a set of nodes and the connections between them, called edges. The order and size of a graph are the numbers of nodes and edges, respectively, usually denoted by $p$ and $q$. The degree of a vertex $v$, denoted by $d_v$, is the number of edges incident to the vertex. A graph is complete if a unique edge connects every pair of distinct vertices. We use $S_p$, $C_p$, and $K_p$ to denote the star, cycle, and complete graphs of order $p$, respectively. Throughout, we employ the notations $d_x$ and $d_y$ for the degrees of vertices $x$ and $y$.
1.1. General Entropy of Graphs
Chen et al. [44] introduced the concept of edge-partition-based graph entropy in 2014. For an edge-weight function $f$, it is given by Equation (4):

$$ENT_f(G) = -\sum_{uv \in E(G)} \frac{f(uv)}{\sum_{u'v' \in E(G)} f(u'v')} \log \left( \frac{f(uv)}{\sum_{u'v' \in E(G)} f(u'v')} \right). \qquad (4)$$
1.1.1. The First Zagreb Entropy
If $f(uv) = d_u + d_v$, then the first Zagreb entropy is obtained by simplifying Equation (4) as follows:

$$ENT_{M_1}(G) = \log\!\big(M_1(G)\big) - \frac{1}{M_1(G)} \log \prod_{uv \in E(G)} (d_u + d_v)^{(d_u + d_v)}. \qquad (5)$$
1.1.2. The Second Zagreb Entropy
If $f(uv) = d_u d_v$, then the second Zagreb entropy is obtained by simplifying Equation (4) as follows:

$$ENT_{M_2}(G) = \log\!\big(M_2(G)\big) - \frac{1}{M_2(G)} \log \prod_{uv \in E(G)} (d_u d_v)^{(d_u d_v)}. \qquad (6)$$
1.1.3. The Forgotten Entropy
If $f(uv) = d_u^2 + d_v^2$, then the forgotten entropy is obtained by simplifying Equation (4) as follows:

$$ENT_{F}(G) = \log\!\big(F(G)\big) - \frac{1}{F(G)} \log \prod_{uv \in E(G)} \left( d_u^2 + d_v^2 \right)^{\left( d_u^2 + d_v^2 \right)}. \qquad (7)$$
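The three entropies above share the same edge-partition template, differing only in the edge-weight function $f$, so a single routine covers all of them. The following is a minimal illustrative sketch (function name is ours; natural logarithm; $C_5$ is only a toy input):

```python
import math
from collections import Counter

def graph_entropy(edges, f):
    """Edge-partition entropy: ENT = -sum (f(e)/S) * log(f(e)/S),
    where S is the sum of f over all edges of the graph."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    vals = [f(deg[u], deg[v]) for u, v in edges]
    S = sum(vals)
    return -sum((x / S) * math.log(x / S) for x in vals)

C5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
# In a regular graph every edge gets the same weight, so each of the
# three entropies reduces to log(|E|), whatever index is used.
print(graph_entropy(C5, lambda du, dv: du + dv))         # first Zagreb
print(graph_entropy(C5, lambda du, dv: du * dv))         # second Zagreb
print(graph_entropy(C5, lambda du, dv: du**2 + dv**2))   # forgotten
```

The collapse to $\log |E|$ for regular base graphs is one reason the comparison below across cycle, star, and complete base graphs is informative: differences emerge only where the Sierpiński construction introduces degree heterogeneity.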
We apply the edge partition method to Sierpiński graphs, which has not been previously performed. We analyze the entropy of these fractal-type networks using first and second Zagreb and forgotten topological indices. We provide a comparative analysis of entropies calculated using Zagreb and forgotten indices by considering different base graphs (cycle, star, and complete graph) for the Sierpiński network. This comparison enables researchers to evaluate the performance of these indices in terms of their ability to capture the complexity and randomness of Sierpiński networks.