Review

A Review on Immune-Inspired Node Fault Detection in Wireless Sensor Networks with a Focus on the Danger Theory

by
Dominik Widhalm 
1,*,
Karl M. Goeschka 
1 and
Wolfgang Kastner 
2
1
Department Electronic Engineering, University of Applied Sciences Technikum Wien, 1200 Vienna, Austria
2
Automation Systems Group, Faculty of Informatics, TU Wien, 1040 Vienna, Austria
*
Author to whom correspondence should be addressed.
Sensors 2023, 23(3), 1166; https://doi.org/10.3390/s23031166
Submission received: 20 December 2022 / Revised: 9 January 2023 / Accepted: 17 January 2023 / Published: 19 January 2023

Abstract:
The use of fault detection and tolerance measures in wireless sensor networks is inevitable to ensure the reliability of the data sources. In this context, immune-inspired concepts offer suitable characteristics for developing lightweight fault detection systems, and previous works have shown promising results. In this article, we provide a literature review of immune-inspired fault detection approaches in sensor networks proposed in the last two decades. We discuss the unique properties of the human immune system and how the reviewed approaches exploit them. With the information from the literature review extended with the findings of our previous works, we discuss the limitations of current approaches and consequent future research directions. We have found that immune-inspired techniques are well suited for lightweight fault detection, but there are still open questions concerning their effective and efficient use in sensor networks.

1. Introduction

Our information society is hungry for data. We gather and analyze an ever-increasing amount of data captured from an expanding number of sources. These data are essential for a plethora of data services that provide us with insights into existing processes or predictions for future events, both being used in industry and academia. Examples include process automation, precision agriculture, or research that leverages the available data for event detection, trend prediction, process analysis, or decision support. The data services heavily depend on the input data’s timely availability and fine-grained quality. Inaccurate or false data lead to erroneous information and can ultimately result in incorrect findings and/or wrong (counter-)actions.
In this context, wireless sensor networks (WSNs) have become an essential source of fine-grained data about phenomena or events. Today, they are used in a wide range of services (cf. [1]). WSNs consist of wirelessly connected sensor nodes deployed in an area of interest to monitor physical quantities close to their source and, thus, provide data with a high level of detail. In most applications, the sensor nodes perform some (pre)processing of the measurements and forward the data to central services for further processing (i.e., cloud systems). In these central services, the data are eventually fed to statistical, machine learning, or other methods as part of the data services to extract information.
However, during the data analysis, data instances may deviate from an expected or previously learned “normal” behavior. The sensor data can have outliers, show suspicious behavior in the form of offsets and/or drifts, or differ from the data reported by other sensor nodes in the same neighborhood. Because such anomalies can potentially reflect an event in the observed area, anomaly detection approaches are widely used to detect data events in WSN applications.
Nevertheless, not all anomalies are related to actual events in the monitored environment; they can also stem from faults in the data chain. Sensor node faults, in particular, have been found to negatively impact the overall quality of the data reported by a WSN. Sensor nodes are dedicated embedded systems with strictly limited resources that often prevent the use of well-established fault-tolerance concepts, such as hardware and/or software redundancy. Most of all, the energy budget available on the sensor nodes is bounded since most nodes are battery-powered. They are expected to operate over long periods of time without the possibility of battery recharging or replacement. Additionally, sensor nodes usually consist of low-cost components that are prone to experiencing faults when being operated under the unpredictable and uncontrollable conditions imposed by outdoor environments. Consequently, proper runtime measures are inevitable to ensure the correctness and accuracy of the data reported by the sensor nodes.
Over the years, numerous approaches and concepts that meet the requirements of sensor nodes have been proposed to tackle the problem of fault detection in WSNs. In this context, concepts inspired by the natural anomaly detection capabilities of the human immune system have gained broad interest from the research community. The immune system offers desirable properties for computing systems, such as its widely distributed and decentralized operation, its ability to perform temporal and spatial correlation, and its high detection rate with a low false alarm rate. Therefore, many researchers took inspiration from immune mechanisms when developing novel approaches for fault detection in resource-constrained systems such as WSNs.
In this article, we contribute with a review of immune-inspired fault detection approaches for WSNs proposed during the last two decades. Previous reviews are either outdated, focus on fault detection or immune-inspired techniques in separation, or target other areas of application than WSNs. Based on our review, we discuss the state of the art for immune-inspired detection approaches and highlight the beneficial properties of such mechanisms for fault detection on resource-limited sensor nodes. Nevertheless, the information from the literature review in combination with our findings from previous research (cf. [2]) revealed that current approaches entail several limitations and shortcomings. Consequently, we provide a discussion of these open problems and present future research directions necessary to cope with these.

1.1. Immune-Inspired Fault Detection

Sensor nodes are key components that significantly influence the sensor networks’ dependability, especially concerning the reliability of the data sources (cf. [3]). They need to operate in a reliable and energy-efficient way to ensure accurate data acquisition while operating unattended for long times. The prevalent use of low-cost components and the strictly limited resources in combination with the often harsh environmental conditions make sensor nodes prone to various types of faults. Therefore, fault detection or fault tolerance measures are crucial to ensure a reliable operation.
One branch of node fault detection schemes applicable to WSNs comprises immune-inspired approaches. Such approaches have shown promising capabilities and preliminary results for developing lightweight fault detection systems.
In [4], the authors claim that “… the process for characterizing a sensor network fault or anomaly is very similar to diagnosing an illness”. The authors of [4] draw the connection between the task of anomaly or fault detection in computing systems and the basic principles of immune systems. Similarly, the authors of [5,6] consider the normal behavior of a computer system to be free of anomalous occurrences, hence, “healthy”. Detecting unhealthy circumstances is precisely what the human immune system (HIS) achieves. In other words, anomaly detection systems and the HIS share the same goal, which is to keep the system stable despite a continuously changing environment [7]. Thus, applying immune-inspired techniques to detect deviations from a “normal” operation seems reasonable. Regarding the use in WSNs, the authors of [8] go even further and claim that the basic architecture of sensor networks has a high structural similarity to the biological cell structure.
For this reason, a new discipline arose in the early 1990s aiming at deriving computational models inspired by concepts from immunology, namely, the field of artificial immune systems (AISs) [9] (sometimes also referred to as computational immunology or immuno-computing). AISs are bio-inspired schemes leveraging the immune system’s characteristics for use in computational problem-solving [10,11]. The field of AISs encompasses a collection of algorithms that are models or abstractions of mechanisms observed in the HIS [12,13,14,15]. New insights into the functioning of the HIS and the processes involved in detecting infections and regulating the responses serve as inspiration for novel computational models.

1.2. Related Work

There are several reviews and surveys of immune-inspired approaches or fault detection techniques for WSNs available. Nevertheless, they are mostly outdated (published before 2010), do not consider WSNs, or focus either on immune-inspired approaches (mainly for security applications) or on fault diagnosis in separation. We did not find any recent review that targets WSNs and includes immune-inspired fault detection approaches.
We found several older reviews for computer networks that utilize immune mechanisms for intrusion detection systems (IDSs) [13,16,17,18]. Their included approaches are neither (directly) applicable to WSNs nor do they consider fault detection. Additionally, as they were published before 2010, they do not include the latest findings and developments of immune-inspired computing.
Similarly, more recent reviews and surveys focus on either immune-inspired security applications, target other types of systems, or do not consider approaches for fault detection based on immune mechanisms. For example, the survey provided in [19] discusses immune-inspired anomaly detection in WSNs, but focuses on their application for network intrusion and attack detection only. The authors do not include immune approaches for fault detection or diagnosis.
On the other hand, the survey presented in [20] targets multi-robot systems that have different characteristics from WSNs. Although the goals of fault detection in both types of systems are similar, their specifics and requirements differ, and so do the approaches that are feasibly applicable in both areas.
Moreover, there are reviews of approaches for fault detection in WSNs published in recent years, such as the surveys in [21,22]. In both surveys, however, only techniques based on statistics, correlations, or simple self-checks are discussed, but they do not include immune-inspired approaches. In the same way, past reviews and surveys often did not consider immune-inspired approaches in their discussion (cf. [23]), although such techniques have been shown suitable for lightweight fault detection in WSNs.
Consequently, we provide a review of immune-inspired fault detection approaches in this article that targets WSNs in particular and covers the last two decades, including recent works in the field. In addition, we highlight current research gaps for effective and efficient fault detection revealed by the literature review and strengthened by our findings from previous works [2].

1.3. Article Outline

The remainder of this article is structured as follows. We first present a concise history of immunological theories in Section 2 followed by an elaboration on the unique properties of the immune system in Section 3. Next, we briefly introduce the danger theory in Section 4. The four classical AIS theories are then presented in Section 5. One of the most commonly used immune-inspired algorithms for fault detection is the dendritic cell algorithm (DCA), as discussed in Section 6. In Section 7, we provide our literature review of immune-inspired fault detection approaches applicable to WSNs, where we particularly consider concepts based on the danger theory (i.e., DCA-based approaches). Based on the findings of our review, we elaborate on the open problems of current approaches and possible research directions to solve them in Section 8. With Section 9, we conclude this article with a short recap of the main findings of our review on immune-inspired fault detection approaches tailored for WSNs.

2. History and Immunological Theories

Several works [24,25,26] define the beginning of immunology with the discovery of the basic principle of immunization and phagocytosis by Pasteur and Metchnikoff in 1870. In 1890, von Behring discovered the presence of antibodies in the body of mammals, followed by the detection of cell receptors by Ehrlich around 1900. Based on the work of von Behring and Ehrlich, Bordet and Landsteiner found in the 1930s that antibodies have a particular specificity. Thus, they only react to certain other types of cells. From there, it took another 20 years until La Verne and Burnet started to work on a theory on clonal selection of specific lymphocytes (B and T cells in particular) in the 1950s, on whose basis Burnet developed the clonal selection theory in 1957 (cf. [27]).
These fundamental discoveries led to the development of the so-called self/non-self (SNS) model (or “one-signal model”) in 1959 [28]. The name “self/non-self” refers to the basic process of B cells, which is to distinguish between entities that originate from their own system (“self”) and those that are foreign to the host (“non-self”). Similarly, the name “one-signal model” originates from the primary hypothesis that the immune reaction is triggered by recognizing non-self entities. As a result, only one signaling factor is required to trigger an immune response (see Figure 1a).
Soon, the SNS model was challenged by Oudin et al. [29] with questions that the original model could not answer. As a consequence, Bretscher and Cohn proposed in 1969 their associative recognition theory, sometimes referred to as the “two-signal model” [30]. In their model, antigen recognition alone is insufficient to trigger an immune response. It requires a second “signal”, which they named help signal as shown in Figure 1b. This help signal is necessary to trigger the B cells. If only Signal 1 (antigen recognition) is present without the secondary help signal, the B cell simply dies.
Meanwhile, Jerne was working on another aspect of the HIS, which he published in 1973 as his idiotypic network theory [31]. Jerne focused on the interaction of the particular parts of the immune system and suggested that the immune system consists of complementary idiotypes and paratopes that coexist and form a kind of formal network. The idiotypes and paratopes act as stimulatory or suppressive factors in this network. Usually, these factors are balanced. If the stimulatory factors predominate, an immune response is triggered. For a long time, the idiotypic network theory was seen as a competitive model to Cohn and Bretscher’s associative recognition theory. However, today, Jerne’s propositions on the regulation of the HIS by such an idiotypic network are considered complementary to the prevalent models of immune response activation [32].
Lafferty and Cunningham further refined and extended the two-signal model in 1975 [33]. As depicted in Figure 1c, they claimed that the T helper cells themselves need to be co-stimulated by antigen-presenting cells (APCs) (e.g., dendritic cells (DCs)) to provide the help signal to the B cells. If the T helper cell recognizes Signal 1 (antigen recognition) but receives no co-stimulation from an APC, it dies. Consequently, it does not relay the co-stimulation as a help signal to the B cell, causing this to die, too. Thus, the presence of two signaling factors in conjunction is needed to trigger an immune response by activating the B cells:
  • Antigen recognition (i.e., the affinity between T cell receptors and certain antigens);
  • Co-stimulation by T helper cells.
The extended two-signal model by Lafferty and Cunningham served as a sound basis for the functioning of the HIS and remained unchallenged for quite some time. Later, new observations on how vaccines worked led to questions not answerable by the model. In particular, it was found that adjuvants were needed in combination with vaccines to stimulate immune responses. It was Janeway in 1989 who presented a new refined model of the immune system, the infectious non-self (INS) model (cf. [34]), as shown in Figure 1d. This model suggests that the APCs themselves need to be activated before being able to provide the co-stimulation signal. For this reason, the APCs have their own form of SNS discrimination that is based on the detection of conserved pathogen-associated molecular patterns (PAMPs) (essentially exogenous signals) through pattern recognition receptors (PRRs).
For a long time, it was believed that the critical element in activating immune responses is antigen recognition, that is, the discrimination of entities that originated from their own system (“self”) from those that are foreign (“non-self”). This “self/non-self” view became increasingly challenged by observations that the model could not explain, for example, transplants (no attack against “non-self”) as well as tumors or autoimmunity (both attacks of “self”). Another prominent example is the absence of immune responses to foreign bacteria in the gut or the food we eat [26]. As a consequence, the model of the HIS was continuously refined to be able to explain new findings. However, the core mechanisms remained the discrimination between self and non-self.
This view was significantly changed when Polly Matzinger presented her “danger theory” in 1994 (cf. [25,35]). According to this theory, the immune system reacts to entities causing damage rather than those considered foreign. Unlike the INS model, the danger theory builds upon the suggestion that DCs are natural information fusion entities able to combine cellular signals from both endogenous and exogenous sources [25,36]. The cellular signals are further distinguished based on their origin. As highlighted in Figure 1e, there are signals from distressed or injured cells (necrotic signals) that imply danger. In contrast, cellular signals from cells that died naturally (apoptotic signals) present a somewhat safe situation [7].
However, even today, immunologists are not fully sure how the immune system works in its entirety and which entities and processes are actually involved. So far, the INS model and the danger theory are two of the most hotly debated theories, and their basic principles are accepted by the majority of immunologists [37]. Still, the danger theory entails problems of its own, similar to those of previous immune models. Just as the SNS model faces the question of how to discriminate self from non-self [14], the danger theory faces the difficulty of how to distinguish between danger and non-danger [26].

3. Unique Properties of the Immune System

The human body is unquestionably one of the most complex systems known to humanity. There are three main regulation systems in the human body:
  • The nervous system.
  • The endocrine system.
  • The immune system.
These three systems are integrated into one ultimate information communication network within the human body [38]. However, each regulation system has its specific roles and unique properties. Understanding these unique properties is necessary for building effective and efficient computational models based on mechanisms and processes observed in natural systems.
In the following, we will first provide a brief overview of these three regulation systems in Section 3.1. Then, the multi-layer defense mechanism of the HIS is presented in Section 3.2. Finally, the role of leukocytes and, in particular, the lymphocytes is discussed in Section 3.3.

3.1. Nervous, Endocrine, and Immune Systems

The nervous system is a highly ramified network with a hierarchical order controlled by a central controller (the brain). Information is transported via electrical impulses that can be amplified or blocked by messengers. The nervous system, and particularly the brain, has long served as inspiration for computer scientists (e.g., in artificial neural networks (ANNs)).
On the other hand, the endocrine system is a regulation system purely based on chemical messengers (i.e., hormones; cf. [39]). These chemical messengers are secreted by different source organs (called glands) in the human body. The regulation itself happens with specific feedback loops of the hormones as almost every hormone has a complementary hormone [40]. The endocrine system tries to establish homeostasis (or feedback inhibition) between the chemical messengers by regulating the secretion of the respective complementary hormones. The endocrine system has some interesting properties [41], such as self-organization, synchronization, and cascading effects, that offer inspiration for certain computational problems.
In [41], Sinha and Chaczko compared the basic structure and working principle of the endocrine system with large-scale Internet of Things (IoT) infrastructures. Based on this view, they argue that models derived from the endocrine system offer great potential to solve problems prevalent in such large-scale networks. For this reason, several computational models based on the endocrine system have been proposed in the past, such as the autonomous decentralized system [42,43,44], the digital hormone system used for self-organized robot swarms [45,46,47], the computational model of hormones as first proposed in [48] and extended in [49], the regulation model of hormones [50,51], as well as the artificial hormone system [52,53].
The third regulation system, the immune system, is a widely distributed and inherently parallel network of a significant number of diverse entities. These entities work simultaneously and in cooperation with each other to reach the overall goal, to keep the body healthy [54,55]. It is a decentralized system without a central controlling instance (such as the brain for the nervous system). One of the most significant advantages of the HIS is its vast amount of resources. The immune system of an adult consists of around 10¹² lymphocytes and 10²⁰ soluble antibody molecules with about 5 million different antibody types, with a daily turnover of approximately 2% of these components (cf. [24]).
Additionally, the HIS operates on different levels using various components, such as physical barriers (e.g., skin), chemical barriers (e.g., antimicrobial substances such as sweat and saliva), cellular proteins (e.g., cytokines), and a large number of different cells (e.g., macrophages and DCs). All these components and their interactions build up a highly complex self-organizing system with beneficial properties, such as error tolerance, adaptation, and self-monitoring [56]. Certain parts of the immune system even have learning, memory, and associative capabilities (cf. [57]).

3.2. Innate and Adaptive Immunity

The immune system has an ingenious multi-layer defense mechanism consisting of two distinct yet interrelated immune mechanisms [58]:
  • Innate (non-specific) immunity.
  • Adaptive (specific) immunity.
The combination and interaction of both form versatile and efficient protection for the human body. Both parts of the immune system use many different cells of diverse specialization to protect the host efficiently.

3.2.1. Innate Immunity

The innate immune system [59] provides non-specific protection and defense mechanisms, as well as general immune responses. There are four types of defense barriers in innate immunity, namely (cf. [58]):
  • Anatomic barriers;
  • Physiologic barriers;
  • Endocytic and phagocytic barriers;
  • Inflammatory barriers.
Innate immunity consists of a large number of different cells providing a defense against the general properties of pathogens [60]. Here, the APCs (a kind of leukocyte or, more specifically, monocyte) play an important role, especially the DCs (see Section 4.3). The innate immune system is an essential first line of defense against invading pathogens using generic responses [61]. The innate immune system does not develop memory and, thus, does not offer specific responses [62]. A review of innate immunity and its biological principles and properties can be found in [63].

3.2.2. Adaptive Immunity

The adaptive immune system [64] provides more specific and compelling response mechanisms, as well as the capability to learn from previous occurrences of pathogens (i.e., immune memory [65]). It is sometimes called acquired immunity as the specific responses are developed over the lifetime of the host [56]. The main components of the adaptive immune system are lymphocytes, in particular B and T cells. In contrast to the leukocytes constituting the innate immunity, these cells can evolve over the lifetime of the host by specializing their receptors [66]. Based on these cells and their contribution to adaptive immunity, two primary adaptive immune responses can be distinguished, the humoral response and the cellular response [62,67,68].
The humoral response, or humoral immunity, refers to the interaction of B cells with antigens by producing specific antibodies that detect and eliminate foreign entities. B cells are produced by the bone marrow, where they have to survive a negative selection process before being released into the bloodstream. This negative selection process is part of the SNS theory and ensures that the surviving B cells are self-tolerant; thus, they do not attack native (self) cells. If a B cell matches a particular antigen, it responds by multiplying itself through clonal expansion. In this process, B cells divide into several clones with slightly mutated antibodies to cover a broader spectrum of antigens and increase the chance of an even better antigen match [69]. B cells with a high affinity can evolve into memory B cells capable of identifying the same pathogen much faster in the future (as the activation and stimulation process is shorter for memory B cells [70]). Such an immune response from memory B cells is called immune memory (also referred to as secondary immune response or strong immunity [65]) and provides an essential characteristic of the adaptive immune system, namely the ability to learn through interaction with the environment. Approximately 90% of the B cells die after their response or at the end of their lifespan; the rest remain as memory cells [71].
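The negative selection process described above is the basis of a classical AIS algorithm of the same name. The following toy sketch illustrates the principle only: it assumes a binary encoding, a simple Hamming-style matching rule, and arbitrary parameter values, and is not taken from any of the reviewed works.

```python
import random

def matches(detector, pattern, threshold=6):
    """A detector 'binds' a pattern if they agree in at least
    `threshold` bit positions (an illustrative affinity rule)."""
    return sum(d == p for d, p in zip(detector, pattern)) >= threshold

def generate_detectors(self_set, n_detectors, n_bits=8, seed=42):
    """Negative selection: random candidate detectors are discarded
    if they match any 'self' sample; survivors are self-tolerant."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        candidate = tuple(rng.randint(0, 1) for _ in range(n_bits))
        if not any(matches(candidate, s) for s in self_set):
            detectors.append(candidate)
    return detectors

def is_nonself(sample, detectors):
    """A sample is flagged as non-self if any detector binds it."""
    return any(matches(d, sample) for d in detectors)
```

In analogy to the biological process, any candidate that would react to self is eliminated before deployment, so every surviving detector can only ever flag non-self material.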
The second adaptive immune response is the cellular response. It refers to the behavior of T cells that have two main tasks:
  • The detection of intrusions by T helper cells (Th).
  • The attraction of cytotoxic T cells (Tc) for the disposal of infected cells [68].
To be more precise, the Tc becomes activated on the recognition of infected cells and starts producing molecules that destroy the infected cell.
In addition to Th and Tc, there exists a third type of T cell, the regulatory T cells. These regulatory T cells exist in two different stages: naive or active [71]. After being produced in the bone marrow, they migrate to the thymus, where they undergo a negative/positive selection process similar to that of B cells (but in the thymus instead of the bone marrow). Regulatory T cells that survived the selection process and that have not yet experienced an antigen are called naive T cells. Naive T cells can become activated T cells if they successfully bind to an antigen in combination with co-stimulation from an APC (or a DC, to be precise; see the immune models in Section 2). Thereby, the degree of activation depends on the degree of signaling from the DC. In the case of excessive levels of co-stimulation, the T cells die to prevent overly strong immune responses, a process called activation-induced cell death [72].

3.3. White Blood Cells

Although many cells are involved in immunity, white blood cells build the core of the immune system. These cells are primarily produced and matured in lymphoid organs (e.g., thymus or bone marrow). As depicted in Figure 2, they are categorized into general white blood cells (leukocytes) and specific subtypes of white blood cells (lymphocytes) [9].
While leukocytes form the basis of innate immunity (i.e., monocytes such as macrophages and APC), the adaptive immune responses are primarily performed by lymphocytes (i.e., T and B cells, as well as natural killer cells) [25]. The three most important white blood cells for immunity are:
  • Dendritic cells (DCs) are a particular class of APCs that move in the blood and process information about antigens and dead cells found on their way.
  • T cells are produced by the bone marrow and are responsible for destroying infectious cells.
  • B cells are also produced by the bone marrow and stimulate the production of antibodies.
Due to their way of detecting foreign antigens, the antibodies are often called detectors, especially in the context of AIS.
In addition to the white blood cells, a large number of other cells and molecules are essential for the functioning of the immune system. Thereby, the ligands (or keys) play an important role as they are responsible for activating the cells’ receptors. Like the endocrine system, the immune system also contains regulating molecules, called cytokines. Additionally, chemokines are specialized molecules that stimulate cell movement [9].
Altogether, the immune system shows characteristics also found in other bio-inspired systems. As the cells and their interaction share similar properties with swarm-like systems, the immune system is often considered a swarm system, too [74]. Also, to detect foreign entities, the immune system uses affinity measures that are, in their fundamental principle, similar to the fitness function in genetic algorithms (GAs) [67]. A detailed overview of the (natural) immune system can be found in [58,75].
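Such affinity measures can take many forms; one of the best known in the AIS literature is the r-contiguous-bits matching rule, under which an antibody (detector) and an antigen (data item) bind if they agree in at least r contiguous positions. The sketch below is an illustrative example assuming binary string encodings; the choice of r is arbitrary.

```python
def r_contiguous_match(antibody, antigen, r):
    """Return True if antibody and antigen agree in at least
    r contiguous positions (the r-contiguous-bits affinity rule)."""
    run = 0  # length of the current run of agreeing positions
    for a, b in zip(antibody, antigen):
        run = run + 1 if a == b else 0
        if run >= r:
            return True
    return False
```

Like a GA fitness function, the rule scores how well a candidate (detector) fits its target (antigen), except that the score is thresholded into a binary bind/no-bind decision.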

4. The Danger Theory

The danger theory states that the immune system does not primarily react to foreignness but to circumstances that pose a danger to the host. Therefore, it changes the discrimination of “self from non-self” of the SNS model to a discrimination of “some self from some non-self” depending on the presence of danger to the system (see Section 4.1). This difference in antigen discrimination is shown in Figure 3, where SNS refers to the self/non-self model, INS to the infectious non-self model, and DT to the danger theory. In the figure, a “+” states that the theory reacts to antigens of that kind, while a “–” means that the theory ignores antigens of that kind.
In the danger theory, the danger is represented by the presence of so-called danger signals in the absence of down-regulating safe signals within a specific area (refer to Section 4.2). These necrotic (danger) signals and apoptotic (safe) signals in combination with PAMP are integrated by the dendritic cells to instruct the immune system to respond appropriately. Thus, the dendritic cells are major control mechanisms in immune systems (cf. Section 4.3).

4.1. Basic Concept

While previous immune models often focused on the role of adaptive immunity, Matzinger also stressed the importance of innate immunity [25,35]. From a biological point of view, the innate immune system has three main roles [57]:
  • Defending the host in the early stages of infection.
  • Initiating adaptive immune responses.
  • Determining the actual type of adaptive response through APCs (i.e., DCs).
In the danger theory, signal two is provided by “professional” APCs, the DCs, which provide a vital link between innate and adaptive immunity [37]. Due to their way of collecting and evaluating the information on the current condition of the host, these DCs are sometimes denoted as the crime-scene investigators of the HIS [76]. Therefore, the danger theory suggests that there are two key elements responsible for immunity:
  • The tissue with the signals contained.
  • The alignment of innate and adaptive immunity by DCs.
The signals are discussed in more detail in Section 4.2.
As a result, the danger theory further implies a notable change regarding the control of immune responses. It highlights the role of the tissue for the immune system as it suggests that it is the tissue that controls immune responses and the evolution of the immune system [37,77].
The danger theory was initially hotly discussed within the immunology community and not accepted by all members [78]. However, Matzinger and other advocates of the danger theory found more and more evidence for their claims, as well as observations in nature that cannot be explained by the previously prevalent theories. The danger theory states that the “foreignness” of a pathogen alone is not enough to trigger an immune response and that, on the other hand, “selfness” is no ultimate guarantee of tolerance [25]. As shown by Matzinger in [35,79], changes do happen in the human body over a lifetime, be it of natural cause (e.g., pregnancy) or due to external intervention (e.g., surgeries); thus, the self changes as well. More detailed information on the danger theory from an immunologist’s view can be found in [25].

4.2. Immunological Signals

The danger theory states that the affinity between an antigen and an antibody (“signal one”) is not enough to trigger an immune response [58,80]. In addition, there needs to be a co-stimulation by APCs, such as the DCs (“signal two”; see Figure 1). DCs reside in the tissue and collect antigenic material and contextual information (commonly termed signals). According to the danger theory, it is the correlation of the contextual information (i.e., the signals) that triggers immune responses. Matzinger [35] groups these signals into three main categories (see also [66]):
  • Apoptosis: natural death of cells (the “safe signals”).
  • Necrosis: unnatural death of cells (the “danger signals”).
  • PAMP: biological signatures of potential intrusions (e.g., foreign bacteria).
The danger signals can be further divided into endogenous (generated by the body, such as heat shock proteins, nucleotides, neuromediators, and cytokines) and exogenous (caused by invading organisms) [81]. These necrotic (danger) signals and apoptotic (safe) signals in combination with PAMP signals are integrated by the DCs to instruct the immune system to respond appropriately [66]. For more information on necrosis, apoptosis, and their processes and characteristics, we refer an interested reader to [82].
However, Matzinger admits that the exact nature of the danger signals is unclear, which makes discriminating danger from non-danger difficult [26]. Since the advent of the danger theory in 1994, many signals affecting the DCs have been empirically revealed [81]. As argued by Aickelin and Cayzer [26], one connection to the classical SNS theory is to consider the presence of non-self as a kind of danger signal.

4.3. The Role of Dendritic Cells

The danger theory focuses on the DCs since they can stimulate naive T cells, and thus, initiate primary immune responses [81]. DCs are monocytes (i.e., white blood cells) that were initially identified by Steinman and Cohn [83] and are native to the innate immune system [36]. Due to their function, they can be seen as the body’s own intrusion detection agents [84]. DCs provide a vital link between the innate and the adaptive immune system as they link the initial detection (innate) to the actual effector response (adaptive) [37]. Additionally, DCs are one of the major control mechanisms in immune systems as they coordinate the T cell responses by producing certain pro- or anti-inflammatory cytokines (chemical messengers). Pro-inflammatory cytokines have an activating effect on immune responses, while anti-inflammatory cytokines have a suppressing effect.
DCs are produced by the bone marrow and exist in three states of maturity, each with different functions [63,85]. After being produced, the DCs are in an immature state (denoted as iDC). The iDCs reside in the tissue and have the primary task of collecting cellular debris via ingestion [37,86]. Thereby, they collect antigens and receive the signals mentioned above (i.e., danger, safe, and PAMP). After being exposed to a certain quantity of signals, the iDC becomes activated. Exposure to PAMPs accelerates the process of maturation.
The activated iDC then migrates from the tissue to the lymph nodes, where it becomes either semi-mature (smDC) or mature (mDC). In case the iDC experiences a higher concentration of danger-related signals (i.e., a greater quantity of either PAMP or danger signals), it matures into an mDC. Otherwise, it becomes an smDC.
In the lymphoid tissues, the smDC and mDC interact with naive T and B cells to either initiate (in the case of mDC) or suppress (in the case of smDC) an adaptive immune response. The naive T cells respond by differentiating further into activated T cells (see Section 3.2.2, as well as [71]). This is achieved by the production of small quantities of anti-inflammatory cytokines by the smDC and the production of pro-inflammatory cytokines by the mDC, respectively. In addition, the mDC produces co-stimulatory molecules that have an amplifying effect on both the PAMPs and danger signals in the surrounding area [87].
However, iDCs also have a suppressive effect: when an iDC encounters a T cell, the T cell is deactivated due to the lack of co-stimulatory molecules or inflammatory cytokines [86]. The DCs do not perform their function in isolation, as there are numerous DCs in the tissue. Thus, they form a population-based system offering high error tolerance and robustness through diversity, as well as a low false alarm rate (FAR) [84]. Further information on the DCs’ functioning with a focus on AIS is available in [88].
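To make the maturation process described above concrete, the following Python sketch abstracts an iDC that accumulates signals and decides its maturation state. It is an illustrative abstraction only: the migration threshold and the simple additive signal model are assumptions made for the example, not part of the biological model.

```python
from dataclasses import dataclass, field

@dataclass
class ImmatureDC:
    """Abstracted immature dendritic cell (iDC) residing in the tissue."""
    pamp: float = 0.0
    danger: float = 0.0
    safe: float = 0.0
    antigens: list = field(default_factory=list)

    def expose(self, pamp, danger, safe, antigen=None):
        """Accumulate contextual signals (and optionally sample an antigen)."""
        self.pamp += pamp
        self.danger += danger
        self.safe += safe
        if antigen is not None:
            self.antigens.append(antigen)

    def migrate(self, threshold=10.0):
        """After sufficient signal exposure, migrate to the lymph node and
        maturate: mDC if danger-related signals dominate, smDC otherwise.
        Returns None while the cell is still immature."""
        if self.pamp + self.danger + self.safe < threshold:
            return None
        return "mDC" if self.pamp + self.danger > self.safe else "smDC"
```

A cell exposed mostly to danger-related signals would thus report "mDC" upon migration and initiate a response, while one dominated by safe signals would report "smDC" and suppress it.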

4.4. Impact on Computational Problems

The advances in immunology research made over the last century not only help us to understand the working principle of the HIS and derive appropriate therapies in medicine, but they also provide a great source of inspiration for other scientific disciplines. In the area of computer science and engineering, findings on biological processes have often served as inspiration for novel techniques to solve computational problems (i.e., bio-inspired computing [89]).
Concerning fault detection techniques, the findings of Matzinger’s danger theory in particular have led to a paradigm shift in how faults in a system are detected. While past approaches mostly followed the negative/positive-selection approach to distinguish between normal and faulty system states, more recent approaches are increasingly inspired by the danger theory. In contrast to previous immune theories, the danger theory states that the immune system aims at identifying circumstances that pose a danger to the host rather than identifying everything foreign. In doing so, the HIS involves a multitude of diverse immune entities (i.e., cells) to form a self-organized, distributed, and cooperative defense mechanism in which the single cells perform merely simple tasks.
As a result, the processes described by the danger theory serve as a good inspiration for fault detection systems, especially those with limited resources (cf. [2]). This trend is also visible in the literature review provided in Section 7. The reason is that danger-theory-based approaches can be realized in a highly distributed fashion where the single entities require only simple processing and, thus, are suitable for resource-limited systems. Most importantly, the population-based fault assessment offers smoothing and noise-reduction capabilities that significantly help to lower the false alarm rate, a problem that was prevalent in many previous techniques. Additionally, the basic detection principle of the danger theory, that is, the focus on situations that pose danger to the host, helps to improve the reliability of the system. It concentrates on those faults that endanger the system’s purpose instead of targeting all possible faults equally.

5. Classical AIS Theories and Their Applications

Since the first AIS emerged in the 1990s, much research has been conducted in the field. With growing research interest, the field of AIS became more comprehensive and the areas of application more numerous. Generally, research on AISs can be grouped into three main areas (cf. [90]):
  • Immune modeling.
  • Theoretical AISs.
  • Applied AISs.
Immune modeling is concerned with the biological processes of the HIS and is predominantly covered by immunologists or biologists. Theoretical AISs take inspiration from immune models to develop computational models capable of solving defined problems on a theoretical level. In this context, the mapping from immunological to computational entities remains a problematic task [91]. Applied AISs in particular, however, have gained popularity in recent years as an increasing number of use cases and real-world scenarios arose where AIS can be efficiently applied to solve computational problems [92].
However, over the last two decades, the AIS models have evolved notably. While in the beginning most AISs mimicked adaptive immune response mechanisms only, today more models incorporate processes of both innate and adaptive immunity. For this reason, models including only adaptive immunity are usually referred to as first-generation AISs and those that include both are denoted as second-generation AISs [9]. One primary reason for this paradigm shift was the findings on the data fusion capabilities of DCs. Including DCs in an AIS allows the system to correlate data from multiple noisy sensors, which helps to improve the overall stability of the AIS, especially in the presence of unknown time delays of the signals [9].
AISs have characteristics that make them suitable for optimization or anomaly detection tasks, especially their abilities of self-adaptation, self-learning, self-organization, highly parallel processing, and distributed coordination [7]. Their efficiency can be further improved by auxiliary antigen libraries or concepts from the idiotypic network theory (see Section 5.3). AISs also provide mechanisms for self-regulation by adjusting the lifetime of the cells used and their probability of reproduction [93]. By fine-tuning these parameters, the performance of AISs can be significantly improved [91]. Additionally, these regulatory mechanisms allow the system to adapt to dynamic environments, which in turn is vital as the human body undergoes specific changes over its lifetime [26], which can also be the case for WSNs.
AISs have been shown to perform comparably well on certain benchmark data sets when compared to existing statistical and machine-learning techniques [9]. In some cases, they even presented a more efficient solution than prevalent techniques. Nevertheless, many AIS models have some significant drawbacks that limit their applicability. The most severe ones are their usually high resource consumption (especially for memory) and their ordinarily bad scaling properties [94]. As an example, the authors of [95] compared an AIS-based misbehavior detection with a second instance based on an ANN. They showed that the AIS offers comparable results, in some cases even better than the ANN, but at the cost of resources, especially memory. In their experiments, the AIS-based approach required nearly six times more memory than the ANN approach.
Today, most AIS approaches are derived from one of the following four theories, sometimes called “classical AIS theories” [54]:
  • Negative/positive selection (mainly based on T cells; see Section 5.1)
  • Clonal selection (mainly based on B cells; see Section 5.2)
  • Immune network theories (i.e., idiotypic network theory; see Section 5.3)
  • Danger theory (i.e., dendritic cell-based algorithms; see Section 5.4)
Aside from these common techniques, several other immunology-inspired algorithms and computational tools have been developed, such as humoral immune response systems [62] and the pattern recognition receptor model [96]. Review work for general AIS approaches is given in [17,54,97,98,99] as well as focused on anomaly detection and IDSs in [13,16,18].
AIS can also be combined with other (learning) techniques to build more efficient ensemble/hybrid systems. One common goal is to decrease the FAR, which is usually high in self-organized (unsupervised) approaches. Typical examples are immune genetic algorithms [100,101,102] for optimization problems, which have also been used to lower the FAR of an immunity-based anomaly detection system [103]. A more sophisticated approach was proposed in [104]. This model consists of three evolutionary stages to optimize the overall performance:
  • Gene library evolution [65]
  • Negative selection [105]
  • Clonal selection [106]
For more examples on ensemble/hybrid AIS, we refer to the survey on AIS hybrids presented in [107].

5.1. Negative and Positive Selection

In the HIS, negative selection is a process taking place in the bone marrow (for B cells) or the thymus (for T cells). It uses self/non-self discrimination based on a naive model of central tolerance developed in the 1950s [71] and, together with clonal selection, forms the core concepts of the SNS model. The SNS model assumes that the self is defined in early life, and anything that comes later is considered as non-self [25]. The selection process aims at eliminating antibodies (i.e., lymphocytes) that are reactive to entities of the self space. For this purpose, it checks their affinity based on the degree of binding between, for example, T cell receptors and specific antigens. The antibodies failing the selection process are removed from the population.
There are two basic selection processes, namely positive and negative selection. In positive selection, the antibodies are selected to cover the self space; thus, only those that match the self are kept while the others are removed. In negative selection, on the other hand, the antibodies are selected to match the non-self space. Nevertheless, positive selection has not been found in the selection of T cells [108]. As a result, the majority of immune-inspired approaches use negative selection. Which of these two selection processes better suits a given task, however, depends on the size of self and non-self, or their ratio, respectively.
Methods based on negative/positive selection are typically used for classification and pattern recognition problems (e.g., anomaly detection [109]). In anomaly-based IDSs, the pathogens represent the potential attacks, and the antibodies are a means to identify those attacks [110].
Inspired by the HIS’ negative selection processes, the negative selection algorithm (NSA) was proposed in 1994 in [105]. The crucial part of the NSA is to find a suitable mapping from the biological entities (e.g., antigens, antibodies, pathogens) to the computational problem. In the area of AISs, the antibodies are usually called detectors, as their job is to detect certain circumstances (i.e., the presence of non-self). The detectors are often encoded as feature vectors representing antigenic patterns able to detect changes in behavior [111]. Often, the problem space is represented by an n-dimensional space, and the detectors are hyperspheres that use a matching rule based on an individual membership or distance function (e.g., Euclidean distance). In some NSA-based approaches, immune memory is introduced by promoting detectors that produce many alarms to memory cells with a lower activation threshold [13].
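The hypersphere-detector scheme described above can be illustrated with a minimal real-valued NSA sketch in Python. It is a simplified illustration, not the original algorithm from [105]: the unit-hypercube problem space, fixed radii, and purely random candidate generation are assumptions made for the example.

```python
import math
import random

def euclidean(a, b):
    """Euclidean distance, the matching function for hyperspherical detectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def generate_detectors(self_samples, n_detectors, self_radius, dim=2, seed=42):
    """Censoring phase: random candidate detectors (points in [0,1]^dim)
    are kept only if they do not match any sample of the self space."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        candidate = [rng.random() for _ in range(dim)]
        if all(euclidean(candidate, s) > self_radius for s in self_samples):
            detectors.append(candidate)
    return detectors

def classify(sample, detectors, detector_radius):
    """Monitoring phase: a sample matched by any detector is flagged non-self."""
    return any(euclidean(sample, d) <= detector_radius for d in detectors)
```

Self samples clustered in one region of the space thus repel the detectors, which end up covering (part of) the non-self space; how well they cover it is exactly the coverage problem discussed next.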
The NSA has two important components: the detectors and the matching rule. The problem of how to generate detectors to minimize their number while maximizing the coverage of the non-self space is one of the major fields of research for NSA [112,113]. Usually, the number of detectors required to cover a certain self space grows exponentially with its size [11]. Also, the shape of the self space and the detectors has been shown to have a significant impact on the number of detectors needed [95].
Related work on the improvement of the detectors focuses on their representation (e.g., binary or real-valued; see [114]), their shape (e.g., hyperspheres or hyperellipsoids; see [115]), the parameters involved in their creation [116], the influence of variable radius [117] as well as the effects of growing or shrinking the detectors surface [118]. An extensive analysis of the effects of different detectors used in NSA as well as the development of improved detector generation algorithms is summarized in [119]. Another way to efficiently cover the entire non-self space is to combine detectors of different types (with their respective matching rules) to reduce the number of holes [11].
Directly intertwined with detectors are the affinity measures (or matching rules) applied. As presented in [54], the metric to measure the affinity (similarity) depends on the choice of vector attributes as it determines the detectors’ shape space type. In [120], different detector shape spaces and suitable affinity metrics are analyzed, such as real-valued shape spaces (with Euclidean distance or Manhattan distance), Hamming shape spaces (with Hamming distance or r-continuous bit rule), and symbolic shape spaces. Also, alternative representations have been proposed such as feature-feature relations [121], or dictionary-based basis decomposition methods [122]. However, choosing an expressive metric is a non-trivial task in most cases.
As stated in [123], most works so far used an antigen representation based on binary feature vectors and applied binary matching rules (e.g., r-contiguous matching [105], r-chunk matching [114], landscape-affinity matching [124], or Hamming distance matching rules [124,125] and variations thereof, such as the Rogers and Tanimoto (R&T) matching rule [124]). The r-contiguous matching rule in particular has found application in many NSA-based approaches [11,105,114]. The r-contiguous rule matches two strings if they have an identical sequence of r bits.
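The r-contiguous rule itself is straightforward to implement. The following Python sketch matches two equal-length bit strings if they agree on at least r contiguous positions:

```python
def r_contiguous_match(detector: str, antigen: str, r: int) -> bool:
    """Return True if the two equal-length bit strings agree on at least
    r contiguous positions (the r-contiguous bits matching rule)."""
    assert len(detector) == len(antigen)
    run = 0  # length of the current run of agreeing positions
    for d_bit, a_bit in zip(detector, antigen):
        run = run + 1 if d_bit == a_bit else 0
        if run >= r:
            return True
    return False
```

For example, "10110" and "10100" agree on their first three positions and therefore match for r = 3, but not for r = 4.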
Although approaches based on negative selection had a promising start, they have been found to have severe problems regarding scalability and coverage [14,16]. As pointed out in [126], the required amount of detectors to sufficiently cover the non-self space becomes unmanageable for most problems. The authors of [119] counter this claim and argue that the problem is not with the algorithm itself, but with unsuitable (binary) representations of the problem space (see also [127]).
In addition, there are two common problems with the traditional SNS model applied to AISs: a high false negative rate (FNR) when using negative selection (leading to missed anomalies) and a high false positive rate (FPR) when applying positive selection (resulting in a high FAR; cf. [26]). Directly connected with these issues is the problem of a dynamic or changing self, as the SNS model assumes a static self that does not change over the lifetime. One way to cope with changing selves is to balance the life cycle of immune cells, enabling an adaptive coverage of the non-self space [65].
Possible solutions to these problems are hybrid approaches. One way to overcome the difficulties with detector coverage is to apply evolutionary algorithms to continuously evolve the detectors, such as GAs [128] or clonal optimization [129]. A prominent example is the evolutionary negative selection algorithm, a hybrid evolutionary immune algorithm that was extended with a niching technique to prevent the algorithm from ending up in a local optimum [104]. Also, the usage of gene libraries to avoid random detectors at initialization is a promising approach [13]. These gene libraries guide the generation process of antibodies and can improve the overall efficiency [130].
Another approach dealing with the problem of crisp transitions between the self and non-self space is the combination of negative selection with fuzzy rules [131,132]. Such fuzzy-based NSAs have shown favorable characteristics when applied to immunity-based IDSs [131]. The efficiency of a hybrid AIS combining positive and negative selection has also been analyzed for a network IDS [133,134]. Fuzzy rules in combination with Q-learning were used in the cooperative fuzzy artificial immune system proposed in [135], which showed superior properties in comparison with other learning techniques (i.e., C4.5 decision tree, artificial immune recognition system (AIRS), clonal selection algorithm (CLONALG), fuzzy logic controller, and fuzzy Q-learning).
Two of the more complex hybrid approaches are Bayesian artificial immune systems [136,137] and the complex artificial immune system [138]. The former is based on Bayesian networks and is intended for solving hard optimization problems. On the other hand, the complex artificial immune system is a layered model that takes antigens as inputs and proposes antibodies as output. It is best suited for pattern detection problems as it can deal with several transformations such as scaling or rotation of patterns.
Nevertheless, NSAs have been applied to many problems, including anomaly detection [139], fault detection [140], and function optimization [141]. An approach to apply negative selection to an active defense IDS is presented in [142]. Similarly, an immunity-based IDS with a multi-agent architecture is shown in [143]. A survey on NSA applications can be found in [144].

5.2. Clonal Selection

Clonal selection theory [28,106] is based on the functions of lymphocytes in immune systems, especially the maturation phase of B cells. The foundation of this theory was introduced by Burnet in 1957 as an explanation for the observed diversity of antibodies during an immune response [27]. The clonal selection theory suggests that lymphocytes activated by antigen-binding trigger a clonal expansion to evolve antibodies with a better affinity to the present antigens. During this clonal expansion, the lymphocytes undergo an affinity maturation where they are subject to somatic hypermutation (a mutation of the cell’s antigen-binding coding sequences) and a subsequent selection mechanism [90]. In hypermutation, the degree of mutation depends on the affinity measure, where a poor affinity value results in a higher degree of mutation. As a consequence, the generality and coverage of the detection are increased through the process of hypermutation [13].
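The affinity-dependent hypermutation described above can be sketched as follows. This is a simplified, CLONALG-style illustration; the Gaussian mutation model and an affinity normalized to (0, 1] are assumptions made for the example.

```python
import random

def clonal_expansion(antibody, affinity, n_clones=5, seed=1):
    """Clone a real-valued antibody and hypermutate each clone. The mutation
    magnitude is inversely related to affinity: a poor (low) affinity yields
    larger mutation steps, as in CLONALG-style affinity maturation."""
    rng = random.Random(seed)
    mutation_scale = 1.0 - affinity  # low affinity -> large mutations
    return [[g + rng.gauss(0.0, mutation_scale + 1e-6) for g in antibody]
            for _ in range(n_clones)]

def select_best(clones, affinity_fn):
    """Selection step: keep the clone with the highest affinity."""
    return max(clones, key=affinity_fn)
```

With an affinity function measuring closeness to a target antigen, repeated expansion and selection drives the antibody population toward that antigen, which is the optimization behavior exploited by CLONALG-like algorithms.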
The clonal selection and the algorithms derived from it, such as the CLONALG [145], are commonly applied to optimization problems and clustering problems (such as pattern recognition) [17]. Additionally, it is often used in conjunction with NSA or an affinity calculator [65]. As the task of affinity evaluation can be partitioned, a parallel version of CLONALG was proposed in [146].
As summarized in [147], the original CLONALG has a relatively high FAR and cannot cope with dynamic environments. It is impracticable for dense environments, making it unsuitable for WSN applications. However, it can be deployed in a highly distributed manner and offers an efficient detection rate. It allows the development of memory detectors that help to reduce the response time, especially when combined with negative selection. For this reason, an improvement of the original CLONALG was introduced in [148]. Another algorithm based on the CLONALG with influences from artificial immune network (AIN) [149] (see Section 5.3) is the artificial immune recognition system (AIRS) [150,151], one of the first AIS-based supervised learning algorithms. In [146], a version of AIRS is presented in which the affinity evaluation is parallelized.
Although clonal selection approaches primarily deal with optimization problems, several attempts to apply them to anomaly or intrusion detection have been proposed (cf. [152]).

5.3. Artificial Immune Networks

Artificial immune networks (AINs) are a class of immune-inspired algorithms that are based on the idiotypic network theory proposed by Jerne [31]. They can be seen as an extension of the clonal selection with the interaction between the antibodies and antigens, or B cells respectively [9]. The AIN model was first proposed in [125] followed by the first AIN algorithm in [153] and an improved version in [154]. Today, one of the most common AIN-based algorithms is aiNet [155] and its variations [156].
Similar to the clonal selection, AIN-based concepts are usually used for optimization and clustering problems, as well as for data visualization and control, where they share properties with ANNs [157].

5.4. Danger-Theory-Based Approaches

The unique role of APCs, and especially the DCs, in (innate) immunity has been well known since Lafferty and Cunningham’s extended two-signal model from 1975 [33]. DCs are one of the most important immune response regulation mechanisms. Their importance for the immune system became even more evident with the advent of the danger theory in 1994 [35]. Since then, several computational approaches based on the danger theory, or on the DCs’ functionality in general, have been proposed.
A first in-depth discussion of the potential of the danger theory for AISs was presented in [56]. The authors stressed the natural anomaly detection capabilities of DCs and their possible applications in computing systems. In this context, a low FPR in combination with a high true positive rate (TPR) is an especially desirable property for anomaly detection techniques [66]. The anomaly detection is performed by the DCs by correlating the collected antigens with the fused contextual signals. The signals must be considered in combination, as the analysis of particular signals in isolation is insufficient to indicate anomalies [158] or to produce a classification [12]. Additionally, the danger theory provides a way of grounding the response by linking it directly to the source of the abnormality [56].
Danger-theory-based approaches have shown good anomaly detection capabilities while using minimal resources [56]. In contrast to other immune-inspired techniques, the danger theory bases its detection on the presence of danger to the host, represented by so-called danger signals, in combination with an absence of down-regulating safe signals [84]. Thus, danger-theory-based approaches use pre-defined signals to derive the system’s context and react to “dangerous” states rather than all kinds of deviations. These signals are collected over time and in different places, allowing the system to leverage spatio-temporal correlation.
Over the years, the danger theory has inspired the development of several AISs. Especially the unique role of the DCs has paved the way for several novel algorithms such as the toll-like receptors (TLR) algorithm [159] and the conserved self pattern recognition algorithm (CSPRA) [96].
The TLR algorithm [159] models the interaction of DC and T cell populations. It uses binary signals (i.e., present and not-present) to stimulate immune responses in a way similar to PAMP signals. For more information on the TLR algorithm and the detailed steps involved, see [160].
Another AIS model influenced by the danger theory is the CSPRA [96]. It allows detecting anomalies by replicating the negative selection of T cells in combination with the self-pattern recognition of APCs. It adds the pattern recognition function of the APCs, while the negative selection part is naturally inherited from the PRR model.
Nevertheless, the most common danger-theory-inspired algorithm is the dendritic cell algorithm (DCA) [12,37,66,76,84,88,110,158,161,162], originally proposed in [37] as part of the so-called “Danger Project” [56]. The DCA is suitable for use in resource-constrained systems and can perform context-aware anomaly detection. Both are properties desirable for fault detection approaches in WSNs. For this reason, an introduction to the DCA, its working principle, and its further developments is presented in Section 6.

6. The Dendritic Cell Algorithm

The dendritic cell algorithm (DCA) was one of the first algorithms that used the functioning of dendritic cells as suggested by the danger theory for solving computational problems. Its initial version (also called “classical DCA”) was introduced by Julie Greensmith in 2005 [37]. The DCA is based on the DCs’ ability to combine multiple signals to assess the current context of their environment. In contrast to other AISs, it relies on the correlation of information from the population of DCs rather than pattern-matching based on similarity metrics [66]. Further differences to other AIS algorithms are the combination of multiple signals from diverse sources, as well as the correlation of signals with antigens in a temporal and distributed manner to form a context-aware anomaly detection system [12].
To confirm the algorithm’s basic working principle, it was initially used to classify data provided by the UCI Wisconsin breast cancer data set with signals derived from the data attributes. The original intention for the development of the DCA was its use in an immune-inspired IDS where it has then been applied for the detection of port scans and the detection of botnets in computer networks (cf. [76]), as well as for attack detection in an Open Platform Communications Unified Architecture (OPC UA) framework [163].

6.1. Working Principle

The DCA describes an abstract model of the functioning of dendritic cells based on Matzinger’s danger theory [25]. For this purpose, it uses a population of abstracted dendritic cells, each with a collection of antigens the cell encountered during its life, a finite lifetime with a pre-defined threshold, and a contextual value depending on the concentration of the input signals as described below. As depicted in Figure 4, the original DCA consists of three main stages [12]:
  • Initialization (setting of various parameters);
  • Cell update (event-driven update of variables);
  • Data aggregation.

6.1.1. Cell Update

Until the lifetime of a cell is exceeded (i.e., during the update stage), each cell iteratively performs three functions:
  • The sampling of antigens;
  • The update of the input signals;
  • The calculation of the cell’s interim output signals.
The core mechanism of the cell update stage is the collection of antigens and signals over the DCs’ lifetime. Four types of input signals are combined to acquire contextual information on the status of the target system (cf. [37]). They are analogous to the natural signals observed in the HIS [35]:
  • PAMP (P) — signals that are known to be pathogenic.
  • Safe (S) — signals that are known to be normal.
  • Danger (D) — signals that indicate changes in behavior.
  • Inflammatory (I) — signals that amplify the other signals.
Based on these input signals, the DCA calculates three intermediate output values:
  • Co-stimulatory molecule (CSM) value: expresses the cell’s maturation status.
  • Semi-mature value: response to a safe environment.
  • Mature value: response to a dangerous environment.
The correlation of input-to-output signals is shown in Figure 5. In this illustration, the thickness of the lines expresses the transforming weights.
In biology, PAMP are occurrences known to be not produced by the host and, hence, a clear sign of danger [84]. In the DCA, they lead to an increase in the CSM and mature output signals, resulting in an earlier maturation with an anomalous context (i.e., mDC). The CSM expresses the maturation status of the cell, that is, whether the cell is ready for antigen presentation [84]. Danger signals are indicators of possible anomalies and influence the CSM and mature output signals, but, as can be seen in Figure 5, to a much lesser extent than the PAMP signals [84]. On the other hand, safe signals suppress the production of the mature output signal (negative weight) and cause an increase in the semi-mature output value. Still, they contribute to the DC’s maturation (i.e., increase of the CSM value).
The intermediate output signals are derived from the input signals according to the equations presented in [37]:

\[ C_{csm} = \Big( 2 \sum_{i=0}^{I} P_i + 1 \sum_{i=0}^{I} D_i + 2 \sum_{i=0}^{I} S_i \Big) \cdot (1 + I_C) \]

\[ C_{semimature} = \Big( 0 \sum_{i=0}^{I} P_i + 0 \sum_{i=0}^{I} D_i + 3 \sum_{i=0}^{I} S_i \Big) \cdot (1 + I_C) \]

\[ C_{mature} = \Big( 2 \sum_{i=0}^{I} P_i + 1 \sum_{i=0}^{I} D_i + (-3) \sum_{i=0}^{I} S_i \Big) \cdot (1 + I_C) \]

where \( C_{csm} \), \( C_{semimature} \), and \( C_{mature} \) are the respective intermediate output signals, \( P_i \) are the PAMP signals, \( D_i \) the danger signals, \( S_i \) the safe signals, and \( I_C \) the inflammatory cytokines. The weights of the individual terms (the numeric factors) are based on the suggestions in [158].
One effect present in these equations, but not shown in Figure 5, is the inflammatory cytokines ( I C ) expressing an already ongoing infection. These signals have an amplifying effect on the other three input signals (i.e., PAMP, danger, and safe).
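As a minimal sketch (not a reference implementation), the signal processing of the classical DCA described above can be written as follows; the weight values follow the equations above, while all function and variable names are our own:

```python
# Weights for the three interim outputs (cf. the equations above);
# one row per output signal, one entry per input signal class.
WEIGHTS = {
    "csm":         {"P": 2, "D": 1, "S": 2},
    "semi_mature": {"P": 0, "D": 0, "S": 3},
    "mature":      {"P": 2, "D": 1, "S": -3},
}

def interim_outputs(pamp, danger, safe, inflammation=0.0):
    """Combine the summed input signals into the three interim outputs;
    the inflammatory cytokine concentration amplifies all terms."""
    sums = {"P": sum(pamp), "D": sum(danger), "S": sum(safe)}
    return {
        name: (w["P"] * sums["P"] + w["D"] * sums["D"] + w["S"] * sums["S"])
              * (1.0 + inflammation)
        for name, w in WEIGHTS.items()
    }
```

For instance, with a dominating safe signal the semi-mature output exceeds the mature one, while a non-zero inflammation value scales all three outputs equally.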

6.1.2. Data Aggregation

As shown in Figure 4, when a dendritic cell reaches the end of its life (i.e., its CSM value exceeds a defined threshold), its interim output signal concentrations are assessed to define its contextual status (i.e., semi-mature or mature). Based on this information, the accumulated antigens are classified based on whether more dendritic cells experienced this antigen in a normal or an anomalous context (i.e., binary classification). In the case of a dominating semi-mature signal, the group of antigens is assigned a “normal” context; otherwise, it is assigned an “anomalous” context. As opposed to most other immune-inspired algorithms, the DCA uses the collected antigens merely for labeling and tracking of data rather than for detection purposes.
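The vote counting described above can be sketched as follows, using the fraction of anomalous presentations per antigen (often called the mature context antigen value, MCAV, in the DCA literature); the data layout and names are illustrative, not taken from a specific implementation:

```python
from collections import defaultdict

def classify_antigens(presentations, threshold=0.5):
    """Turn per-cell context votes into a binary label per antigen type.

    presentations: iterable of (antigen_id, anomalous_context) pairs,
    one per antigen copy presented by an expired dendritic cell.
    An antigen is labeled "anomalous" if the fraction of anomalous
    (mature) presentations exceeds the threshold, and "normal" otherwise.
    """
    votes = defaultdict(lambda: [0, 0])   # antigen -> [anomalous, total]
    for antigen, anomalous in presentations:
        votes[antigen][0] += int(anomalous)
        votes[antigen][1] += 1
    return {antigen: ("anomalous" if anom / total > threshold else "normal")
            for antigen, (anom, total) in votes.items()}
```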

6.1.3. Algorithmic Properties

The DCA was initially designed as an offline anomaly detection algorithm to be applied to network intrusion detection. Because it replicates the DCs’ functioning, it shows similarities with certain filtering techniques. In addition, the DCA has a lower computational complexity (in comparison with other machine-learning techniques) and does not require extensive training periods [76]. For this reason, it has also shown preliminary success in resource-constrained applications, such as sensor networks and mobile robotics [84].
Since the lifespan of the individual DC instances is limited and influenced by the environment, the DCA forms a filter-based correlation algorithm with a time-window effect that reduces false-positive errors [88]. Initial experiments with the DCA have shown a high accuracy [66] but, depending on the application, also a comparably high FAR (cf. [135]). Additionally, the initial DCA does not involve any learning mechanisms regarding the selection, mapping, and weighting of the signals used, making manual tuning and preparation necessary [162]. Therefore, there is great potential for future improvements regarding the signal sources and their respective mapping.

6.2. Variants and Further Developments

The classical DCA yielded promising results but contained stochastic elements and required the fine-tuning of more than ten parameters, which made it challenging to apply. Consequently, its foundation was analyzed theoretically and some simplifications were introduced, based on which the deterministic dendritic cell algorithm (dDCA) was proposed in [76]. The main changes concerned:
  • The lifetime of the dendritic cells.
  • The way antigens are sampled and stored.
  • The processing of the input signals.
Regarding the latter, the calculation of the interim output signals was reduced to a single signal expressing the maturation (lifetime) status of the cell (i.e., the co-stimulatory signal) and a second one keeping track of the experienced system context (i.e., the context value). In the dDCA, the co-stimulatory signal is calculated as

\[ csm = S + D \]

and the context value is expressed as

\[ k = D - 2S \]

where \( D \) refers to the sum of the danger signals and \( S \) to the sum of the safe signals. The theoretical analysis justifying the reduction to these two interim signals is provided in [164]. Both use only the danger and safe signals; thus, the special roles of the PAMP and inflammatory processes were neglected. As a consequence, the parameters of the dDCA were reduced to:
  • The input signals (danger and safe).
  • The dendritic cell population size.
  • The lifetime of the single cells.
While the input signals determine the detection capabilities of the dDCA, the population size and the lifetime of the dendritic cells influence the smoothing and noise-reduction properties of the algorithm, both responsible for decreasing the false-positive rate (cf. [162]).
The classical DCA and the dDCA were designed for the (binary) classification of offline data; all data must therefore already be available when the algorithm is applied. However, many anomaly detection systems require runtime (or even real-time) detection capabilities. A first approach to transforming the DCA into a runtime detection algorithm by utilizing segmentation techniques is discussed in [165]. To avoid the need for segmentation, the authors of [166] proposed the minimized dDCA (min-dDCA). Their min-dDCA replaced the usual population-sampling strategy with a one-to-one correlation between signals and antigens. Most importantly, they reduced the population size to a single dendritic cell with a lifetime of one iteration; thus, the dendritic cell assigns a context to the present antigen in each iteration. However, the runtime processing comes at the cost of losing the result smoothing and noise reduction.
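A minimal sketch of the dDCA update rules discussed above; the class and attribute names are our own, and the single-cell reset is a simplification of the usual population handling:

```python
class DendriticCell:
    """One cell of the dDCA: accumulates csm = S + D and k = D - 2S
    over its lifetime, then reports the experienced context."""

    def __init__(self, lifetime):
        self.lifetime = lifetime
        self.csm = 0.0   # co-stimulatory (maturation) signal
        self.k = 0.0     # context value

    def update(self, danger, safe):
        """Feed one (danger, safe) pair; return +1 (anomalous) or -1
        (normal) when the cell matures, None while it is still alive."""
        self.csm += safe + danger
        self.k += danger - 2.0 * safe
        if self.csm >= self.lifetime:
            context = 1 if self.k > 0 else -1
            self.csm, self.k = 0.0, 0.0   # replaced by a fresh cell
            return context
        return None

# With a population of one cell and a lifetime of a single iteration,
# every update immediately yields a context: the min-dDCA's one-to-one
# correlation between signals and antigens, without any smoothing.
```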

7. Literature Review on Immune-Inspired Approaches

In this section, we provide a review of immune-inspired methods for fault detection. We searched the publication databases IEEE Xplore, ACM Digital Library, ScienceDirect, Springer Link, and Web of Science for the years 2002–2022 using the following search string (items are AND-connected):
  • “AIS” OR “immune” OR “immunity”
  • “anomaly” OR “abnormal” OR “fault”
  • “wireless sensor network” OR “WSN” OR “sensor node”
To give our review a broader scope, we also included related works that target computer networks, IoT applications, cyber-physical systems (CPSs), and ad hoc networks. The papers found were first filtered by reviewing their titles and abstracts, followed by a manual inspection of the content of the remaining papers. A summary of the key characteristics of the final set of relevant papers is provided in Table 1. For an extensive summary of AIS applications for connected devices in general (i.e., IoT devices), we refer the interested reader to the systematic review provided in [167].
The majority of AIS research has focused either on negative selection or the danger theory [13,90,168]. Especially in the field of WSNs, there is a noticeable trend towards danger-theory-based approaches. The main reason is the scaling problems of the NSA, which become even worse when it is applied to real network traffic [126]. Secondly, the danger theory’s distributed and simple concept suits most WSN applications [147]. In the following, we give a brief overview of the application of immune-inspired techniques to computational problems in general and to their use in WSNs in particular.

7.1. Anomaly Detection

Given the HIS’s aim of keeping the host healthy by eliminating threats to its proper functioning, several researchers have argued that the HIS forms a natural anomaly detection system [13,16,17]. It can detect pathogens without prior knowledge of their structure [16] and offers a very low FPR as well as FNR [13], making it a prime example of a (distributed) anomaly detection system.
A combination of negative and clonal selection for network anomaly detection in WSNs is proposed in [91] and an extension of it in [15]. The authors define antigens as random low-level bit patterns and, as in the HIS, let the immunity-inspired mechanisms take care of their evolution.
The ability of the immune system to cope with a dynamic environment is leveraged in [169]. In this work, the authors developed an anomaly detection approach inspired by the clonal selection found in the HIS. Aside from a comparably high accuracy, their approach was shown to be able to cope with a slowly changing environment without triggering false alarms.
In general, AIS-inspired anomaly detection has found applications in a great number of fields [17], such as virus detection [170], intelligent spam filtering [171], credit card fraud detection [172], and various other computer-security-related topics [18,124].
Table 1. Overview of Immune-inspired Approaches.
Columns: Scope (data anomaly / network intrusion / fault diagnosis); Locus (centralized / distributed); Immune Concept (negative selection / clonal selection / immune network / danger theory); Target System (sensor networks / computer networks / other); Adaptability; Learning; Notes. Entries are marked ● considered, ◐ partly considered, or ○ not considered.
  • Harmer et al. [124], 2002
  • Sarafijanović and Le Boudec [15], 2005: initial concept confirmed by simulations
  • Boukerche et al. [173], 2007
  • Drozda et al. [174], 2007: apply a random-generate-and-test process
  • Powers and He [175], 2008: negative selection with GA
  • Liu et al. [176], 2008: concept simulated with TOSSIM
  • Yang et al. [177], 2010
  • Greensmith et al. [66], 2010: summary of seminal works on the DCA
  • Bo Chen [111], 2010: applied to structural health monitoring
  • Laurentys et al. [178], 2011
  • Ou et al. [168], 2013: utilizes an adapted DCA
  • Shamshirband et al. [135], 2014
  • Salvato et al. [169], 2015
  • Xiao et al. [179], 2015
  • Rizwan et al. [180], 2015: applies a form of artificial vaccination
  • Cui et al. [181], 2015: DCA-based fault diagnosis
  • Mohapatra and Khilar [182], 2017
  • Sun et al. [183], 2018: offline and computation-intense training
  • Alaparthy and Morgera [73], 2018
  • Li and Cai [184], 2018
  • Alizadeh et al. [185], 2018: DCA-based fault diagnosis
  • Akram and Raza [186], 2018: DCA-based fault diagnosis
  • Aldhaheri et al. [187], 2020: DCA-based IDS
  • Bejoy et al. [188], 2022

7.2. Intrusion Detection Systems (IDS)

The application of AISs to network anomaly detection as part of an IDS has drawn much attention from the research community [54,105,173,189,190]. In this context, the expected behavior is usually considered the self space, and any deviation from it counts as non-self [191]. To increase efficiency while reducing the FAR, hybrid approaches can be beneficial (cf. [175]).
The majority of previous works on immune-inspired IDSs target computer networks [192,193], IoT devices [187], or CPSs [188]. In addition, several works apply immune principles to IDSs for WSNs, such as the negative-selection-based IDS named WSN-NSA [183], the multi-level IDS for WSNs [73], or Co-FAIS [135], a danger-theory-based IDS that utilizes fuzzified network traffic in WSNs. While early works primarily used negative selection as their underlying detection strategy, a shift towards danger-theory-based concepts is noticeable, as discussed above.
For applying AISs to WSNs, the mapping of entities of immunity to those of the WSN is an especially crucial task. For network-based approaches, often the antigen is derived from information extracted from network packets and stored in feature vectors [168]. On the other hand, host-based systems often use operating system (OS)-related information, such as system calls, to derive the antigens [66]. Examples of immune-inspired IDS applied to WSNs are given in [174,176].

7.3. Fault Detection

Concerning the use of AISs for fault detection, immune-inspired approaches can also detect internal deviations rather than focusing on attacks from the outside (similar to the HIS). In this context, several fault diagnosis systems inspired by the HIS have been proposed [194]. As with IDSs, such systems often assume fault-free system behavior in the early stages [195]. However, many of these approaches suffer from a high FPR [178].
Based on immune models, a maintenance architecture able to detect faulty behavior has been proposed in [196]. Another network fault diagnosis approach based on AIS is presented in [177]. Specialized systems to detect hardware faults are introduced in [177], as well as systems leveraging co-stimulation in [5,197]. In [184], a danger-theory-based fault diagnosis is applied to identify abnormalities in the energy consumption patterns of monitored equipment in a CPS.
Targeting WSNs, a danger-theory-based data-cleaning concept for environmental monitoring is presented in [179], which considers missing data, faults, and systematic errors in the provided sensor measurements. The fault detection technique in [198] is tailored to WSNs and consists of a linear-vector-quantization-based training phase and a subsequent AIS-based diagnosis mode. The fault diagnosis algorithm proposed in [182] uses a clonal-selection-inspired approach to identify hard- and soft-faulty sensor nodes in a WSN. Additionally, some immune-inspired approaches have been proposed for structural health monitoring (SHM) with WSNs [111,199].
However, as argued in [94], an efficient fault detection system could combine an AIS with an artificial endocrine system: the AIS is suitable for detecting low-level faults that can be corrected locally, while the artificial endocrine system is better suited to recognizing chronic faults.
An overview of detection approaches based on negative selection, clonal selection, and immune networks is available in [180,200]. For a general overview of fault detection strategies and approaches, we refer to the survey on fault detection in WSNs given in [22,23].

7.4. DCA-Based Fault Detection

In the following, we provide an overview of DCA-based fault detection approaches, which extends the review of DCA-based methods presented in [201].
One of the first works to use the DCA for fault detection was presented in [181]. The authors applied the principles of the DCA to a fault diagnosis system for rotating machinery in industrial facilities. Their input signals focused on the vibration patterns acquired from vibration sensors; five signals derived from the vibration data, such as the kurtosis, were considered. The authors claimed that their approach achieved an overall diagnostic accuracy of over 93%. However, they gave no details on their implementation and signal combination.
In [185], a DCA-based fault detection system for sensor faults in wind turbines is proposed. The approach uses redundant sensor measurements to acquire the input signals for the DCA-based fault detection. In addition, the authors compared their approach with an NSA-based implementation. The results show that both immune-inspired techniques offer a similarly good fault detection rate, but the NSA suffered from a higher false alarm rate.
So far, the only work that incorporates node-level information in an immune-inspired fault detection approach is presented in [186], where it was applied to a robotic system. The authors defined a set of so-called health indicators that serve as input for the DCA. These health indicators are derived from operational characteristics at the node level, such as energy consumption, battery level, component temperature readings, and task completion status. All proposed health indicators are calculated as the difference between two consecutive measurements. The authors present an extensive analysis of their approach, which resulted in an overall fault detection rate of 98% with a false alarm rate of only 0.128%.
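To illustrate the idea of difference-based health indicators (with hypothetical measurement names, not those used in [186]), such indicators could be computed as:

```python
def health_indicators(previous, current):
    """Compute difference-based health indicators: one value per
    monitored operational characteristic, defined as the change
    between two consecutive measurements of that characteristic."""
    return {name: current[name] - previous[name] for name in previous}

# Example with hypothetical node-level readings (battery voltage in V,
# component temperature in degrees Celsius):
# health_indicators({"battery_v": 3.7, "mcu_temp": 31.0},
#                   {"battery_v": 3.6, "mcu_temp": 35.5})
```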

8. Open Problems and Research Directions

Most AISs and immune-inspired approaches are derived from one of the four “classical AIS theories”. Concerning anomaly or fault detection, primarily approaches based on negative selection (self/non-self discrimination) or techniques based on the functioning of dendritic cells (contextual information fusion) have been proposed. While negative selection techniques dominated the early stages of AIS-based detection systems, an increasing number of dendritic-cell-based algorithms have been proposed over the years (cf. Table 1). The reason is the usually high memory consumption and comparably high false-positive rate of most negative selection approaches; both disqualify them from meaningful use in resource-constrained systems such as WSNs [147].
Aside from the resource requirements, there are several challenges and open problems concerning the use of immune mechanisms for fault detection in WSNs. In the following, we will discuss these issues and the corresponding future research directions toward effective and efficient sensor node fault detection.

8.1. Entity Mapping

One of the most challenging tasks in the adoption and adaptation of immune principles to solve computational problems is a suitable mapping of the biological entities to computational counterparts. In this context, several researchers have tried to recreate an AIS including entities similar to those found in the HIS. For example, the DCA uses a population of abstract dendritic cells. Similarly, the IDS presented in [202] uses a virtual thymus inspired by its biological counterpart. Some researchers even argue that the basic structure of WSNs shows a certain similarity to the biological entities involved in human immunity.
However, an adequate mapping is not always easy to find and is sometimes close to impossible. Additionally, the general characteristics of the biological systems in question must be taken into account when developing computational models of certain immune mechanisms. As discussed in Section 3.1 and Section 3.3, the HIS involves a vast number of different cells to perform its tasks distributively. Consequently, in the HIS, quantity often has more effect on a process’s performance than quality.
In computing systems, it is usually the other way around. Most technical systems simply do not have enough components to even remotely replicate the processes of the biological immune system. Although there are WSNs that incorporate thousands of sensor nodes, these numbers are no match for the number of cells cooperatively providing immunity. Consequently, researchers have to develop suitable abstractions of the underlying processes or come up with creative solutions to overcome the limitations of computing systems.

8.2. Feature Selection

In most immune-inspired approaches, the definition and selection of the features used is a manual process that requires a certain level of knowledge of and expertise in the target system. This is especially true for the input signals of DCA-based approaches (e.g., danger and safe). Several authors have incorporated dimensionality reduction techniques, such as principal component analysis (PCA), or even mechanisms based on self-organizing maps (SOMs), to introduce some automation into their feature selection process. However, even in the most sophisticated of these approaches, human intervention, or at least supervision, is necessary to ensure good results.
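As an illustration of such an automated pre-processing step, the following is a generic PCA sketch (not tied to any of the cited approaches); all names are our own:

```python
import numpy as np

def pca_reduce(samples, n_components=2):
    """Project mean-centered samples onto their top principal
    components to obtain a reduced feature representation.

    samples: (n_samples, n_features) array of raw node measurements.
    Returns an (n_samples, n_components) array.
    """
    centered = samples - samples.mean(axis=0)
    # Rows of vt are the principal directions, ordered by explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T
```

Reducing, e.g., a handful of correlated sensor channels to one or two components in this way can serve as an input stage for the danger/safe signal mapping, though the choice of components still benefits from human supervision.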
Additionally, the majority of immunity-based fault detection systems derive the input features purely from the sensor data. In this context, most fault models assume that faults significantly alter the sensed data. However, such data-analytical detection approaches are unable to distinguish rare but proper events from data anomalies caused by soft faults (cf. [3], Section 2.4). The inclusion of node-level diagnostic information is only sparsely addressed in related work (cf. [23,203]).
In addition, the majority of related works focus on the suitable adoption and adaptation of immune mechanisms to develop effective detection algorithms. The algorithms’ input data are mostly treated as a means to an end and are often not particularly considered. However, we found that the input data are often the essential part, and the detection algorithm is mostly a vehicle for the automated assessment; in other words, even the best and most efficient algorithm still relies on the quality of the input data. Nevertheless, the algorithm has an impact on the characteristics of the detection approach and, thus, on its final effectiveness and efficiency.

8.3. Learning Capabilities

While negative and clonal-selection-based techniques offer at least some degree of learning, approaches inspired by the danger theory in particular, such as those based on the DCA, do not include learning capabilities. However, several works proposed concepts to incorporate machine learning for two purposes: (i) an automated selection, mapping, and weighting of the input parameters and/or (ii) the introduction of an immune memory. The latter refers to the capability of the system to learn from previous fault encounters to react faster in case the same situation is experienced again.
Concerning approaches based on the DCA, a theoretical analysis of its algorithmic basis revealed that it is a collection of linear classifiers [164]. To cope with this limitation, several works suggested replacing the classification stage of the DCA with machine-learning capabilities (cf. [201]). The use of fuzzy inference systems, in particular, has yielded promising results [204]. Such approaches, however, entail a significant memory and processing overhead that prevents them from being used in resource-constrained systems such as WSNs.
As a consequence, resource-efficient ways need to be found to incorporate learning capabilities into lightweight approaches suitable for resource-constrained sensor nodes. Some seminal works have successfully combined immune mechanisms with other bio-inspired techniques, such as GA, to form an efficient hybrid system that is also capable of learning (cf. [175]).

8.4. Data Correlation

The majority of the approaches found utilize the temporal correlation of subsequent sensor node measurements for their fault detection. Since the HIS also performs a spatial correlation of the information collected in the tissue, the detection strategy of sensor node fault detection approaches could likewise support spatio-temporal correlations. For example, the detection could consider the data of several sensor nodes within a certain neighborhood. Such a spatio-temporal fault detection, however, requires a suitable format of the input data that is not trivial to define.
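One possible shape for such a spatio-temporal check is sketched below; this is a simple illustration under our own assumptions (window sizes, the tolerance factor, and all names are hypothetical, not taken from a cited approach):

```python
import statistics

def spatio_temporal_flag(node_window, neighbor_windows, tolerance=2.0):
    """Compare a node's temporal mean over a sliding window against the
    median of its neighbors' window means; flag the node if it deviates
    by more than `tolerance` neighborhood standard deviations.

    node_window: recent samples of the node under test.
    neighbor_windows: list of sample windows, one per neighboring node.
    """
    own = statistics.fmean(node_window)
    neighborhood = [statistics.fmean(w) for w in neighbor_windows]
    center = statistics.median(neighborhood)
    spread = statistics.pstdev(neighborhood) or 1e-9  # avoid zero spread
    return abs(own - center) > tolerance * spread
```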

9. Conclusions

In this article, we presented a literature review of immune-inspired fault detection approaches for wireless sensor networks (WSNs). After a brief introduction and the motivation for applying immune-inspired techniques to detect sensor node faults in Section 1, we provided an excursion into the history of immunology and the prevalent immunological models in Section 2. In this context, we highlighted the unique properties of the immune system that are desirable for computational detection approaches, as discussed in Section 3. Considering their use for fault detection, especially the immune mechanisms as described by the danger theory (discussed in Section 4) show promising characteristics. In Section 5, the four “classical” AIS theories are presented. For fault detection purposes, especially the danger-theory-based dendritic cell algorithm (DCA), as discussed in Section 6, showed promising results. As the core of this article, we discussed related works proposed in the last two decades in Section 7. We combined the found information with findings from our previous works to elaborate on the most important limitations and shortcomings of current immune-inspired fault detection approaches, based on which we provide corresponding future research directions in Section 8.
Regarding the limitations of current approaches, we particularly found that the detection of sensor node faults has most often been considered a data anomaly detection task and was frequently performed purely on the sensor data. Utilizing data anomaly detection for fault diagnosis suffers from a crucial problem: anomalies need not be caused by faulty sensor nodes. Similarly, not all node faults cause distinct irregularities in the reported sensor data. Therefore, such approaches are unable to distinguish between data events and fault-induced deviations.
Aside from the input data, there are some additional open questions concerning an appropriate mapping between biological and computational entities. Additionally, a suitable selection scheme for the most expressive features utilizable for fault detection remains a non-trivial task. Moreover, the majority of current immune-inspired detection approaches have limited to no learning capabilities, which hinders the exploitation of effective concepts such as immune memory found in human immunity. Similarly, most approaches focus on temporal correlations in the data and neglect spatial information that together could be leveraged for an effective spatio-temporal fault detection scheme.
To sum up, a lot of effort has been spent in the last two decades on the development of sensor node fault detection approaches that take inspiration from processes found in the human immune system (HIS). Although they have yielded comparably good and promising results, many unresolved issues and open questions remain to be answered to achieve effective yet efficient fault detection in WSNs.

Author Contributions

Conceptualization, D.W., K.M.G. and W.K.; Data curation, D.W.; Investigation, D.W.; Methodology, D.W., K.M.G. and W.K.; Project administration, K.M.G.; Resources, D.W.; Software, D.W.; Supervision, K.M.G. and W.K.; Validation, D.W.; Visualization, D.W.; Writing—original draft, D.W.; Writing—review and editing, K.M.G. and W.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

This work has been supported by the Doctoral College Resilient Embedded Systems, which is run jointly by the TU Wien’s Faculty of Informatics and the UAS Technikum Wien.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AIN: artificial immune network
AIRS: artificial immune recognition system
AIS: artificial immune system
ANN: artificial neural network
APC: antigen-presenting cell
CLONALG: clonal selection algorithm
CPS: cyber-physical system
CSM: co-stimulatory molecule
CSPRA: conserved self pattern recognition algorithm
DC: dendritic cell
DCA: dendritic cell algorithm
dDCA: deterministic dendritic cell algorithm
FAR: false alarm rate
FNR: false negative rate
FPR: false positive rate
GA: genetic algorithm
HIS: human immune system
IDS: intrusion detection system
INS: infectious non-self
IoT: Internet of Things
min-dDCA: minimized dDCA
NSA: negative selection algorithm
OPC UA: Open Platform Communications Unified Architecture
OS: operating system
PAMP: pathogen-associated molecular patterns
PCA: principal component analysis
PRR: pattern recognition receptor
SHM: structural health monitoring
SOM: self-organizing map
SNS: self/non-self
TLR: toll-like receptor
TNR: true negative rate
TPR: true positive rate
WSN: wireless sensor network

References

  1. Mahmoud, H.; Fahmy, A. WSN Applications. In Concepts, Applications, Experimentation and Analysis of Wireless Sensor Networks; Springer International Publishing: Berlin/Heidelberg, Germany, 2020. [Google Scholar] [CrossRef]
  2. Widhalm, D. Sensor Node Fault Detection in Wireless Sensor Networks: An Immune-inspired Approach. Ph.D. Dissertation, Vienna University of Technology, Vienna, Austria, 2022. [Google Scholar] [CrossRef]
  3. Widhalm, D.; Goeschka, K.M.; Kastner, W. An Open-Source Wireless Sensor Node Platform with Active Node-Level Reliability for Monitoring Applications. Sensors 2021, 21, 7613. [Google Scholar] [CrossRef] [PubMed]
  4. Jurdak, R.; Wang, X.R.; Obst, O.; Valencia, P. Wireless Sensor Network Anomalies: Diagnosis and Detection Strategies. In Intelligence-Based Systems Engineering; Springer: Berlin/Heidelberg, Germany, 2011; pp. 309–325. [Google Scholar] [CrossRef]
  5. Burgess, M. Computer Immunology. In Proceedings of the 12th USENIX Conference on System Administration, LISA ’98, Boston, MA, USA, 6–11 December 1998; USENIX Association: Berkeley, CA, USA, 1998; pp. 283–298. [Google Scholar]
  6. Somayaji, A.; Hofmeyr, S.; Forrest, S. Principles of a Computer Immune System. In Proceedings of the 1997 Workshop on New Security Paradigms, NSPW ’97, Langdale, UK, 23–26 September 1997; ACM: New York, NY, USA, 1997; pp. 75–82. [Google Scholar] [CrossRef] [Green Version]
  7. Hong, L.; Yang, J. Danger theory of immune systems and intrusion detection systems. In Proceedings of the 2009 International Conference on Industrial Mechatronics and Automation, Changchun, China, 9–12 August 2009; pp. 208–211. [Google Scholar] [CrossRef]
  8. Kim, J.; Bentley, P.; Wallenta, C.; Ahmed, M.; Hailes, S. Danger Is Ubiquitous: Detecting Malicious Activities in Sensor Networks Using the Dendritic Cell Algorithm. In Artificial Immune Systems, Proceedings of the 5th International Conference, ICARIS 2006, Oeiras, Portugal, 4–6 September 2006; Bersini, H., Carneiro, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; pp. 390–403. [Google Scholar]
  9. Twycross, J.; Aickelin, U. Information fusion in the immune system. Inf. Fusion 2010, 11, 35–44. [Google Scholar] [CrossRef] [Green Version]
  10. Burgess, M.; Haugerud, H.; Straumsnes, S.; Reitan, T. Measuring System Normality. ACM Trans. Comput. Syst. 2002, 20, 125–160. [Google Scholar] [CrossRef]
  11. D’haeseleer, P.; Forrest, S.; Helman, P. An immunological approach to change detection: Algorithms, analysis and implications. In Proceedings of the 1996 IEEE Symposium on Security and Privacy, Oakland, CA, USA, 6–8 May 1996; pp. 110–119. [Google Scholar] [CrossRef] [Green Version]
  12. Greensmith, J.; Aickelin, U.; Twycross, J. Articulation and Clarification of the Dendritic Cell Algorithm. In Artificial Immune Systems, Proceedings of the 5th International Conference, ICARIS 2006, Oeiras, Portugal, 4–6 September 2006; Bersini, H., Carneiro, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; pp. 404–417. [Google Scholar]
  13. Aickelin, U.; Greensmith, J.; Twycross, J. Immune System Approaches to Intrusion Detection—A Review. In Artificial Immune Systems, Proceedings of the 3rd International Conference, ICARIS 2004, Catania, Italy, 13–16 September 2004; Nicosia, G., Cutello, V., Bentley, P.J., Timmis, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 316–329. [Google Scholar]
  14. Hofmeyr, S.A.; Forrest, S. Immunity by Design: An Artificial Immune System. In Proceedings of the 1st Annual Conference on Genetic and Evolutionary Computation, GECCO’99, Orlando, FL, USA, 13–17 July 1999; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1999; Volume 2, pp. 1289–1296. [Google Scholar]
  15. Sarafijanovic, S.; Le Boudec, J. An artificial immune system approach with secondary response for misbehavior detection in mobile ad hoc networks. IEEE Trans. Neural Netw. 2005, 16, 1076–1087. [Google Scholar] [CrossRef]
  16. Kim, J.; Bentley, P.J.; Aickelin, U.; Greensmith, J.; Tedesco, G.; Twycross, J. Immune system approaches to intrusion detection—A review. Nat. Comput. 2007, 6, 413–466. [Google Scholar] [CrossRef] [Green Version]
  17. Hart, E.; Timmis, J. Application areas of AIS: The past, the present and the future. Appl. Soft Comput. 2008, 8, 191–201. [Google Scholar] [CrossRef]
  18. Anchor, K.P.; Williams, P.D.; Gunsch, G.H.; Lamont, G.B. The computer defense immune system: Current and future research in intrusion detection. In Proceedings of the 2002 Congress on Evolutionary Computation (CEC’02), Honolulu, HI, USA, 12–17 May 2002; Volume 2, pp. 1027–1032. [Google Scholar] [CrossRef]
  19. Ramotsoela, D.; Abu-Mahfouz, A.; Hancke, G. A Survey of Anomaly Detection in Industrial Wireless Sensor Networks with Critical Water System Infrastructure as a Case Study. Sensors 2018, 18, 2491. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Khalastchi, E.; Kalech, M. Fault Detection and Diagnosis in Multi-Robot Systems: A Survey. Sensors 2019, 19, 4019. [Google Scholar] [CrossRef] [Green Version]
  21. Malhotra, N.; Bala, M. Fault Diagnosis in Wireless Sensor Networks—A Survey. In Proceedings of the 2018 4th International Conference on Computing Sciences (ICCS), Phagwara, India, 30–31 August 2018; pp. 28–34. [Google Scholar] [CrossRef]
  22. Zhang, Z.; Mehmood, A.; Shu, L.; Huo, Z.; Zhang, Y.; Mukherjee, M. A Survey on Fault Diagnosis in Wireless Sensor Networks. IEEE Access 2018, 6, 11349–11364. [Google Scholar] [CrossRef]
  23. Widhalm, D.; Goeschka, K.M.; Kastner, W. SoK: A Taxonomy for Anomaly Detection in Wireless Sensor Networks Focused on Node-Level Techniques. In Proceedings of the 15th International Conference on Availability, Reliability and Security (ARES ’20), online, 25–28 August 2020. [Google Scholar] [CrossRef]
  24. Eichmann, K. (Ed.) The idiotypic network theory. In The Network Collective: Rise and Fall of a Scientific Paradigm; Birkhäuser Basel: Basel, Switzerland, 2008; pp. 82–94. [Google Scholar] [CrossRef]
  25. Matzinger, P. The danger model: A renewed sense of self. Science 2002, 296, 301–305. [Google Scholar] [CrossRef]
26. Aickelin, U.; Cayzer, S. The Danger Theory and Its Application to Artificial Immune Systems. arXiv 2008, arXiv:0801.3549. [Google Scholar] [CrossRef] [Green Version]
  27. Burnet, F.M. A Modification of Jerne’s Theory of Antibody Production using the Concept of Clonal Selection. CA Cancer J. Clin. 1976, 26, 119–121. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Fekety, F.R. The Clonal Selection Theory of Acquired Immunity. Yale J. Biol. Med. 1960, 32, 480. [Google Scholar]
  29. Oudin, J.; Cazenave, P.A. Similar Idiotypic Specificities in Immunoglobulin Fractions with Different Antibody Functions or Even without Detectable Antibody Function. Proc. Natl. Acad. Sci. USA 1971, 68, 2616–2620. [Google Scholar] [CrossRef] [Green Version]
  30. Bretscher, P.; Cohn, M. A Theory of Self-Nonself Discrimination: Paralysis and induction involve the recognition of one and two determinants on an antigen, respectively. Science 1970, 169, 1042–1049. [Google Scholar] [CrossRef]
  31. Jerne, N. Towards a network theory of the immune system. Ann. Immunol. 1974, 125C, 373–389. [Google Scholar]
  32. Langman, R.; Cohn, M. The ‘complete’ idiotype network is an absurd immune system. Immunol. Today 1986, 7, 100–101. [Google Scholar] [CrossRef]
33. Lafferty, K.; Cunningham, A. A new analysis of allogeneic interactions. Aust. J. Exp. Biol. Med. Sci. 1975, 53, 27–42. [Google Scholar] [CrossRef]
  34. Janeway, C. Approaching the Asymptote? Evolution and Revolution in Immunology. Cold Spring Harb. Symp. Quant. Biol. 1989, 54, 1–13. [Google Scholar] [CrossRef]
  35. Matzinger, P. Tolerance, danger, and the extended family. Annu. Rev. Immunol. 1994, 12, 991–1045. [Google Scholar] [CrossRef]
  36. Mosmann, T.R.; Livingstone, A.M. Dendritic cells: The immune information management experts. Nat. Immunol. 2004, 5, 564–566. [Google Scholar] [CrossRef]
  37. Greensmith, J.; Aickelin, U.; Cayzer, S. Introducing Dendritic Cells as a Novel Immune-Inspired Algorithm for Anomaly Detection. In Artificial Immune Systems, Proceedings of the 4th International Conference, ICARIS 2005, Banff, AB, Canada, 14–17 August 2005; Jacob, C., Pilat, M.L., Bentley, P.J., Timmis, J.I., Eds.; Springer: Berlin/Heidelberg, Germany, 2005. [Google Scholar] [CrossRef]
38. Xu, Q.Z.; Wang, L. Recent advances in the artificial endocrine system. J. Zhejiang Univ. Sci. C 2011, 12, 171–183. [Google Scholar] [CrossRef]
  39. Sherwood, L. Human Physiology: From Cells to Systems; Cengage Learning: Boston, MA, USA, 2015. [Google Scholar]
  40. Neal, J. How the Endocrine System Works; The How it Works Series; Wiley: Hoboken, NJ, USA, 2016. [Google Scholar]
  41. Sinha, S.; Chaczko, Z. Concepts and Observations in Artificial Endocrine Systems for IoT Infrastructure. In Proceedings of the 2017 25th International Conference on Systems Engineering (ICSEng), Las Vegas, NV, USA, 22–24 August 2017; pp. 427–430. [Google Scholar] [CrossRef]
  42. Ihara, H.; Mori, K. Autonomous Decentralized Computer Control Systems. Computer 1984, 17, 57–66. [Google Scholar] [CrossRef]
  43. Miyamoto, S.; Mori, K.; Ihara, H.; Matsumaru, H.; Ohshima, H. Autonomous decentralized control and its application to the rapid transit system. Comput. Ind. 1984, 5, 115–124. [Google Scholar] [CrossRef]
  44. Mori, K. Autonomous Decentralized Systems Technologies and Their Application to a Train Transport Operation System. In The Kluwer International Series in Engineering and Computer Science; Springer: New York, NY, USA, 2001; pp. 89–111. [Google Scholar] [CrossRef]
45. Shen, W.M.; Chuong, C.M.; Will, P. Digital Hormone Models for Self-Organization. In Artificial Life VIII; Standish, Abbass, Bedau, Eds.; MIT Press: Cambridge, MA, USA, 2002; pp. 116–120. [Google Scholar]
46. Shen, W.M.; Chuong, C.M.; Will, P. Simulating self-organization for multi-robot systems. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, 30 September–4 October 2002; Volume 3, pp. 2776–2781. [Google Scholar] [CrossRef] [Green Version]
  47. Heylighen, F.; Gershenson, C.; Staab, S.; Flake, G.W.; Pennock, D.M.; Fain, D.C.; De Roure, D.; Aberer, K.; Wei-Min, S.; Dousse, O. Neurons, viscose fluids, freshwater polyp hydra-and self-organizing information systems. IEEE Intell. Syst. 2003, 18, 72–86. [Google Scholar] [CrossRef]
  48. Kravitz, E. Hormonal control of behavior: Amines and the biasing of behavioral output in lobsters. Science 1988, 241, 1775–1781. [Google Scholar] [CrossRef]
  49. Brooks, R.A. Integrated Systems Based on Behaviors. SIGART Bull. 1991, 2, 46–50. [Google Scholar] [CrossRef] [Green Version]
50. Avila-Garcia, O.; Canamero, L. Using hormonal feedback to modulate action selection in a competitive scenario. In From Animals to Animats, Proceedings of the 8th International Conference of Adaptive Behavior (SAB’04), Santa Monica, LA, USA, 24 August 2004; MIT Press: Cambridge, MA, USA, 2004; pp. 243–252. [Google Scholar]
51. Avila-Garcia, O.; Canamero, L. Hormonal modulation of perception in motivation-based action selection architectures. In Proceedings of the Symposium on Agents That Want and Like; University of Hertfordshire: Hertfordshire, UK, 2005. [Google Scholar]
  52. Brinkschulte, U.; Pacher, M.; von Renteln, A. An Artificial Hormone System for Self-Organizing Real-Time Task Allocation in Organic Middleware. In Organic Computing; Springer: Berlin/Heidelberg, Germany, 2009; pp. 261–283. [Google Scholar] [CrossRef]
  53. von Renteln, A.; Brinkschulte, U.; Pacher, M. The Artificial Hormone System—An Organic Middleware for Self-organising Real-Time Task Allocation. In Organic Computing — A Paradigm Shift for Complex Systems; Springer: Basel, Switzerland, 2011; pp. 369–384. [Google Scholar] [CrossRef]
  54. de Castro, L.N.; Timmis, J.I. Artificial immune systems as a novel soft computing paradigm. Soft Comput.—Fusion Found. Methodol. Appl. 2003, 7, 526–544. [Google Scholar] [CrossRef]
  55. Dasgupta, D. Artificial Immune Systems and Their Applications; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  56. Aickelin, U.; Bentley, P.; Cayzer, S.; Kim, J.; McLeod, J. Danger Theory: The Link between AIS and IDS? In Proceedings of the Artificial Immune Systems; Timmis, J., Bentley, P.J., Hart, E., Eds.; Springer: Berlin/Heidelberg, Germany, 2003; pp. 147–155. [Google Scholar]
57. Yeom, K.W.; Park, J.H. An Artificial Immune System Model for Multi Agents based Resource Discovery in Distributed Environments. In Proceedings of the 1st International Conference on Innovative Computing, Information and Control - Volume I (ICICIC’06), Beijing, China, 30 August–1 September 2006; Volume 1, pp. 234–239. [Google Scholar] [CrossRef]
58. Goldsby, R.A.; Kindt, T.J.; Osborne, B.A.; Kuby, J. Immunology, 5th ed.; W.H. Freeman: New York, NY, USA, 2003. [Google Scholar]
  59. Janeway, C.A. How the Immune System Recognizes Invaders. Sci. Am. 1993, 269, 72–79. [Google Scholar] [CrossRef]
  60. Janeway, C.A.; Medzhitov, R. Innate Immune Recognition. Annu. Rev. Immunol. 2002, 20, 197–216. [Google Scholar] [CrossRef] [Green Version]
  61. Alberts, B.; Johnson, A.; Lewis, J.; Raff, M.; Roberts, K.; Walter, P. Molecular Biology of the Cell, 4th ed.; Garland Science: New York, NY, USA, 2002. [Google Scholar]
  62. Dasgupta, D.; Yu, S.; Majumdar, N.S. MILA—Multilevel immune learning algorithm and its application to anomaly detection. Soft Comput. 2005, 9, 172–184. [Google Scholar] [CrossRef]
63. Twycross, J.; Aickelin, U. Towards a Conceptual Framework for Innate Immunity. In Artificial Immune Systems, Proceedings of the 4th International Conference, ICARIS 2005, Banff, AB, Canada, 14–17 August 2005; Jacob, C., Pilat, M.L., Bentley, P.J., Timmis, J.I., Eds.; Springer: Berlin/Heidelberg, Germany, 2005; pp. 112–125. [Google Scholar]
  64. Vivier, E.; Malissen, B. Innate and adaptive immunity: Specificities and signaling hierarchies revisited. Nat. Immunol. 2005, 6, 17–21. [Google Scholar] [CrossRef] [PubMed]
  65. Kim, J.; Bentley, P. Immune Memory and Gene Library Evolution in the Dynamic Clonal Selection Algorithm. Genet. Program. Evol. Mach. 2004, 5, 361–391. [Google Scholar] [CrossRef]
  66. Greensmith, J.; Aickelin, U.; Tedesco, G. Information fusion for anomaly detection with the dendritic cell algorithm. Inf. Fusion 2010, 11, 21–34. [Google Scholar] [CrossRef] [Green Version]
  67. Aickelin, U.; Dasgupta, D. Artificial Immune Systems. In Search Methodologies: Introductory Tutorials in Optimization and Decision Support Techniques; Springer US: Boston, MA, USA, 2005; pp. 375–399. [Google Scholar] [CrossRef]
  68. Vidal, J.M.; Orozco, A.L.S.; Villalba, L.J.G. Adaptive artificial immune networks for mitigating DoS flooding attacks. Swarm Evol. Comput. 2018, 38, 94–108. [Google Scholar] [CrossRef]
  69. Delves, P.J.; Martin, S.J.; Burton, D.R.; Roitt, I.M. Roitt’s Essential Immunology, 13th ed.; Essentials, Wiley-Blackwell: Hoboken, NJ, USA, 2017. [Google Scholar]
  70. Venkatesan, S.; Baskaran, R.; Chellappan, C.; Vaish, A.; Dhavachelvan, P. Artificial immune system based mobile agent platform protection. Comput. Stand. Interfaces 2013, 35, 365–373. [Google Scholar] [CrossRef]
  71. Coico, R.; Sunshine, G. Immunology: A Short Course, 7th ed.; Coico, Immunology; Wiley-Blackwell: Hoboken, NJ, USA, 2015. [Google Scholar]
  72. Green, D.R.; Droin, N.; Pinkoski, M. Activation-induced cell death in T cells. Immunol. Rev. 2003, 193, 70–81. [Google Scholar] [CrossRef]
  73. Alaparthy, V.T.; Morgera, S.D. A Multi-Level Intrusion Detection System for Wireless Sensor Networks Based on Immune Theory. IEEE Access 2018, 6, 47364–47373. [Google Scholar] [CrossRef]
  74. Jacob, C.; Steil, S.; Bergmann, K. The Swarming Body: Simulating the Decentralized Defenses of Immunity. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2006; pp. 52–65. [Google Scholar] [CrossRef]
  75. Punt, J. Kuby Immunology; W. H. Freeman: New York, NY, USA, 2018. [Google Scholar]
76. Greensmith, J.; Aickelin, U. The Deterministic Dendritic Cell Algorithm. In Artificial Immune Systems, Proceedings of the ICARIS 2008, Phuket, Thailand, 10–13 August 2008; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar] [CrossRef] [Green Version]
77. Bentley, P.J.; Greensmith, J.; Ujjin, S. Two Ways to Grow Tissue for Artificial Immune Systems. In Artificial Immune Systems, Proceedings of the 4th International Conference, ICARIS 2005, Banff, AB, Canada, 14–17 August 2005; Jacob, C., Pilat, M.L., Bentley, P.J., Timmis, J.I., Eds.; Springer: Berlin/Heidelberg, Germany, 2005; pp. 139–152. [Google Scholar]
  78. Pradeu, T.; Cooper, E.L. The danger theory: 20 years later. Front. Immunol. 2012, 3, 287. [Google Scholar] [CrossRef] [Green Version]
  79. Matzinger, P. An innate sense of danger. Semin. Immunol. 1998, 10, 399–415. [Google Scholar] [CrossRef]
  80. Sompayrac, L.M. How the Immune System Works; Wiley-Blackwell: Hoboken, NJ, USA, 2019. [Google Scholar]
  81. Gallucci, S.; Matzinger, P. Danger signals: SOS to the immune system. Curr. Opin. Immunol. 2001, 13, 114–119. [Google Scholar] [CrossRef] [PubMed]
82. Kerr, J.F.R.; Winterford, C.M.; Harmon, B.V. Apoptosis. Its significance in cancer and cancer therapy. Cancer 1994, 73, 2013–2026. [Google Scholar] [CrossRef] [PubMed]
  83. Steinman, R.M. Identification of a novel cell type in peripheral lymphoid organs of mice: I. morphology, quantitation, tissue distribution. J. Exp. Med. 1973, 137, 1142–1162. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  84. Greensmith, J.; Aickelin, U.; Cayzer, S. Detecting Danger: The Dendritic Cell Algorithm. In Robust Intelligent Systems; Springer: London, UK, 2008; pp. 89–112. [Google Scholar] [CrossRef] [Green Version]
  85. Kapsenberg, M.L. Dendritic-cell control of pathogen-driven T-cell polarization. Nat. Rev. Immunol. 2003, 3, 984–993. [Google Scholar] [CrossRef]
  86. Kim, J.; Greensmith, J.; Twycross, J.; Aickelin, U. Malicious Code Execution Detection and Response Immune System inspired by the Danger Theory. arXiv 2010, arXiv:1003.4142. [Google Scholar]
  87. Medzhitov, R. Decoding the Patterns of Self and Nonself by the Innate Immune System. Science 2002, 296, 298–300. [Google Scholar] [CrossRef] [Green Version]
  88. Greensmith, J. The Dendritic Cell Algorithm. Ph.D. Thesis, University of Nottingham, Nottingham, UK, 2007. [Google Scholar]
  89. Del Ser, J.; Osaba, E.; Molina, D.; Yang, X.S.; Salcedo-Sanz, S.; Camacho, D.; Das, S.; Suganthan, P.N.; Coello Coello, C.A.; Herrera, F. Bio-inspired computation: Where we stand and what’s next. Swarm Evol. Comput. 2019, 48, 220–250. [Google Scholar] [CrossRef]
  90. Dasgupta, D.; Yu, S.; Nino, F. Recent Advances in Artificial Immune Systems: Models and Applications. Appl. Soft Comput. 2011, 11, 1574–1587. [Google Scholar] [CrossRef]
  91. Le Boudec, J.Y.; Sarafijanović, S. An Artificial Immune System Approach to Misbehavior Detection in Mobile Ad Hoc Networks. In Proceedings of the Biologically Inspired Approaches to Advanced Information Technology, Lausanne, Switzerland, 29–30 January 2004; Ijspeert, A.J., Murata, M., Wakamiya, N., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 396–411. [Google Scholar]
  92. Timmis, J.; Hone, A.; Stibor, T.; Clark, E. Theoretical advances in artificial immune systems. Theor. Comput. Sci. 2008, 403, 11–32. [Google Scholar] [CrossRef] [Green Version]
  93. Mak, T.W. Order from disorder sprung: Recognition and regulation in the immune system. Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 2003, 361, 1235–1250. [Google Scholar] [CrossRef] [Green Version]
  94. Read, M.; Andrews, P.S.; Timmis, J. An Introduction to Artificial Immune Systems. In Handbook of Natural Computing; Springer: Berlin/Heidelberg, Germany, 2012; pp. 1575–1597. [Google Scholar] [CrossRef]
  95. Becker, M.; Drozda, M.; Jaschke, S.; Schaust, S. Comparing performance of misbehavior detection based on Neural Networks and AIS. In Proceedings of the 2008 IEEE International Conference on Systems, Man and Cybernetics, Singapore, 12–15 October 2008; pp. 757–762. [Google Scholar] [CrossRef]
  96. Yu, S.; Dasgupta, D. Conserved Self Pattern Recognition Algorithm. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2008; pp. 279–290. [Google Scholar] [CrossRef]
  97. Timmis, J.; Andrews, P.; Owens, N.; Clark, E. An interdisciplinary perspective on artificial immune systems. Evol. Intell. 2008, 1, 5–26. [Google Scholar] [CrossRef] [Green Version]
  98. Forrest, S.; Beauchemin, C. Computer immunology. Immunol. Rev. 2007, 216, 176–197. [Google Scholar] [CrossRef] [PubMed]
  99. Dasgupta, D. Advances in artificial immune systems. IEEE Comput. Intell. Mag. 2006, 1, 40–49. [Google Scholar] [CrossRef]
  100. Chang, P.C.; Huang, W.H.; Ting, C.J. A hybrid genetic-immune algorithm with improved lifespan and elite antigen for flow-shop scheduling problems. Int. J. Prod. Res. 2011, 49, 5207–5230. [Google Scholar] [CrossRef]
  101. Coello, C.A.C.; Cortes, N.C. Solving Multiobjective Optimization Problems Using an Artificial Immune System. Genet. Program. Evolvable Mach. 2005, 6, 163–190. [Google Scholar] [CrossRef]
  102. Luo, X.; Wei, W. A New Immune Genetic Algorithm and Its Application in Redundant Manipulator Path Planning. J. Robot. Syst. 2004, 21, 141–151. [Google Scholar] [CrossRef]
103. Graaff, A.; Engelbrecht, A. Optimised Coverage of Non-self with Evolved Lymphocytes in an Artificial Immune System. Int. J. Comput. Intell. Res. 2006, 2. [Google Scholar] [CrossRef]
  104. Kim, J.; Bentley, P. Negative Selection and Niching by an Artificial Immune System for Network Intrusion Detection. In Proceedings of the Late Breaking Papers at the 1999 Genetic and Evolutionary Computation Conference, Orlando, FL, USA, 13 July 1999; pp. 149–158. [Google Scholar]
  105. Forrest, S.; Perelson, A.S.; Allen, L.; Cherukuri, R. Self-nonself discrimination in a computer. In Proceedings of the 1994 IEEE Computer Society Symposium on Research in Security and Privacy, Oakland, CA, USA, 16–18 May 1994; pp. 202–212. [Google Scholar] [CrossRef]
  106. Cohn, M.; Mitchison, N.A.; Paul, W.E.; Silverstein, A.M.; Talmage, D.W.; Weigert, M. Reflections on the clonal-selection theory. Nat. Rev. Immunol. 2007, 7, 823–830. [Google Scholar] [CrossRef]
  107. Costa Silva, G.; Dasgupta, D. A Survey of Recent Works in Artificial Immune Systems. In Handbook on Computational Intelligence; World Scientific: Singapore, 2016; Volume 2, pp. 547–586. [Google Scholar] [CrossRef] [Green Version]
  108. Dasgupta, D.; Gonzalez, F. An immunity-based technique to characterize intrusions in computer networks. IEEE Trans. Evol. Comput. 2002, 6, 281–291. [Google Scholar] [CrossRef]
  109. Mostardinha, P.; Faria, B.F.; Zúquete, A.; de Abreu, F.V. A Negative Selection Approach to Intrusion Detection. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2012; pp. 178–190. [Google Scholar] [CrossRef]
110. Greensmith, J.; Feyereisl, J.; Aickelin, U. The DCA: SOMe Comparison: A comparative study between two biologically-inspired algorithms. arXiv 2010, arXiv:1006.1518. [Google Scholar] [CrossRef]
  111. Chen, B. Agent-based artificial immune system approach for adaptive damage detection in monitoring networks. J. Netw. Comput. Appl. 2010, 33, 633–645. [Google Scholar] [CrossRef]
  112. Ayara, M.; Timmis, J.; Lemos, R.; De Castro, L.; Duncan, R. Negative selection: How to generate detectors. In Proceedings of the 1st International Conference on Artificial Immune Systems (ICARIS), Canterbury, UK, 9–11 September 2002. [Google Scholar]
  113. Lu, H. Artificial Immune System for Anomaly Detection. In Proceedings of the 2008 IEEE International Symposium on Knowledge Acquisition and Modeling Workshop, Wuhan, China, 21–22 December 2008; pp. 340–343. [Google Scholar] [CrossRef]
  114. Balthrop, J.; Esponda, F.; Forrest, S.; Glickman, M. Coverage and Generalization in an Artificial Immune System. In Proceedings of the 4th Annual Conference on Genetic and Evolutionary Computation, GECCO’02, New York, NY, USA, 9–13 July 2002; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 2002; pp. 3–10. [Google Scholar]
  115. Shapiro, J.M.; Lamont, G.B.; Peterson, G.L. An Evolutionary Algorithm to Generate Hyper-ellipsoid Detectors for Negative Selection. In Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, GECCO ’05, Washington, DC, USA, 25–29 June 2005; ACM: New York, NY, USA, 2005; pp. 337–344. [Google Scholar] [CrossRef] [Green Version]
116. Kim, J.; Bentley, P.J. Towards an artificial immune system for network intrusion detection: An investigation of clonal selection with a negative selection operator. In Proceedings of the 2001 Congress on Evolutionary Computation (IEEE Cat. No.01TH8546), Seoul, Korea, 27–30 May 2001; Volume 2, pp. 1244–1252. [Google Scholar] [CrossRef]
117. Ji, Z.; Dasgupta, D. Augmented negative selection algorithm with variable-coverage detectors. In Proceedings of the Congress on Evolutionary Computation, Portland, OR, USA, 19–23 June 2004; Volume 1, pp. 1081–1088. [Google Scholar] [CrossRef] [Green Version]
  118. Ji, Z.; Dasgupta, D. Real-Valued Negative Selection Algorithm with Variable-Sized Detectors. In Genetic and Evolutionary Computation—GECCO 2004; Springer: Berlin/Heidelberg, Germany, 2004; pp. 287–298. [Google Scholar] [CrossRef]
  119. González, F. A Study of Artificial Immune Systems Applied to Anomaly Detection. Ph.D. Thesis, The University of Memphis, Memphis, TN, USA, 2003. [Google Scholar]
  120. Timmis, J.; Andrews, P.; Hart, E. On artificial immune systems and swarm intelligence. Swarm Intell. 2010, 4, 247–273. [Google Scholar] [CrossRef]
  121. Nanas, N.; Uren, V.S.; de Roeck, A. Nootropia: A User Profiling Model Based on a Self-Organising Term Network. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2004; pp. 146–160. [Google Scholar] [CrossRef]
  122. McEwan, C.; Hart, E. Representation in the (Artificial) Immune System. J. Math. Model. Algorithms 2009, 8, 125–149. [Google Scholar] [CrossRef] [Green Version]
  123. González, F.; Dasgupta, D.; Gómez, J. The Effect of Binary Matching Rules in Negative Selection. In Genetic and Evolutionary Computation—GECCO 2003; Springer: Berlin/Heidelberg, Germany, 2003; pp. 195–206. [Google Scholar] [CrossRef]
  124. Harmer, P.K.; Williams, P.D.; Gunsch, G.H.; Lamont, G.B. An artificial immune system architecture for computer security applications. IEEE Trans. Evol. Comput. 2002, 6, 252–280. [Google Scholar] [CrossRef] [Green Version]
  125. Farmer, J.; Packard, N.H.; Perelson, A.S. The immune system, adaptation, and machine learning. Phys. D Nonlinear Phenom. 1986, 22, 187–204. [Google Scholar] [CrossRef]
  126. Kim, J.; Bentley, P.J. An Evaluation of Negative Selection in an Artificial Immune System for Network Intrusion Detection. In Proceedings of the Genetic and Evolutionary Computation Conference GECCO ’01, San Francisco, CA, USA, 7–11 July 2001; Morgan Kaufmann: San Francisco, CA, USA, 2001; pp. 1330–1337. [Google Scholar]
  127. Balthrop, J.; Forrest, S.; Glickman, M. Revisiting LISYS: Parameters and normal behavior. In Proceedings of the 2002 Congress on Evolutionary Computation (CEC’02), Honolulu, HI, USA, 12–17 May 2002. [Google Scholar] [CrossRef]
  128. Gao, X.Z.; Ovaska, S.J.; Wang, X. Genetic Algorithms-based Detector Generation in Negative Selection Algorithm. In Proceedings of the 2006 IEEE Mountain Workshop on Adaptive and Learning Systems, Logan, UT, USA, 24–26 July 2006; pp. 133–137. [Google Scholar] [CrossRef]
  129. Gao, X.Z.; Ovaska, S.J.; Wang, X.; Chow, M. Clonal Optimization of Negative Selection Algorithm with Applications in Motor Fault Detection. In Proceedings of the 2006 IEEE International Conference on Systems, Man and Cybernetics, Taipei, Taiwan, 8–11 October 2006; Volume 6, pp. 5118–5123. [Google Scholar] [CrossRef]
  130. Cayzer, S.; Smith, J. Gene Libraries: Coverage, Efficiency and Diversity. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2006; pp. 136–149. [Google Scholar] [CrossRef]
131. Gomez, J.; Gonzalez, F.; Dasgupta, D. An immuno-fuzzy approach to anomaly detection. In Proceedings of the 12th IEEE International Conference on Fuzzy Systems (FUZZ ’03), St. Louis, MO, USA, 25–28 May 2003; Volume 2, pp. 1219–1224. [Google Scholar] [CrossRef] [Green Version]
132. Gonzalez, F.; Gomez, J.; Kaniganti, M.; Dasgupta, D. An evolutionary approach to generate fuzzy anomaly (attack) signatures. In Proceedings of the IEEE Systems, Man and Cybernetics Society Information Assurance Workshop, New York, NY, USA, 18–20 June 2003; pp. 251–259. [Google Scholar] [CrossRef]
133. Esponda, F.; Forrest, S.; Helman, P. A Formal Framework for Positive and Negative Detection Schemes. IEEE Trans. Syst. Man Cybern. Part B 2004, 34, 357–373. [Google Scholar] [CrossRef] [Green Version]
134. Hang, X.; Dai, H. Applying Both Positive and Negative Selection to Supervised Learning for Anomaly Detection. In Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, GECCO ’05, Washington, DC, USA, 25–29 June 2005; ACM: New York, NY, USA, 2005; pp. 345–352. [Google Scholar] [CrossRef] [Green Version]
  135. Shamshirband, S.; Anuar, N.B.; Kiah, M.L.M.; Rohani, V.A.; Petković, D.; Misra, S.; Khan, A.N. Co-FAIS: Cooperative fuzzy artificial immune system for detecting intrusion in wireless sensor networks. J. Netw. Comput. Appl. 2014, 42, 102–117. [Google Scholar] [CrossRef]
  136. de Castro, P.A.D.; Zuben, F.J.V. BAIS: A Bayesian Artificial Immune System for the effective handling of building blocks. Inf. Sci. 2009, 179, 1426–1440. [Google Scholar] [CrossRef]
  137. Castro, P.A.D.; Zuben, F.J.V. MOBAIS: A Bayesian Artificial Immune System for Multi-Objective Optimization. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2008; pp. 48–59. [Google Scholar] [CrossRef]
  138. Wang, W.; Gao, S.; Tang, Z. A Complex Artificial Immune System. In Proceedings of the 2008 Fourth International Conference on Natural Computation, Jinan, China, 18–20 October 2008; Volume 6, pp. 597–601. [Google Scholar] [CrossRef]
139. Dasgupta, D.; Forrest, S. An Anomaly Detection Algorithm Inspired by the Immune System. In Artificial Immune Systems and Their Applications; Springer: Berlin/Heidelberg, Germany, 1999; pp. 262–277. [Google Scholar] [CrossRef]
140. Tyrrell, A.M. Computer know thy self!: A biological way to look at fault-tolerance. In Proceedings of the 25th EUROMICRO Conference, Informatics: Theory and Practice for the New Millennium, Milan, Italy, 8–10 September 1999; Volume 2, pp. 129–135. [Google Scholar] [CrossRef]
  141. Coello Coello, C.A.; Cruz Cortes, N. A parallel implementation of an artificial immune system to handle constraints in genetic algorithms: Preliminary results. In Proceedings of the 2002 Congress on Evolutionary Computation. CEC’02 (Cat. No.02TH8600), Honolulu, HI, USA, 12–17 May 2002; Volume 1, pp. 819–824. [Google Scholar] [CrossRef]
  142. Liu, S.; Li, T.; Wang, D.; Zhao, K.; Gong, X.; Hu, X.; Xu, C.; Liang, G. Immune Multi-agent Active Defense Model for Network Intrusion. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2006; pp. 104–111. [Google Scholar] [CrossRef]
  143. Dasgupta, D. Immunity-Based Intrusion Detection System: A General Framework; Technical Report; The University of Memphis: Memphis, TN, USA, 1999. [Google Scholar]
  144. Ji, Z.; Dasgupta, D. Revisiting Negative Selection Algorithms. Evol. Comput. 2007, 15, 223–251. [Google Scholar] [CrossRef]
  145. Castro, L.N.D.; Zuben, F.J.V. The Clonal Selection Algorithm with Engineering Applications. In Proceedings of the GECCO 2002, Workshop, New York, NY, USA, 9–13 July 2002; Morgan Kaufmann: New York, NY, USA, 2002; pp. 36–37. [Google Scholar]
  146. Watkins, A.; Timmis, J. Exploiting Parallelism Inherent in AIRS, an Artificial Immune Classifier. In Artificial Immune Systems; Nicosia, G., Cutello, V., Bentley, P.J., Timmis, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 427–438. [Google Scholar]
  147. Alaparthy, V.T.; Amouri, A.; Morgera, S.D. A Study on the Adaptability of Immune models for Wireless Sensor Network Security. Procedia Comput. Sci. 2018, 145, 13–19. [Google Scholar] [CrossRef]
  148. Ciccazzo, A.; Conca, P.; Nicosia, G.; Stracquadanio, G. An Advanced Clonal Selection Algorithm with Ad-Hoc Network-Based Hypermutation Operators for Synthesis of Topology and Sizing of Analog Electrical Circuits. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2008; pp. 60–70. [Google Scholar] [CrossRef]
  149. Timmis, J.; Neal, M. A resource limited artificial immune system for data analysis. Knowl.-Based Syst. 2001, 14, 121–130. [Google Scholar] [CrossRef]
  150. Watkins, A.; Timmis, J.; Boggess, L. Artificial Immune Recognition System (AIRS): An Immune-Inspired Supervised Learning Algorithm. Genet. Program. Evolvable Mach. 2004, 5, 291–317. [Google Scholar] [CrossRef] [Green Version]
  151. Goodman, D.E.; Boggess, L.; Watkins, A. An investigation into the source of power for AIRS, an artificial immune classification system. In Proceedings of the International Joint Conference on Neural Networks, Portland, OR, USA, 20–24 July 2003; Volume 3, pp. 1678–1683. [Google Scholar] [CrossRef]
152. Liu, F.; Qu, B.; Chen, R. Intrusion Detection Based on Immune Clonal Selection Algorithms. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2004; pp. 1226–1232. [Google Scholar] [CrossRef]
  153. Ishida, Y. Fully distributed diagnosis by PDP learning algorithm: Towards immune network PDP model. In Proceedings of the 1990 IJCNN International Joint Conference on Neural Networks, San Diego, CA, USA, 17–21 June 1990; pp. 777–782. [Google Scholar] [CrossRef]
  154. Hunt, J.E.; Cooke, D.E. Learning using an artificial immune system. J. Netw. Comput. Appl. 1996, 19, 189–212. [Google Scholar] [CrossRef]
  155. de Castro, L.N.; Zuben, F.J.V. aiNet: An Artificial Immune Network for Data Analysis. In Data Mining; IGI Global: Hershey, PA, USA, 2001; pp. 231–260. [Google Scholar] [CrossRef]
  156. Zhang, C.; Yi, Z. An Artificial Immune Network Model Applied to Data Clustering and Classification. In Advances in Neural Networks—ISNN 2007; Springer: Berlin/Heidelberg, Germany, 2007; pp. 526–533. [Google Scholar] [CrossRef]
  157. Timmis, J.; Neal, M.; Hunt, J. An artificial immune system for data analysis. Biosystems 2000, 55, 143–150. [Google Scholar] [CrossRef]
  158. Greensmith, J.; Twycross, J.; Aickelin, U. Dendritic Cells for Anomaly Detection. In Proceedings of the 2006 IEEE International Conference on Evolutionary Computation, Vancouver, BC, Canada, 16–21 July 2006. [Google Scholar] [CrossRef] [Green Version]
  159. Twycross, J. Integrated Innate and Adaptive Artificial Immune Systems Applied to Process Anomaly Detection. Ph.D. Thesis, University of Nottingham, Nottingham, UK, 2007. [Google Scholar]
  160. Aickelin, U.; Greensmith, J. Sensing danger: Innate immunology for intrusion detection. Inf. Secur. Tech. Rep. 2007, 12, 218–227. [Google Scholar] [CrossRef] [Green Version]
161. Greensmith, J.; Aickelin, U. Dendritic Cells for Real-Time Anomaly Detection. SSRN Electron. J. 2006. [Google Scholar] [CrossRef] [Green Version]
  162. Greensmith, J. Migration Threshold Tuning in the Deterministic Dendritic Cell Algorithm. In Proceedings of the 8th International Conference on the Theory and Practice of Natural Computing (TPNC’19), Kingston, ON, Canada, 9–11 December 2019; Volume 2. [Google Scholar]
  163. Pinto, R.; Gonçalves, G.; Tovar, E.; Delsing, J. Attack Detection in Cyber-Physical Production Systems using the Deterministic Dendritic Cell Algorithm. In Proceedings of the 25th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Vienna, Austria, 8–11 September 2020. [Google Scholar] [CrossRef]
  164. Oates, R.; Kendall, G.; Garibaldi, J.M. Frequency analysis for dendritic cell population tuning. Evol. Intell. 2008, 1, 145–157. [Google Scholar] [CrossRef] [Green Version]
  165. Gu, F.; Greensmith, J.; Aickelin, U. Integrating Real-Time Analysis with the Dendritic Cell Algorithm through Segmentation. In Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation, GECCO ’09, Montreal, QC, Canada, 8–12 July 2009; Association for Computing Machinery: New York, NY, USA, 2009. [Google Scholar] [CrossRef]
  166. Musselle, C.J. Insights into the Antigen Sampling Component of the Dendritic Cell Algorithm. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar] [CrossRef]
  167. Aldhaheri, S.; Alghazzawi, D.; Cheng, L.; Barnawi, A.; Alzahrani, B.A. Artificial Immune Systems approaches to secure the internet of things: A systematic review of the literature and recommendations for future research. J. Netw. Comput. Appl. 2020, 157. [Google Scholar] [CrossRef]
  168. Ou, C.M.; Ou, C.R.; Wang, Y.T. Agent-Based Artificial Immune Systems (ABAIS) for Intrusion Detections: Inspiration from Danger Theory. In Agent and Multi-Agent Systems in Distributed Systems—Digital Economy and E-Commerce; Springer: Berlin/Heidelberg, Germany, 2013; pp. 67–94. [Google Scholar] [CrossRef]
  169. Salvato, M.; De Vito, S.; Guerra, S.; Buonanno, A.; Fattoruso, G.; Di Francia, G. An adaptive immune based anomaly detection algorithm for smart WSN deployments. In Proceedings of the 2015 XVIII AISEM Annual Conference, Trento, Italy, 3–5 February 2015; pp. 1–5. [Google Scholar] [CrossRef]
  170. Chao, R.; Tan, Y. A Virus Detection System Based on Artificial Immune System. In Proceedings of the 2009 International Conference on Computational Intelligence and Security, Beijing, China, 11–14 December 2009; Volume 1, pp. 6–10. [Google Scholar] [CrossRef]
  171. Tan, Y.; Mi, G.; Zhu, Y.; Deng, C. Artificial immune system based methods for spam filtering. In Proceedings of the 2013 IEEE International Symposium on Circuits and Systems (ISCAS), Beijing, China, 19–23 May 2013; pp. 2484–2488. [Google Scholar] [CrossRef]
  172. Gadi, M.F.A.; Wang, X.; do Lago, A.P. Credit Card Fraud Detection with Artificial Immune System. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2008; pp. 119–131. [Google Scholar] [CrossRef]
  173. Boukerche, A.; Machado, R.B.; Jucá, K.R.; Sobral, J.B.M.; Notare, M.S. An agent based and biological inspired real-time intrusion detection and security model for computer network operations. Comput. Commun. 2007, 30, 2649–2660. [Google Scholar] [CrossRef]
  174. Drozda, M.; Schaust, S.; Szczerbicka, H. AIS for misbehavior detection in wireless sensor networks: Performance and design principles. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 3719–3726. [Google Scholar] [CrossRef] [Green Version]
  175. Powers, S.T.; He, J. A hybrid artificial immune system and Self Organising Map for network intrusion detection. Inf. Sci. 2008, 178, 3024–3042. [Google Scholar] [CrossRef]
  176. Yang, L.; Yang, L.; Fengqi, Y. Immunity-based intrusion detection for wireless sensor networks. In Proceedings of the 2008 IEEE World Congress on Computational Intelligence, Hong Kong, China, 1–6 June 2008; pp. 439–444. [Google Scholar] [CrossRef]
  177. Yang, H.; Elhadef, M.; Nayak, A.; Yang, X. Network Fault Diagnosis: An Artificial Immune System Approach. In Proceedings of the 2008 14th IEEE International Conference on Parallel and Distributed Systems, Melbourne, VIC, Australia, 8–10 December 2008; pp. 463–469. [Google Scholar] [CrossRef]
  178. Laurentys, C.; Palhares, R.; Caminhas, W. A novel Artificial Immune System for fault behavior detection. Expert Syst. Appl. 2011, 38, 6957–6966. [Google Scholar] [CrossRef] [Green Version]
  179. Xiao, Y.; Wang, W.; Fang, D.; Gao, H.; Chen, X.; Zeng, Y.; Liu, B. A survival condition model of earthen sites based on the danger theory. In Proceedings of the 11th International Conference on Natural Computation (ICNC), Zhangjiajie, China, 15–17 August 2015; pp. 354–362. [Google Scholar] [CrossRef]
  180. Rizwan, R.; Khan, F.A.; Abbas, H.; Chauhdary, S.H. Anomaly Detection in Wireless Sensor Networks Using Immune-Based Bioinspired Mechanism. Int. J. Distrib. Sens. Netw. 2015, 11, 84952. [Google Scholar] [CrossRef]
  181. Cui, D.; Zhang, Q.; Xiong, J.; Li, Q.; Liu, M. Fault diagnosis research of rotating machinery based on Dendritic Cell Algorithm. In Proceedings of the IEEE International Conference on Information and Automation, Lijiang, China, 8–10 August 2015. [Google Scholar] [CrossRef]
  182. Mohapatra, S.; Khilar, P.M. Artificial immune system based fault diagnosis in large wireless sensor network topology. In Proceedings of the TENCON 2017—2017 IEEE Region 10 Conference, Penang, Malaysia, 5–8 November 2017; pp. 2687–2692. [Google Scholar] [CrossRef]
  183. Sun, Z.; Xu, Y.; Liang, G.; Zhou, Z. An Intrusion Detection Model for Wireless Sensor Networks With an Improved V-Detector Algorithm. IEEE Sens. J. 2018, 18, 1971–1984. [Google Scholar] [CrossRef]
  184. Li, W.; Cai, X. Intelligent Immune System for Sustainable Manufacturing. In Proceedings of the 2018 IEEE 22nd International Conference on Computer Supported Cooperative Work in Design (CSCWD), Nanjing, China, 9–11 May 2018; pp. 190–195. [Google Scholar] [CrossRef]
  185. Alizadeh, E.; Meskin, N.; Khorasani, K. A Dendritic Cell Immune System Inspired Scheme for Sensor Fault Detection and Isolation of Wind Turbines. IEEE Trans. Ind. Inform. 2018, 14, 545–555. [Google Scholar] [CrossRef]
  186. Akram, M.; Raza, A. Towards the development of robot immune system: A combined approach involving innate immune cells and T-lymphocytes. Biosystems 2018, 172, 52–67. [Google Scholar] [CrossRef] [PubMed]
  187. Aldhaheri, S.; Alghazzawi, D.; Cheng, L.; Alzahrani, B.; Al-Barakati, A. DeepDCA: Novel Network-Based Detection of IoT Attacks Using Artificial Immune System. Appl. Sci. 2020, 10, 1909. [Google Scholar] [CrossRef] [Green Version]
  188. Bejoy, B.; Raju, G.; Swain, D.; Acharya, B.; Hu, Y.C. A generic cyber immune framework for anomaly detection using artificial immune systems. Appl. Soft Comput. 2022, 130, 109680. [Google Scholar] [CrossRef]
  189. Dasgupta, D. An Overview of Artificial Immune Systems and Their Applications. In Artificial Immune Systems and Their Applications; Springer: Berlin/Heidelberg, Germany, 1999; pp. 3–21. [Google Scholar] [CrossRef]
  190. Kim, J.; Bentley, P. An Artificial Immune Model for Network Intrusion Detection. In Proceedings of the Conference on Intelligent Techniques and Soft Computing (EUFIT’99), Aachen, Germany, 13–16 September 1999. [Google Scholar]
  191. Shafi, K.; Abbass, H.A. Biologically-inspired Complex Adaptive Systems approaches to Network Intrusion Detection. Inf. Secur. Tech. Rep. 2007, 12, 209–217. [Google Scholar] [CrossRef]
  192. Fernandes, D.A.; Freire, M.M.; Fazendeiro, P.A.; Inácio, P.R. Applications of artificial immune systems to computer security: A survey. J. Inf. Secur. Appl. 2017, 35, 138–159. [Google Scholar] [CrossRef]
  193. Naik, B.; Mehta, A.; Yagnik, H.; Shah, M. The impacts of artificial intelligence techniques in augmentation of cybersecurity: A comprehensive review. Complex Intell. Syst. 2021, 8, 1763–1780. [Google Scholar] [CrossRef]
  194. Fasanotti, L.; Dovere, E.; Cagnoni, E.; Cavalieri, S. An Application of Artificial Immune System in a Wastewater Treatment Plant. IFAC-PapersOnLine 2016, 49, 55–60. [Google Scholar] [CrossRef]
  195. Gong, M.; Jiao, L.; Ma, W.; Ma, J. Intelligent multi-user detection using an artificial immune system. Sci. China Ser. F Inf. Sci. 2009, 52, 2342–2353. [Google Scholar] [CrossRef]
  196. Zuccolotto, M.; Pereira, C.E.; Fasanotti, L.; Cavalieri, S.; Lee, J. Designing an Artificial Immune Systems for Intelligent Maintenance Systems. IFAC-PapersOnLine 2015, 48, 1451–1456. [Google Scholar] [CrossRef]
  197. Bradley, D.W.; Tyrrell, A.M. The architecture for a hardware immune system. In Proceedings of the 3rd NASA/DoD Workshop on Evolvable Hardware, EH-2001, Long Beach, CA, USA, 12–14 July 2001; pp. 193–200. [Google Scholar] [CrossRef] [Green Version]
  198. Kayama, M.; Sugita, Y.; Morooka, Y.; Fukuoka, S. Distributed diagnosis system combining the immune network and learning vector quantization. In Proceedings of the IECON ’95—21st Annual Conference on IEEE Industrial Electronics, Orlando, FL, USA, 6–10 November 1995; Volume 2, pp. 1531–1536. [Google Scholar] [CrossRef]
  199. Liu, W.; Chen, B. Optimal control of mobile monitoring agents in immune-inspired wireless monitoring networks. J. Netw. Comput. Appl. 2011, 34, 1818–1826. [Google Scholar] [CrossRef]
  200. Mohapatra, S.; Khilar, P.M. Immune Inspired Fault Diagnosis in Wireless Sensor Network. In Nature Inspired Computing for Wireless Sensor Networks; Springer: Singapore, 2020; Chapter 5. [Google Scholar] [CrossRef]
  201. Chelly, Z.; Elouedi, Z. A survey of the dendritic cell algorithm. Knowl. Inf. Syst. 2015, 48, 505–535. [Google Scholar] [CrossRef]
  202. Sarafijanović, S.; Le Boudec, J.Y. An Artificial Immune System for Misbehavior Detection in Mobile Ad-Hoc Networks with Virtual Thymus, Clustering, Danger Signal, and Memory Detectors. In Artificial Immune Systems, Proceedings of the Third International Conference, ICARIS 2004, Catania, Sicily, Italy, 13–16 September 2004; Nicosia, G., Cutello, V., Bentley, P.J., Timmis, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 342–356. [Google Scholar]
  203. Widhalm, D.; Goeschka, K.M.; Kastner, W. Node-level indicators of soft faults in wireless sensor networks. In Proceedings of the 40th International Symposium on Reliable Distributed Systems (SRDS ’21), Chicago, IL, USA, 21–24 September 2021; pp. 13–22. [Google Scholar] [CrossRef]
  204. Elisa, N.; Yang, L.; Fu, X.; Naik, N. Dendritic Cell Algorithm Enhancement Using Fuzzy Inference System for Network Intrusion Detection. In Proceedings of the IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), New Orleans, LA, USA, 23–26 June 2019. [Google Scholar] [CrossRef]
Figure 1. A history of immunological models (after [25], Figure 1). (a) SNS (1959); (b) Two-signal model (1969); (c) Extended two-signal model (1975); (d) INS (1989); (e) Danger theory (1994).
Figure 2. Classification of the white blood cells (adapted from [73], Figure 2).
Figure 3. Antigen responses of different immune theories (after [25], Figure 2).
Figure 4. Key features of DC biology used in the DCA (after [84], Figure 5.3).
Figure 5. Abstract model of the DCA signal processing (after [84], Figure 5.4).
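The DCA signal processing abstracted in Figure 5 reduces each dendritic cell's update to a weighted sum of the three input signal categories (PAMP, danger, and safe) for every output signal (costimulation, semi-mature, mature). A minimal sketch of this fusion step, assuming illustrative weight values (the concrete weight matrix differs between DCA variants):

```python
# Hypothetical weight matrix mapping input signal categories to the
# three DCA output signals; real DCA variants use tuned values.
WEIGHTS = {
    "csm":    {"pamp": 2.0, "danger": 1.0, "safe": 2.0},   # costimulation
    "semi":   {"pamp": 0.0, "danger": 0.0, "safe": 3.0},   # semi-mature
    "mature": {"pamp": 2.0, "danger": 1.0, "safe": -3.0},  # mature
}

def process_signals(pamp: float, danger: float, safe: float) -> dict:
    """Fuse one sampling of input signals into the DCA output signals."""
    signals = {"pamp": pamp, "danger": danger, "safe": safe}
    return {out: sum(w * signals[s] for s, w in ws.items())
            for out, ws in WEIGHTS.items()}
```

With such weights, a high safe signal suppresses the mature output (pushing the cell toward a "normal" verdict), while PAMP and danger signals raise it, mirroring the signal semantics shown in the figure.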
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.