Open Access | Feature Paper | Article

Software Defined Networks in Industrial Automation

1 Discipline of IT, College of Engineering & Science, Victoria University, Footscray Park Campus, Melbourne, VIC 3011, Australia
2 School of Science (Computer Science), RMIT University, Melbourne, VIC 3001, Australia
3 School of Engineering (Electrical and Computer Systems), RMIT University, Melbourne, VIC 3001, Australia
* Author to whom correspondence should be addressed.
J. Sens. Actuator Netw. 2018, 7(3), 33; https://doi.org/10.3390/jsan7030033
Received: 8 June 2018 / Revised: 2 August 2018 / Accepted: 2 August 2018 / Published: 6 August 2018
(This article belongs to the Special Issue Softwarization at the Network Edge for the Tactile Internet)

Abstract

Trends such as the Industrial Internet of Things and Industry 4.0 have increased the need to use new and innovative network technologies in industrial automation. The growth of industrial automation communications is an outcome of the shift to harness the productivity and efficiency of manufacturing and process automation with a minimum of human intervention. Due to the ongoing evolution of industrial networks from Fieldbus technologies to Ethernet, a new opportunity has emerged to harness the benefits of Software Defined Networking (SDN). In this paper, we provide a brief overview of SDN in the industrial automation domain and propose a network architecture called the Software Defined Industrial Automation Network (SDIAN), with the objective of improving network scalability and efficiency. To match the specific considerations and requirements of having a deterministic system in an industrial network, we propose two solutions for flow creation: the Pro-active Flow Installation Scheme and the Hybrid Flow Installation Scheme. We analytically quantify the proposed solutions that alleviate the overhead incurred from the flow setup. The analytical model is verified using Monte Carlo simulations. We also evaluate the SDIAN architecture and analyze the network performance of the modified topology using the Mininet emulator. We further list and motivate SDIAN features and report on an experimental food processing plant demonstration featuring Raspberry Pi as a software-defined controller instead of traditional proprietary Programmable Logic Controllers. Our demonstration exemplifies the characteristics of SDIAN.
Keywords: controller; industry network; OpenFlow; Software Defined Networking; Programmable Logic Controller

1. Introduction

Networking large automated machines is a recent focus for industrial automation, and one challenge is connectivity with traditional automation machinery that is not designed to support more than local computer connectivity. Industrial networks can be highly decentralized, rigid and complex to manage due to the tight coupling of the automation data and control planes that is often embedded within the equipment. The computing and communication nodes are often configured individually when the plant is set up, and interconnections remain static thereafter. The traditional industrial communications hierarchy consists of three network levels with various networking technologies and protocols that limit what can be achieved and add complexity due to localized configuration. The traditional structure requires offline manual network control and management, which is time-consuming, error-prone and complex. It hinders the ability to make live changes to the configuration and feature set as the production line shifts from one task to another. Resolving medium access control (MAC) addresses and virtual routing addresses during data forwarding in an industrial network poses further challenges, including the integration of software and devices from different vendors.
Legacy industrial communications is a challenge to be overcome as part of the fourth-generation industry revolution (FGIR). FGIR is underpinned by the principle of intelligent manufacturing (IM) enabling customized production. To ensure that a smooth transition occurs between production tasks, IM aims to reconstruct the industrial plant by decoupling the manufacturing entities. To attain optimal production, a goal of FGIR is to utilize live monitoring of machine status, environmental values and manufacturing parameters to carry out advanced management, control and fault detection. The outcomes of FGIR will assist with maintenance scheduling to reduce downtime. The future industrial network will connect a varying range of industrial machinery within one or more locations that could change over time. To facilitate FGIR, the current heterogeneous hierarchical localized network structure should be replaced with IP-based networking to provide flexible real-time communications and simplified data mapping. There is also a requirement to change the configuration of the industrial machines and production systems as the production tasks change. It is in this context that future industrial facility networks should embrace Software-Defined Networks (SDNs) to provide flexible programmatic capabilities. The research gap that this paper addresses is the introduction of SDN and IP-based networking into an industrial automation setting to provide flexibility and programmability while maintaining the features and capabilities expected for a real-time communications environment.

1.1. Software Defined Network

SDNs [1,2,3] separate the network's control logic (the control plane) from the underlying routers and switches that forward the traffic (the data plane). With the separation of the control and data planes, network switches become simple forwarding devices, and the control logic is implemented in a logically centralized controller, simplifying policy enforcement, network (re)configuration and evolution [4]. Therefore, the most promising and possibly most profitable benefit of SDNs is their potential to make the network directly programmable. SDNs became a hot topic within cloud and enterprise networks around 2010. To our knowledge, SDN solutions are new to the industrial automation domain. SDNs permit reusable configurations and designs that improve system performance. SDNs complement and build on technologies such as industrial Ethernet [5,6,7], wireless technologies [8,9], and network technologies with guaranteed timing behavior for real-time communication (e.g., [10]). SDNs can be characterized by: (1) decoupling the control plane from the data plane within network devices; (2) providing programmability for network services; (3) making forwarding decisions based on flows instead of destinations; (4) hosting control logic in an external network component called the controller or Network Operating System (NOS); and (5) running software applications on top of the NOS to interact with the underlying data plane devices. With the realization of the aforementioned characteristics of SDNs, the current “touch many, configure many” model is evolving into “touch one, configure many” [11].

1.2. Brief History of Industrial Networks

Dedicated industry networks, e.g., Fieldbus systems, dominated the early days of industrial automation. The essence of this dedicated communication infrastructure was to close the communication gap at the lower levels of the automation pyramid. However, the complexity of coupling the different communication technologies and protocols used across different communication layers was one of the fundamental motivations for adopting a unified solution. Table 1 presents the timeline of the progression of industrial automation networks alongside the disruptions arising from the evolution of computer networks. In the 2000s, Internet technologies evolved and became commercially successful, raising the possibility of disruption through the inclusion of Ethernet-based networks and IP. However, due to the lack of guaranteed real-time capabilities, Ethernet-based industry networks did not initially take hold, and dedicated industry networks continued to emerge. Later, some Ethernet-based approaches, including Powerlink, PROFINET and EtherCAT, emerged to meet low-latency requirements, in particular for motion control applications. In the early 2000s, network evolution continued with the integration of wireless networking. The IEEE 802 protocol family was aggressively adopted to realize the flexibility afforded by connecting machines and devices wirelessly. Nonetheless, the use of wireless networks in the automation industry remained limited due to the need for wired networks to provide reliable real-time communications. We have yet to see the full use of Wireless Sensor Networks (WSNs) in industrial automation even though they are now a mature technology.
Until recently, industrial communication was a mixture of Fieldbus, Ethernet and wireless solutions that has become complex and difficult to upgrade or change, and it remains a challenge to be overcome before industrial automation can take a significant step forward. Newer networking approaches include the Internet of Things (IoT) and Cyber-Physical Systems (CPS), both of which should find a place within future industrial automation solutions. The idea behind using CPS in industrial automation is to create an industrial ecosystem allowing more comprehensive and more fine-grained interconnections between machines and systems. Moving business logic into the cloud is a promising trend in the application layer of the information processing pyramid. There are two well-known reference architectures for the industrial IoT: the Reference Architecture Model for Industry 4.0 (RAMI 4.0) [12] and the Industrial Internet Reference Architecture (IIRA) [13]. RAMI 4.0 uses three dimensions, the lifecycle, the physical world and the mapping of IT-based business models, to describe the space of the fourth industrial revolution. Some of the leading industry-sector companies based in Germany initiated and are driving RAMI 4.0. The Industrial Internet Consortium, on the other hand, developed IIRA in the U.S. IIRA focuses on four viewpoints: functional, usage, business, and implementation.

1.3. SDNs in Industrial Automation

In transitioning to a software-defined network, the key challenges involve changing the traditional practices in industrial automation on the factory floor [14,15]. That means providing relevant employees with the tools and knowledge to support new, more intelligent infrastructure and systems.
Cronberger [16] and Kalman et al. [17] first considered and discussed the use of SDN in industrial automation networks. Cronberger investigated the potential of SDN through a conceptual framework, whereas Kalman et al. saw SDN as a possible evolution for future industrial Ethernet planning and for extensions towards Layer 3 networks and wireless solutions. In 2015, we first proposed the integration of SDN in industrial automation by reforming the current industry communication pyramid into a single Ethernet-based solution in a conceptual paper [14]. In [18], the authors proposed an application-aware industrial Ethernet by exploiting the capabilities of SDN in collecting topology information and application requirements. A newly developed routing and scheduling algorithm uses the collected information to generate the network configuration autonomously. This configuration is later installed in the network through north- and southbound communication, and an enhanced TDMA approach is used to facilitate real-time communication. D. Li et al. [15] proposed a single IP-based solution that can respond to dynamic changes in product orders by adaptively reconfiguring the networks. The architecture promises to guarantee real-time data transmission, enable plug-and-play, and support wireless access with seamless handover capability. In [19], the authors reviewed SDN to draw a correlation between the requirements of industrial networks and existing work. In [20], we continued the evaluation of SDN for future industrial automation networks. This work extended the Ryu controller for direct multicast routing of industrial traffic in a cyclic switched Ethernet network setup. The experiment was conducted in an IEC 61499-compliant development environment. The experimental results show a promising opportunity to have a flexible and reliable network that is also suitable for real-time traffic. Table 2 summarizes the current state of the art of SDN in industrial automation.

1.4. Contributions

The contributions of this paper can be summarized as follows:
  • We investigate the research gap that exists for IP-based networking in industrial automation and introduce a novel industrial network framework based on an SDN communication architecture.
  • We propose two solutions for flow creation that relieve the overhead incurred by the flow setup cost in SDN.
  • We render an optimal latency model based on a meticulous flow analysis using $L_1$-norm optimization to calculate the shortest path, and we verify the quantified model using a Monte Carlo simulation.
  • We validate the proposed scheme by running an experiment in an emulated environment using Mininet [22].
  • We exploit the merits of the proposed framework by presenting an ongoing test bed implementation. The investigation is conducted on a food processing demonstrator.

1.5. Paper Organization

The remainder of this paper is organized as follows. Section 2 presents the architecture, communication framework and flow creation of SDIAN. In Section 3, we examine the flow analysis and present an optimal latency model of the proposed solution. Section 4 exhibits the stochastic analysis of the model formulated in Section 3. In Section 5, the network performance of the target mesh topology is shown using a modelled emulation scenario and a report on the experimental setup in a food processing plant demonstrator is presented. Finally, Section 6 concludes the paper. This manuscript is the extended version of the paper presented in [21].

2. Architecture and Framework

In this section, we first introduce the architecture and three-layer SDIAN framework. Then we describe our proposed flow installation scheme.

2.1. System Model

In this section, we present the conceptual architecture of the proposed SDIAN and the packet dissemination model with the plant components. Figure 1 shows the remodeled version of a standard plant hierarchy that incorporates SDN features and builds an intelligent industrial automation network. In this transformed architecture, within the three hierarchical levels (Control Plane, Plant Level and Field Level), traditional proprietary Programmable Logic Controllers (PLCs) are replaced with open Raspberry Pi (RPi) systems running the Raspbian operating system, a Linux flavor, and using an open-source language for software-defined automation control. Sensors and actuators are interfaced with field-level RPis, except the direct I/Os, which are interfaced directly within the plant-level hierarchy. A script running on the RPi-based PLCs can receive interrupts from the sensors and send interrupts to the actuators through I/O pins. The scripts written for the RPis replicate the behavior of traditional PLCs. The data layer communication is illustrated using group-1 messages (1A–1E) shown in Figure 1. In this scenario, when an object is detected on the conveyor belt, RPI-PL-1 receives an interrupt and invokes the robotic arm via a reply interrupt. This interrupt is sent through the output pin. In this case, the response of the arm is to deliver the object to another conveyor belt within a limited time constraint. Likewise, group-2 messages (2A–2C) present the control layer communication. In this scenario, a remote SDIAN administrator updates control applications deployed on the controller. After receiving updates, the controller adaptively pushes the information to the associated RPis. Based on the updated instructions received, the RPis update the data plane behavior accordingly.
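The interrupt-driven behavior of such a script can be sketched in plain Python. This is a hardware-free sketch under stated assumptions: the `SoftPLC` class, the pin numbers and the callback wiring are hypothetical, and a real deployment would bind the callbacks to GPIO edge interrupts on Raspbian rather than invoking them directly.

```python
# Hardware-free sketch of the soft-PLC pattern (hypothetical pin numbers
# and class names; a real deployment would wire these callbacks to GPIO
# interrupts on the Raspberry Pi).

class SoftPLC:
    def __init__(self):
        self.handlers = {}   # input pin -> callback
        self.outputs = {}    # output pin -> last value written

    def on_input(self, pin, callback):
        """Register a callback fired when an interrupt arrives on `pin`."""
        self.handlers[pin] = callback

    def write_output(self, pin, value):
        """Drive an actuator connected to an output pin."""
        self.outputs[pin] = value

    def interrupt(self, pin):
        """Simulate a sensor interrupt (hardware edge on an input pin)."""
        if pin in self.handlers:
            self.handlers[pin](self)

# Replicate the conveyor scenario: object detected -> invoke robotic arm.
plc = SoftPLC()
OBJECT_SENSOR, ARM_TRIGGER = 17, 27          # hypothetical BCM pin numbers
plc.on_input(OBJECT_SENSOR, lambda p: p.write_output(ARM_TRIGGER, 1))

plc.interrupt(OBJECT_SENSOR)                  # object appears on the belt
```

The same registration pattern scales to several sensors per RPi, with one handler per input pin.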

2.2. SDIAN Communication Framework

Figure 2 shows the three-layer SDIAN communication framework. Sensors, actuators, and RPis reside in the data plane, while the logically centralized but physically distributed controllers reside in the control plane. RPis are responsible for receiving packets from sensors and instructing the corresponding actuators to take actions based on the respective flow retrieved from the flow table or the corresponding controller. In this framework, RPis are connected through a mesh topology. We deliberately use a mesh topology to map the requirements of the food processing plant, which is presented in Section 5. In the case of a flow table miss [23], an RPi sends a Packet-In message to the controller sitting in the control plane. After getting the Packet-In message, the controller instructs the RPi by sending a Packet-Out/Flow-Mod message. This communication between the data plane and control plane happens through the southbound interface (SBI) of the control plane. A task or application is created in the application (also called service management-control) plane, which explicitly uses the northbound interface (NBI) to translate the business use case, network requirements and behavior programmatically and logically to the controller. The users are responsible for defining the attributes of a task. Table 3 presents a summary of the different components of the SDIAN architecture.
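The table-miss round trip described above can be sketched as follows. This is a simplified, hardware-free sketch: the class and message names are hypothetical stand-ins for the OpenFlow Packet-In/Flow-Mod exchange, which in practice would be handled by a controller framework such as Ryu over the SBI.

```python
# Simplified sketch of the data/control plane exchange on a table miss
# (hypothetical names; real deployments speak OpenFlow via a controller
# framework rather than direct method calls).

class Controller:
    def __init__(self, policy):
        self.policy = policy                 # header -> action, set via the NBI

    def packet_in(self, switch, header):
        """Resolve a table miss and push the rule back (Flow-Mod)."""
        action = self.policy.get(header, "drop")
        switch.flow_mod(header, action)
        return action

class RPiSwitch:
    def __init__(self, controller):
        self.flow_table = {}
        self.controller = controller
        self.packet_ins = 0

    def flow_mod(self, header, action):
        self.flow_table[header] = action

    def forward(self, header):
        if header in self.flow_table:        # table hit: act locally
            return self.flow_table[header]
        self.packet_ins += 1                 # table miss: Packet-In to controller
        return self.controller.packet_in(self, header)

ctrl = Controller({("sensor-1", "actuator-3"): "out:port2"})
sw = RPiSwitch(ctrl)

first = sw.forward(("sensor-1", "actuator-3"))   # miss -> one Packet-In
second = sw.forward(("sensor-1", "actuator-3"))  # hit, no controller round trip
```

Only the first packet of a flow pays the controller round trip; subsequent packets match locally.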

2.3. Creating Flows

Unlike other networks, industrial networking environments have specific considerations and requirements to fabricate a deterministic system. These include real-time network performance, remote access, onsite security, reliability, ease-of-use features and manageability. These unique features, when compared to other communication environments, represent significant disparities and pose both challenges and opportunities when implementing an SDN-based industrial Ethernet infrastructure. With the inclusion of SDN, there is an inherent opportunity to resolve the reliability, manageability and ease-of-use issues that are a challenge to achieving real-time performance. Due to fundamental switch hardware attributes and software implementation inefficiencies, the latency of flow installations is higher than in traditional network installations. In the case of a flow table miss, there is a higher latency to resolve what should be done with the first packet. From the empirical study provided in [22], it was identified that the root causes of this high latency are as follows: (a) outbound latency, i.e., the latency incurred due to the installation/modification/deletion of forwarding rules; (b) inbound latency, i.e., the latency to send packet events to the controller, which can be high, in particular when the switch simultaneously processes forwarding rules received from the controller.
We provide two solutions for flow creation, from which network administrators can determine the appropriate flow mapping based on their predilection and the application requirements. In the first solution, we use the innovative idea of mixing reactive and pro-active flow installation methods. This is referred to as the Hybrid Flow Installation Scheme (HFIS). With HFIS we cater for non-real-time traffic, in other words, delay-tolerant traffic. We use two immediately deployable techniques: Flow Engineering (FE) and Rule Offload (RO). When a switch in the control-level network of a plant receives a packet from control and monitoring devices, it starts by performing a table lookup in the flow table. If a match is found with a flow table entry, it applies the action set associated with the flow as per the OpenFlow 1.3 specification [22]. In the case of a table miss, when the controller receives a Packet-In message, it first calculates the shortest route (FE) to reach the destination and then sends the respective Packet-Out/Flow-Mod messages to all switches across this route (RO). Therefore, the packet transmission latency is increased by only one inbound and one outbound event irrespective of the number of relay nodes the packet traverses before it reaches the destination.
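The FE + RO combination can be sketched on a small unit-weight mesh. The topology, flow names and action strings below are hypothetical; the sketch only illustrates that a single Packet-In triggers rule installation along the entire shortest path, so no downstream switch sees a miss.

```python
# Sketch of HFIS on a unit-weight mesh (assumed topology and names): on the
# first table miss the controller computes the shortest path (FE) and
# offloads the rule to every switch along it (RO).

from collections import deque

def shortest_path(links, src, dst):
    """BFS shortest path on an undirected unit-weight graph."""
    prev, seen = {}, {src}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            break
        for v in links.get(u, ()):
            if v not in seen:
                seen.add(v); prev[v] = u; q.append(v)
    path, node = [dst], dst
    while node != src:
        node = prev[node]; path.append(node)
    return path[::-1]

def hfis_install(links, tables, header, src, dst):
    """Flow Engineering + Rule Offload for one Packet-In."""
    path = shortest_path(links, src, dst)
    for i, sw in enumerate(path[:-1]):        # RO: push the rule to each hop
        tables[sw][header] = f"out:{path[i + 1]}"
    return path

mesh = {"s1": ["s2", "s3"], "s2": ["s1", "s4"],
        "s3": ["s1", "s4"], "s4": ["s2", "s3"]}
tables = {sw: {} for sw in mesh}
route = hfis_install(mesh, tables, "flow-A", "s1", "s4")
```

Switches off the chosen route (here s3) receive no rule, keeping their tables small.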
The precise synchronization of processes underpins today’s manufacturing industry; therefore, the network must be enhanced to ensure consistent real-time performance in transporting deterministic delay-sensitive traffic. Data must be prioritized based on QoS parameters to ensure that critical information is received first. To tackle this problem, in the second solution, we propose a Pro-active Flow Installation Scheme (PFIS) catering for delay-sensitive traffic by providing precise synchronization. In this case, we adopt the direct RO method. The controller sends the flow installation packet for all pre-determined critical delay-sensitive traffic to the switches immediately after switch discovery. This pre-installation happens during the convergence of the network. For further clarification, we present the SDIAN packet dissemination model in Figure 3 and Figure 4. In Figure 3, the packet exchange is classified into two categories: Non-Real-Time (NRT) communication and Real-Time (RT) communication. We apply HFIS for NRT and PFIS for RT. Figure 4 illustrates the working mechanism of HFIS. Please note that in the test bed implementation the data channel and control channel are separate, but for simplicity of drawing this is not portrayed in Figure 4. As shown in Figure 4a, the switch S1 receives a data packet from a field-level device. For this packet, there is a table miss; therefore, the switch sends a control packet (Packet-In) request to the controller. Based on the header information, the controller determines the shortest path for this packet and responds with Packet-Out to all the intermediate switches along this path (Figure 4b). Therefore, as shown in Figure 4c, there is no further table miss, as all the intermediate switches along the path pre-install the flow into the flow table before the packet arrives.

3. Flow Analysis

In this section, we first illustrate the basic notation used to represent the data layer of the control network of a plant. Since the control channel is separated from the data channel, we kept the graph representation of the control channel out of the scope of this paper and assumed that each switch could reach the controller in single hop fashion using a secured and fast directly connected control channel. Now, we formulate the shortest path routing as the flow optimization problems in a network that is realized by the controller based on the discovered topology. Finally, we compute the model for determining optimal latency to reach the destination.

3.1. Data Layer: Basic Notations

We represent our n-node data plane of the control network of a plant by an undirected graph $G = (S, L, X)$, where $S = \{s_1, s_2, \ldots, s_n\}$ is the set of switches, $L$ is the set of links, and $X$ is an $n \times n$ matrix defined by $\{x_{ij} \mid (i, j) \in L\}$, where each $(i, j)$-th entry, denoted by $x_{ij}$, represents the positive weight of a link $(i, j) \in L$. Due to the undirected nature of the graph, $(i, j)$ and $(j, i)$ designate the same link, i.e., $x_{ij} = x_{ji}$. When $(i, j) \notin L$, we set $x_{ij} = 0$, which makes the weight matrix $X = [x_{ij}]$ symmetric. We also define $X$ as a 0-1 matrix, i.e., all links have unit weight; therefore, $G$ is a simple graph and $X$ is its adjacency matrix.
Let $d = (s_1, s_n)$, $s_1, s_n \in S$, denote the source-destination switch pair in the network $G$, and let the function $F^d : S \times S \to \mathbb{R}^+$ define the amount of traffic ($f(d)$ units) that traverses from $s_1$ (source) to $s_n$ (destination), subject to the following constraints:
(1) along network links:
$$\text{if } (i, j) \notin L \text{ then } F_{ij}^d = 0$$
(2) along one direction:
$$\text{if } F_{ij}^d > 0 \text{ then } F_{ji}^d = 0$$
(3) at source $s_1$:
$$f(d) + \sum_{k=1}^{n} F_{k s_1}^d = \sum_{j=1}^{n} F_{s_1 j}^d$$
(4) at relay node $i \neq s_1, s_n$:
$$\sum_{j=1}^{n} F_{ij}^d = \sum_{k=1}^{n} F_{ki}^d$$
(5) at destination $s_n$:
$$\sum_{k=1}^{n} F_{k s_n}^d = \sum_{j=1}^{n} F_{s_n j}^d + f(d)$$
The constraint in Equation (1) ensures that for each link $(i, j) \notin L$, $F_{ij}^d = 0$. In particular, for each undirected link $(i, j) \in L$, the constraint in Equation (2) states that if $F_{ij}^d > 0$ then $F_{ji}^d = 0$, and if $F_{ji}^d > 0$ then $F_{ij}^d = 0$. The traffic constraints defined in Equations (3)–(5) state that the $f(d)$ units of traffic sent by source $s_1$ are received in full by destination $s_n$, and that the amount of traffic entering and leaving a relay switch is the same.
Considering a set of intermediate or relay switches $S_{F(d)} \subseteq S$ and a corresponding subset of links $L_{F(d)} \subseteq L$ that carry the given $f(d)$ units of traffic from source $s_1$ to destination $s_n$, we induce a directed (or oriented) sub-graph of $G$, $G_{F(d)} = (S_{F(d)}, L_{F(d)})$. $G_{F(d)}$ is a directed acyclic graph (DAG, which we refer to as a routing graph) that routes the traffic from source $s_1$ to destination $s_n$. The traffic may split or merge across the nodes of $G_{F(d)}$ to travel across multiple paths. We define $\mathcal{F}^d$ to refer to the collection of flows, in other words, all functions that satisfy the constraints in Equations (1)–(5).
In the following subsection, we derive the shortest path routing strategy by minimizing the $L_1$-norm of traffic between a given source-destination pair. We build this model on two well-known results [24,25] presented in [26].

Shortest Path Routing (L1-Norm Optimization)

For simplicity and clarity of notation, we assume that $f(d) = 1$, that $F_{ij}$ equivalently specifies the traffic function $F^d$, and that $s_1 = 1$ and $s_n = n$. Therefore, we define the following $L_1$-norm ($L_1$ Primal) flow optimization problem, which can be solved using linear programming (LP):
$$\min_{F^d} \sum_{i=1}^{n} \sum_{j=1}^{n} x_{ij} F_{ij} \quad \text{s.t. } (1){-}(5) \tag{6}$$
To comply with the constraints specified in Equations (1)–(5), Equation (6) can be stated more specifically as
$$\sum_{j : (i,j) \in L} F_{ij} - \sum_{k : (k,i) \in L} F_{ki} = \begin{cases} 1 & \text{if } i = s_1 \\ 0 & \text{if } i \neq s_1, s_n \\ -1 & \text{if } i = s_n \end{cases} \tag{7}$$
where $F_{ij} \geq 0$ and $1 \leq i, j \leq n$.
Hence, the optimization problem presented in (6) minimizes the weighted $L_1$-norm. Based on the flow conservation constraints presented in Equation (7), we consider the dual ($L_1$ Dual) of (6) in terms of Lagrange multipliers ($U_i$’s) to find the shortest path routing:
$$\max_{U} U_1 \tag{8}$$
subject to
$$U_n = 0 \quad \text{and} \quad U_i - U_j \leq x_{ij}, \; \forall (i, j) \in L \tag{9}$$
Assuming $F^*$ and $U^*$ refer to the optimal traffic solutions for the primal and dual problems respectively, we derive the following relations between the $F_{ij}^*$’s and $U_i^*$’s:
$$\text{if } F_{ij}^* > 0, \text{ then } U_i^* - U_j^* = x_{ij} \tag{10}$$
and
$$\text{if } F_{ij}^* = 0, \text{ then } U_i^* - U_j^* < x_{ij} \tag{11}$$
Based on these relations, we can define the following properties of the optimal solution ($U_i^*$’s) of the dual problem.
Lemma 1. 
Let $P_1$ and $P_2$ (an alternative to $P_1$) be two different paths from source ($s_1$) to destination ($s_n$) to carry the traffic. If for each link $(i, j) \in P_1$, $U_i^* - U_j^* < x_{ij}$, then $P_1$ is not the shortest path and $U_{s_1}^* < \sum_{(i,j) \in P_1} x_{ij}$. On the other hand, the alternative path $P_2$ is a shortest path if for each link $(i, j) \in P_2$, $U_i^* - U_j^* = x_{ij}$ and $U_{s_1}^* = \sum_{(i,j) \in P_2} x_{ij}$.
It is evident from the above Lemma that for any switch $S_i$ on a shortest path, $U_{S_i}^*$ is the shortest path distance from the switch $S_i$ to the destination $S_n$. All switches on the path, including the source and destination, are elements of $S_{F(d)}^*$ and form the shortest routing graph $G_{F(d)}^*$.
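This dual characterization can be checked numerically: taking $U_i^*$ as the shortest-path distance from switch $i$ to the destination, links on a shortest path satisfy Equation (10) with equality while off-path links are slack. The following sketch uses a hypothetical four-switch weight matrix; it illustrates the lemma and is not part of the proposed system.

```python
# Numerical check of the L1-dual characterization on a small assumed graph:
# U[i] is the shortest distance from switch i to the destination, computed
# by Dijkstra's algorithm run from the destination.

import heapq

def dual_potentials(weights, dst):
    """Dijkstra from the destination: U[i] = shortest distance i -> dst."""
    U = {dst: 0}
    heap = [(0, dst)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > U.get(u, float("inf")):
            continue
        for v, w in weights.get(u, {}).items():
            if d + w < U.get(v, float("inf")):
                U[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return U

# Symmetric weight matrix x_ij of a 4-switch network (hypothetical values).
x = {1: {2: 1, 3: 4}, 2: {1: 1, 4: 1}, 3: {1: 4, 4: 1}, 4: {2: 1, 3: 1}}
U = dual_potentials(x, dst=4)

# "Tight" links satisfy U_i - U_j = x_ij (Equation (10)); they lie on
# shortest paths towards the destination, all other links are slack.
tight = [(i, j) for i in x for j in x[i] if U[i] - U[j] == x[i][j]]
```

Here the tight links trace the shortest routing graph towards switch 4, and $U_1$ equals the total weight along the path 1-2-4.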

3.2. Optimal Latency Model: Hybrid

In this subsection, we derive the optimal latency model for HFIS. In HFIS, a packet traverses across the shortest path to reach the destination switch.
Let $\alpha$ denote the total latency for a packet to travel from source $S_1$ to destination $S_n$, $\alpha_{in}$ the inbound latency, $\alpha_{ou}$ the outbound latency, and $\alpha_p^{S_k}$ the single-hop propagation delay of a packet travelling from $S_k$ to $S_{k+1}$. Let $\gamma$ be the average time taken by a controller to process a Packet-In message and $\beta$ the control channel latency, i.e., the time taken by a Packet-In/Packet-Out message to travel between a switch and a controller. To this end, our target is to minimize the value of $\alpha$, and therefore the latency optimization model can be stated as:
$$\min_{\alpha} \left\{ \alpha_{in} + \alpha_{ou} + \sum_{k=1}^{m} \alpha_p^{S_k} + 2\beta + \gamma \right\} \tag{12}$$
where $m$ is the number of hops, i.e., the total number of switches in the shortest routing graph $G_{F(d)}^*$, and $S_k \in S_{F(d)}^*$, where $S_{F(d)}^* = \{S_i, S_{i+1}, \ldots, S_{i+m}\}$.
According to HFIS, only the first switch generates the packet event to the controller; then all switches along the path, including the first switch, receive and install the flow instruction. Therefore, there is only one inbound, one outbound, two control channel and one Packet-In resolution latency. Considering a consistent and deterministic link state and performance among all switches, we assume $\alpha_p^{S_k} \approx \alpha_p^{S_{k+1}} \approx \alpha_p^{S_{k+2}} \approx \cdots \approx \alpha_p^{S_{k+m}} \approx \alpha_p$, where $\alpha_p$ is the average propagation delay; therefore, we can rewrite Equation (12) as follows:
$$\min_{\alpha} \left\{ \alpha_{in} + \alpha_{ou} + m \alpha_p + 2\beta + \gamma \right\} \tag{13}$$
Lemma 2. 
During the lifetime of a packet, if it traverses the shortest path, then the latency $\alpha \propto \alpha_p$, i.e., $\alpha = m \alpha_p + K$, where $K = \alpha_{in} + \alpha_{ou} + 2\beta + \gamma$.
Lemma 2 asserts that in the entire journey of a packet, there is no more than one table miss regardless of the number of hops across the path. Therefore, one table miss generates only one Packet-In event, incurring a single inbound ($\alpha_{in}$) and outbound ($\alpha_{ou}$) latency with the associated control channel ($\beta$) and Packet-In processing time by the controller ($\gamma$).
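A worked instance of Equation (13), with illustrative values in milliseconds loosely based on the distribution means reported in Section 4, makes Lemma 2 concrete: the setup cost $K$ is paid once, so only the propagation term grows with the hop count $m$.

```python
# Worked instance of Equation (13) (illustrative numbers, in ms).

def hfis_latency(m, a_in=1.853, a_ou=1.09, a_p=0.1, beta=0.1, gamma=0.00549):
    """Total HFIS latency: alpha = m * a_p + K, with K the one-off setup cost."""
    K = a_in + a_ou + 2 * beta + gamma
    return m * a_p + K, K

alpha_5, K_5 = hfis_latency(m=5)      # short path
alpha_50, K_50 = hfis_latency(m=50)   # ten times the hops
```

Here `K_5 == K_50`, and the latency difference `alpha_50 - alpha_5` is exactly `45 * a_p`, as Lemma 2 predicts.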

3.3. Optimal Latency Model: Pro-Active

According to the second solution, the controller pro-actively offloads the rule to all switches immediately after the deployment of an application. A network administrator deploys an application through the application plane. The application plane creates a particular flow and sends it to the controller in the control plane through the northbound interface. The controller then floods the flow across all the switches within the respective domain. The values of the associated Idle timeout and Hard timeout [23] are in this case set to zero, i.e., the flow entry is considered permanent and does not time out unless it is removed with a flow table modification message of type OFPFC_DELETE [23]. When a switch receives a packet of this kind, the switch gets an obvious table match and therefore applies the action accordingly. This pre-offloading of flows entirely eliminates control channel communication during the lifetime of a packet in the data plane.
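The pre-offloading step can be sketched with simplified structures (the classes and flow names below are hypothetical; a real controller would issue OpenFlow Flow-Mod messages with both timeouts set to zero):

```python
# Sketch of PFIS pre-offloading: the controller floods a permanent rule
# (idle/hard timeout 0) to every switch in the domain at deployment time,
# so no data-plane packet ever misses.

from dataclasses import dataclass, field

@dataclass
class FlowEntry:
    action: str
    idle_timeout: int = 0    # 0 => entry never expires (permanent)
    hard_timeout: int = 0

@dataclass
class Switch:
    name: str
    table: dict = field(default_factory=dict)

def pfis_deploy(switches, header, action):
    """Flood a permanent flow entry to all switches in the domain."""
    entry = FlowEntry(action)
    for sw in switches:
        sw.table[header] = entry

domain = [Switch(f"s{i}") for i in range(1, 5)]
pfis_deploy(domain, header="critical-RT", action="out:port1")

# Every switch now matches the critical flow locally: zero table misses.
misses = sum(1 for sw in domain if "critical-RT" not in sw.table)
```

The permanent entries persist until explicitly deleted, matching the OFPFC_DELETE semantics described above.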
Lemma 3. 
With PFIS, if a packet travels across the shortest path, then the latency is calculated as $\alpha \propto \alpha_p$, i.e., $\alpha = m \alpha_p + K$, where $K = \alpha_{in} + \alpha_{ou} + 2\beta + \gamma \approx 0$.
Lemma 3 asserts that in the entire journey of a packet, there is no table miss regardless of the number of hops across the path. Therefore, there is no Packet-In event, i.e., the inbound ($\alpha_{in}$) and outbound ($\alpha_{ou}$) latency with the associated control channel ($\beta$) and Packet-In processing time by the controller ($\gamma$) are equivalent to zero.

4. Stochastic Analysis of SDIAN

To validate the analytical approach presented in Section 3, we perform an extensive Monte Carlo simulation with 10,000 runs. In each run, we use a randomized distribution of the inbound ($\alpha_{in}$) and outbound ($\alpha_{ou}$) flow latency, control channel latency ($\beta$), data channel latency ($\alpha_p$) and packet processing time ($\gamma$) of a controller. The distributions of $\alpha_{in}$ and $\alpha_{ou}$ are derived from the outcome of a comprehensive measurement study [27,28] conducted using four types of production SDN switches. The distribution of the inbound latency is a Chi-squared distribution with a mean of 1.853 ms, a median of 0.71 ms and a standard deviation of 6.71 ms. The outbound delay is less variable and less skewed, with the same mean and median of 1.09 ms and a standard deviation of 0.18 ms. Assuming the simulation is run with a ten (10) switch control network for a single small-scale plant, the number of hops ($m$) in the shortest path calculation varies between 1 and 10, following a normal distribution with a mean of six (6) and a standard deviation of three (3). The Round Trip Time (RTT) between two switches ($\alpha_p$) and between controller and switch ($\beta$) is negligible ($\approx 0.1$ ms). The influx rate of Packet-In messages from switches to the controller determines the time ($\gamma$) taken by a controller to process a packet, and therefore the distribution of $\gamma$ is a normal distribution with a mean of 5.49 μs and a standard deviation of 2.86 μs.
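A hedged re-implementation of this Monte Carlo setup is sketched below (times in ms). The distribution shapes are approximations of those quoted above: the skewed inbound delay is modelled as an exponential with the same mean rather than the exact Chi-squared fit, and the RFIS worst case reuses one sampled setup cost per hop for brevity.

```python
# Approximate re-run of the Monte Carlo comparison (all times in ms;
# distribution shapes approximated from the parameters quoted above).

import random

random.seed(1)

def sample_latency(scheme, runs=10000):
    total = 0.0
    for _ in range(runs):
        a_in = random.gammavariate(1.0, 1.853)          # skewed inbound delay
        a_ou = random.gauss(1.09, 0.18)                 # outbound delay
        beta, a_p = 0.1, 0.1                            # negligible RTTs
        gamma = random.gauss(5.49e-3, 2.86e-3)          # Packet-In processing
        m = min(10, max(1, round(random.gauss(6, 3))))  # hops on the path
        K = a_in + a_ou + 2 * beta + gamma              # flow setup cost
        if scheme == "PFIS":
            total += m * a_p                            # no table miss at all
        elif scheme == "HFIS":
            total += m * a_p + K                        # one miss per packet
        else:                                           # RFIS: miss at every hop
            total += m * (a_p + K)
    return total / runs

means = {s: sample_latency(s) for s in ("PFIS", "HFIS", "RFIS")}
```

With these approximations the seeded run reproduces the qualitative ordering PFIS < HFIS < RFIS reported in Table 4.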
Figure 5a–c respectively show histograms of the Monte Carlo simulation results for the three flow installation schemes: hybrid, reactive and pro-active. The bin size in Figure 5a,b is 5 ms, whereas in Figure 5c it is 0.035 ms. The advantage of HFIS and PFIS over the Reactive Flow Installation Scheme (RFIS) is clearly discernible: 95% of packets are resolved within 3.28 ms using the HFIS and within 0.19 ms using the PFIS. Table 4 presents summary statistics of the simulation results shown in Figure 5.
To see the implication of Lemma 2, we repeat the simulation with the number of switches varied over 5 ≤ S ≤ 100. After performing all the Monte Carlo simulation runs, we average the results obtained for each value of S, as shown in Figure 6. Figure 6a shows that with HFIS, the total flow installation latency (K = α_in + α_out + 2 × β + γ) is constant irrespective of the network size; the total latency (α) is therefore directly proportional to m × α_p. Figure 6b presents the latency for RFIS. In the worst-case scenario for RFIS, every switch on a route can incur a flow table miss with the associated flow setup cost, so the total latency is dominated by the flow installation overhead (α ≈ K, with m × α_p ≪ K). The PFIS latency results are shown in Figure 6c. Since the respective flow is installed across all switches before the arrival of any data packet, there is no table miss; as in the HFIS case, α is directly proportional to m × α_p, with α ≈ m × α_p and K ≈ 0.

Discussion

The results presented in Figure 5, Figure 6 and Table 4 highlight that the PFIS confers the lowest latency, as the overhead from flow establishment is, in fact, close to zero. For the HFIS, the flow setup cost is constant regardless of the network size. The lower and upper limits of the 95% Confidence Interval (CI) for HFIS in a network of ten (10) switches are 3.02 ms and 3.28 ms respectively, indicating a stable, deterministic condition. For consistent RT performance in transporting deterministic delay-sensitive traffic we can apply PFIS, while HFIS can provision the rest of the traffic, sustaining the dynamic behavior of the SDN network. In a nutshell, the 95% latency bounds (Q(0.025)–Q(0.975)) for PFIS, HFIS and RFIS are 0.03–0.47 ms, 1.08–7.84 ms and 1.22–35.61 ms respectively.

5. Experiments

In this section, we first present the network performance of the target mesh topology in a modelled emulation scenario, and then report on an experimental setup with an adaptive configuration in a food processing plant demonstrator.

5.1. Emulation Environment

For further validation of our proposed schemes, we run another experiment in an emulated environment using Mininet. Although the accuracy of Mininet cannot be taken for granted, particularly for large-scale topologies, it is widely adopted by the SDN community. In our case, we are essentially interested in the expediency of our proposed solution before investigating it, with limited functionality, in a real testbed. We therefore deploy a small mesh network of five (5) switches and a Ryu controller [29], as shown in Figure 7. The Ryu controller is tailored to incorporate the three flow installation schemes, and the Spanning Tree Protocol (STP) is implemented to discard any possibility of creating a loop. We generate the plant-level network packets from OpenFlow switch #2 (source) to OpenFlow switch #5 (sink) and vice versa, vary the rate of packets generated from source to sink, and measure the latency and success rate for the three flow installation schemes. The results are presented in Figure 8 and Figure 9. Figure 8 shows the latency for each flow installation scheme against a varying number of packets generated per second. The latency of RFIS increases linearly with the number of packets, while PFIS and HFIS show a similar pattern to each other. The latency bounds of PFIS and HFIS are 1–3 ms and 3–7 ms respectively; therefore, for this setup, the guaranteed delay is <7 ms. Figure 9 shows the success rate of the three flow installation schemes against a varying packet rate. The success rates for PFIS and HFIS vary from 98–99% and 97–99% respectively.
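The scheme-specific controller behavior exercised in this experiment can be illustrated with a toy shortest-path rule-offload routine in plain Python. This is not the actual Ryu application: the mesh adjacency, switch numbering and flow identifiers below are invented, and only loosely mirror the topology of Figure 7.

```python
from collections import deque

# Toy 5-switch mesh (invented adjacency, not the exact Figure 7 topology).
MESH = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2, 5], 4: [2, 5], 5: [3, 4]}

def shortest_path(src, dst):
    """BFS shortest path between two switches."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nbr in MESH[path[-1]]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(path + [nbr])
    return None

def install_flow(flow_tables, flow, scheme, src, dst):
    """Install a flow under one scheme; return the number of Packet-In events.

    PFIS pre-provisions every switch, so no packet ever misses; HFIS reacts
    to the first miss but then offloads rules along the whole path; RFIS
    raises one Packet-In at every switch that misses.
    """
    if scheme == "pfis":
        for sw in MESH:
            flow_tables[sw].add(flow)
        return 0
    path = shortest_path(src, dst)
    misses = sum(flow not in flow_tables[sw] for sw in path)
    if scheme == "hfis":
        for sw in path:            # Flow Engineering + Rule Offload
            flow_tables[sw].add(flow)
        return 1 if misses else 0  # single Packet-In at the ingress switch
    for sw in path:                # rfis: each switch installs reactively
        flow_tables[sw].add(flow)
    return misses

tables = {sw: set() for sw in MESH}
assert install_flow(tables, "h2->h5", "hfis", 2, 5) == 1  # first packet: one miss
assert install_flow(tables, "h2->h5", "hfis", 2, 5) == 0  # later packets never miss
```

The per-packet Packet-In counts (0 for PFIS, at most 1 for HFIS, up to the path length for RFIS) are exactly what drives the latency separation seen in Figure 8.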
The results show that the HFIS retains consistently low latency and a high success rate while maintaining the flexibility and dynamic behavior of SDN.

5.2. Test Bed Implementation

The demonstrator bottling plant comprises sensors and actuators such as conveyor belts, physical and vacuum grippers, robots and a turning table. We designed and implemented the test bed experiment to study the performance of the proposed SDIAN model. To do so, we transformed some parts of the demonstrator plant to be controlled by RPis, while other parts rely on classical PLC solutions and vendor-specific robot controllers. The portion controlled by the RPis includes a conveyor belt carrying bottle caps, sensors to detect when a cap arrives, and a robotic arm as an actuator that picks the cap and places it in the designated location. The behavior of the sensors and actuators is determined by the controller, and the corresponding script is pushed to the RPi. In this experiment, we replaced two of the traditional PLCs with RPi-based PLCs to control a small set of sensors/actuators mounted on the Festo plant demonstrator. As shown in Figure 10, we interface two RPi-based PLCs (RPI-1 and RPI-2) with one of the gear boxes of the food demonstrator plant to obtain connectivity with a set of sensors and actuators. The two RPi-based PLCs are connected to a controller through a control channel. We use a Python script to read, write and process signals from/to the I/O pins of the RPi; the script replicates the standard behavior of a traditional PLC. We deploy a controller application in the controller to facilitate flow control communication between the controller and the RPi-based PLCs.
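The RPi script follows the classic PLC scan cycle (read inputs, evaluate logic, write outputs). A hardware-free sketch of that loop, with the GPIO layer mocked out, might look as follows; the pin numbers and the trivial pick logic are invented for illustration and are not the plant's actual wiring.

```python
class MockGPIO:
    """Stand-in for the RPi GPIO interface so the scan cycle runs anywhere."""

    def __init__(self):
        self.inputs = {17: 0}   # pin 17: cap-arrival sensor (invented pin)
        self.outputs = {27: 0}  # pin 27: robot-arm trigger (invented pin)

    def read(self, pin):
        return self.inputs[pin]

    def write(self, pin, value):
        self.outputs[pin] = value

def scan_cycle(gpio, sensor_pin=17, actuator_pin=27):
    """One PLC-style scan: read inputs, evaluate logic, write outputs."""
    cap_detected = gpio.read(sensor_pin)
    gpio.write(actuator_pin, 1 if cap_detected else 0)  # trigger the pick
    return cap_detected

gpio = MockGPIO()
gpio.inputs[17] = 1            # simulate a cap arriving on the belt
assert scan_cycle(gpio) == 1
assert gpio.outputs[27] == 1   # robot arm was triggered
```

A real deployment would replace `MockGPIO` with actual GPIO calls on the RPi and run `scan_cycle` in a loop with a fixed scan period.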
Figure 11 presents a collage of snapshots of our test bed setup, briefly demonstrating the different stages of the experiment. Clockwise from top left: a Python script running on an RPi-based PLC replicates a traditional PLC; the RPis are interfaced with sensors/actuators through the gear box; a robotic arm picks the desired object based on the instruction received from the corresponding RPi; and the object is placed on a designated conveyor belt.
The supplementary video clip demonstrates that the transformed architecture is working in a small-scale testbed experiment.

6. Conclusions

In this paper, we have explained the characteristics of SDN in the context of industrial automation. We highlighted the design of two flow installation schemes to precisely synchronize industrial automation processes, and presented the potential benefits and opportunities of SDN. Furthermore, we presented our architectural model that utilizes SDN and brought it into the context of an ongoing demonstrator project. Future work comprises the use of our demonstrator in current industry and academic projects. We are addressing both the challenges of industrial automation hardware and the integration of SDN into communications utilizing software-configurable devices.
Limitations of the demonstrator constrain the evaluation of the proposed framework at this stage; however, the results obtained support the approach and future work. For simplicity, we limit our work to wired network technologies, although the approach could evidently be extended to integrate wireless networks (e.g., sensor networks) with the wired network to achieve a unified architecture. The inclusion of wireless networks will introduce challenges, including seamless integration between the controllers across the wired and wireless domains. We have also limited our scope to one plant; the validation of using multiple controllers across multiple plants, and of the east-west communication between them, is therefore left unexplored and identified as future work. In the proposed framework, each RPi-based PLC is also used as an SDN switch; in future we may consider lightweight SDN switches such as the Zodiac FX, which could reduce the chance of bottlenecks at the RPis and clearly separate the forwarding devices from the underlying field-level sensors and actuators.

Supplementary Materials

The following are available online at https://www.mdpi.com/2224-2708/7/3/33/s1. Video S1: The supplementary video clip of the demonstrator.

Author Contributions

K.A. was responsible for conceptualization, data curation, investigation, methodology, validation, visualization, and writing the original draft. M.G. reviewed and edited the manuscript. J.O.B. and H.S. participated in conceptualization and reviewed the final manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors wish to thank the Virtual Experiences Laboratory (VXLab), RMIT University, which provided smooth access to the Festo-based food processing plant demonstrator for the test bed implementation. This work is also supported in part by the Australia-India Research Centre for Automation Software Engineering (AICAUSE), RMIT University.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xia, W.; Wen, Y.; Foh, C.H.; Niyato, D.; Xie, H. A Survey on Software-Defined Networking. IEEE Commun. Surv. Tutor. 2015, 17, 27–51.
  2. Huang, T.; Yu, F.R.; Zhang, C.; Liu, J.; Zhang, J.; Liu, J. A Survey on Large-scale Software Defined Networking (SDN) Testbeds: Approaches and Challenges. IEEE Commun. Surv. Tutor. 2016, 19.
  3. Hu, F.; Hao, Q.; Bao, K. A Survey on Software-Defined Network and OpenFlow: From Concept to Implementation. IEEE Commun. Surv. Tutor. 2014, 16, 2181–2206.
  4. Raghavan, B.; Casado, M.; Koponen, T.; Ratnasamy, S.; Ghodsi, A.; Shenker, S. Software-defined internet architecture: Decoupling architecture from infrastructure. In Proceedings of the 11th ACM Workshop on Hot Topics in Networks, Redmond, WA, USA, 29–30 October 2012.
  5. Skeie, T.; Johannessen, S.; Holmeide, O. Timeliness of real-time IP communication in switched industrial Ethernet networks. IEEE Trans. Ind. Inform. 2006, 2, 25–39.
  6. Decotignie, J.D. Ethernet-Based Real-Time and Industrial Communications. Proc. IEEE 2005, 93, 1102–1117.
  7. Rojas, C.; Morell, P.; Sales, D.E. Guidelines for Industrial Ethernet infrastructure implementation: A control engineer's guide. In Proceedings of the 2010 IEEE-IAS/PCA 52nd Cement Industry Technical Conference, Colorado Springs, CO, USA, 28 March–1 April 2010; pp. 1–18.
  8. Gungor, V.C.; Hancke, G.P. Industrial Wireless Sensor Networks: Challenges, Design Principles, and Technical Approaches. IEEE Trans. Ind. Electron. 2009, 56, 4258–4265.
  9. Hou, L.; Bergmann, N.W. Novel Industrial Wireless Sensor Networks for Machine Condition Monitoring and Fault Diagnosis. IEEE Trans. Instrum. Meas. 2012, 61, 2787–2798.
  10. Kopetz, H.; Ademaj, A.; Grillinger, P.; Steinhammer, K. The time-triggered Ethernet (TTE) design. In Proceedings of the Eighth IEEE International Symposium on Object-Oriented Real-Time Distributed Computing (ISORC'05), Seattle, WA, USA, 18–20 May 2005; pp. 22–33.
  11. Cronberger, D. Software Defined Networks. 2015. Available online: http://www.industrial-ip.org/en/industrial-ip/convergence/software-defined-networks (accessed on 20 July 2018).
  12. RAMI 4.0. Available online: https://www.zvei.org/en/subjects/industry-4-0/thereference-architectural-model-rami-40-and-the-industrie-40-component/ (accessed on 20 July 2018).
  13. IIRA. Available online: http://www.iiconsortium.org/ (accessed on 20 July 2018).
  14. Ahmed, K.; Blech, J.O.; Gregory, M.A.; Schmidt, H. Software Defined Networking for Communication and Control of Cyber-Physical Systems. In Proceedings of the 2015 IEEE 21st International Conference on Parallel and Distributed Systems (ICPADS), Melbourne, VIC, Australia, 14–17 December 2015; pp. 803–808.
  15. Li, D.; Zhou, M.T.; Zeng, P.; Yang, M.; Zhang, Y.; Yu, H. Green and reliable software-defined industrial networks. IEEE Commun. Mag. 2016, 54, 30–37.
  16. Cronberger, D. The software defined industrial network. Ind. Ethernet Book 2014, 84, 8–13.
  17. Kalman, G.; Orfanus, D.; Hussain, R. Overview and future of switching solutions for industrial Ethernet. Int. J. Adv. Netw. Serv. 2014, 7, 206–215.
  18. Schweissguth, E.; Danielis, P.; Niemann, C.; Timmermann, D. Application-aware Industrial Ethernet Based on an SDN-supported TDMA Approach. In Proceedings of the 2016 IEEE World Conference on Factory Communication Systems (WFCS), Aveiro, Portugal, 3–6 May 2016.
  19. Henneke, D.; Wisniewski, L.; Jasperneite, J. Analysis of realizing a future industrial network by means of Software-Defined Networking (SDN). In Proceedings of the 2016 IEEE World Conference on Factory Communication Systems (WFCS), Aveiro, Portugal, 3–6 May 2016.
  20. Schneider, B.; Zoitl, A.; Wenger, M.; Blech, J.O. Evaluating Software-Defined Networking for Deterministic Communication in Distributed Industrial Automation Systems. In Proceedings of the 2017 22nd IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Limassol, Cyprus, 12–15 September 2017.
  21. Ahmed, K.; Nafi, N.S.; Blech, J.O.; Gregory, M.A.; Schmidt, H. Software defined industry automation networks. In Proceedings of the 2017 27th International Telecommunication Networks and Applications Conference (ITNAC), Melbourne, VIC, Australia, 22–24 November 2017; pp. 1–3.
  22. Open Networking Foundation. Software-Defined Networking: The New Norm for Networks. Available online: https://www.opennetworking.org/images/stories/downloads/sdn-resources/white-papers/wp-sdn-newnorm.pdf (accessed on 5 March 2018).
  23. Open Networking Foundation. OpenFlow. 2016. Available online: https://www.opennetworking.org/sdn-resources/openflow (accessed on 10 April 2018).
  24. Kelly, F.P. Network routing. Philos. Trans. R. Soc. Lond. A Math. Phys. Eng. Sci. 1991, 337, 343–367.
  25. Wang, Y.; Wang, Z.; Zhang, L. Internet traffic engineering without full mesh overlaying. In Proceedings of the INFOCOM 2001, Twentieth Annual Joint Conference of the IEEE Computer and Communications Societies, Anchorage, AK, USA, 22–26 April 2001; Volume 1, pp. 565–571.
  26. Li, Y.; Zhang, Z.L.; Boley, D. From Shortest-Path to All-Path: The Routing Continuum Theory and Its Applications. IEEE Trans. Parallel Distrib. Syst. 2014, 25, 1745–1755.
  27. He, K.; Khalid, J.; Gember-Jacobson, A.; Das, S.; Prakash, C.; Akella, A.; Li, L.E.; Thottan, M. Measuring control plane latency in SDN-enabled switches. In Proceedings of the 1st ACM SIGCOMM Symposium on Software Defined Networking Research, Santa Clara, CA, USA, 17–18 June 2015.
  28. Blenk, A.; Basta, A.; Zerwas, J.; Reisslein, M.; Kellerer, W. Control Plane Latency With SDN Network Hypervisors: The Cost of Virtualization. IEEE Trans. Netw. Serv. Manag. 2016, 13, 366–380.
  29. Ryu SDN Framework Community. Ryu SDN Framework. 2014. Available online: https://osrg.github.io/ryu/ (accessed on 10 April 2018).
Figure 1. SDIAN (Software Defined Industrial Automation Network) conceptual architecture.
Figure 2. SDIAN communication framework.
Figure 3. SDIAN packet dissemination model.
Figure 4. Working mechanism of HFIS (Hybrid Flow Installation Scheme). (a) Table Look Up; (b) Flow Engineering (FE) and Rule Offload (RO); (c) Reaches Destination.
Figure 5. Histogram of Monte Carlo simulation results of (a) Hybrid Flow Installation Scheme (HFIS), (b) Reactive Flow Installation Scheme (RFIS) and (c) Pro-active Flow Installation Scheme (PFIS).
Figure 6. Mean Latency for varied number of switches (a) Hybrid (b) Reactive (c) Pro-active. FIL—Flow Installation Latency, TL—Total Latency.
Figure 7. Network setup in Mininet.
Figure 8. Latency in emulated environment.
Figure 9. Success rate in emulated environment.
Figure 10. (a) Festo-based food processing plant demonstrator; (b) deployment diagram of the test bed.
Figure 11. Clockwise from top left: (a) Python script; (b) interfacing with gear box; (c,d) task execution by robotic arm.
Table 1. Industrial communication protocols timeline.
Industrial Communication Protocols | | | | | Computer Networks | |
Protocol (Full Name) | Published by | Place | Com. Tech | Year | Decade | Protocol | Year
Modbus (Modular Bus) | Modicon (now Schneider Electric) | United States | Master/Slave | 1979 | 1970–1980 | ARPANET | 1970
 | | | | | | Ethernet | 1973
 | | | | | | ISO/OSI | 1978
PROWAY (Process Data Highway) | Working Group 6 | | | 1981 | 1981–1990 | MAP | 1980
FIP (factory instrumentation protocol) | French initiative | France | Producer/Consumer | 1982 | | |
Bitbus (BIT Fieldbus) | Intel Corporation | USA | Master/Slave | 1983 | | |
HART (Highway Addressable Remote Transducer) | FieldComm Group | USA | Master/Slave | 1985 | | Internet | 1981
CAN (Controller Area Network) | Robert Bosch GmbH | Detroit, Michigan | Producer/Consumer, Peer to Peer | 1985 | | |
P-NET (Process Network) | Process-Data Silkeborg ApS | Denmark | Master/Slave | 1987 | | MMS | 1985
INTERBUS | Phoenix Contact | Germany | Master/Slave | 1987 | | |
PROFIBUS (process field bus) | The Federal Ministry of Education and Research (BMBF) | Germany | Master/Slave, Peer to Peer | 1989 | | Ubiq. Comp | 1988
EIB (European Installation Bus) | EIB Association | Europe | Master/Slave | 1991 | 1991–2000 | WWW | 1992
Asi (Actuator Sensor Interface) | AS-International | Germany | Master/Slave | 1992 | | |
SDS (Smart Distributed System) | Honeywell | USA | Master/Slave | 1993 | | 2G GSM | 1996
DeviceNet (Connecting Devices) | Allen-Bradley | USA | Producer/Consumer | 1993 | | |
FF | | | | | | WLAN | 1997
ControlNet (Real-Time Control Network) | Rockwell Automation | USA | Producer/Consumer | 1995 | | |
TTP (Time-Triggered-Protocol) | Vienna University of Technology | Vienna, Austria | Master/Slave | 1998 | | IoT | 1997
Powerlink (Ethernet Powerlink) | B&R Industrial Automation GmbH | Austria | Producer/Consumer | 2001 | 2001–2010 | Bluetooth | 2003
Modbus/TCP (Modbus RTU protocol with a TCP interface that runs on Ethernet) | Modicon (now Schneider Electric) | United States | Master/Slave | 2001 | | SOAP | 2003
PROFINET (Process Field Net) | Profibus & PROFINET International | Germany | Real-Time Ethernet | 2001 | | 3G: UMTS | 2001
EtherCAT (Ethernet for control automation technology) | Beckhoff Automation | Germany | Master/Slave | 2003 | | ZigBee | 2003
ISA 100.11a (Wireless Systems for Industrial Automation) | International Society of Automation | Worldwide | NIL | 2009 | | 3G: HSPA | 2005
 | | | | | | UWB | 2008
Wire. HART (Wireless HART) | HART Communication Foundation | USA | Master/Slave | 2007 | | 6loWPAN | 2009
 | | | | | | 4G: LTE | 2010
Table 2. Software-Defined Network Timeline.
Framework/Concept | Brief Description | Year Published
Software Defined Industrial Network [16] | Reflects the possibility of bringing programming capability to the industrial network through the use of SDN; a theoretical framework is provided | 2014
Outlook on Future Possibilities [17] | Possible evolution of industrial Ethernet using SDN | 2014
SDNPROFINET [14] | Proposed to transform the typical communication architecture of PROFINET by integrating SDN | 2015
SDN-based TDMA in IE [18] | An SDN approach is used to formulate an application-aware Industrial Ethernet based on TDMA | 2016
SDIN [15] | Proposes a new Software Defined Industry Network (SDIN) architecture to achieve high reliability, low latency, and low energy consumption in industrial networks | 2016
Challenges and Opportunities [19] | Prospects for the future industrial network by means of SDN | 2016
Direct Multicast Routing [20] | Evaluates SDN for deterministic communication in distributed industrial automation systems | 2017
SDIAN [21] | Software-defined industry automation networks | 2017
Table 3. SDIAN (Software Defined Industrial Automation Network) architectural components.
Component | Task | Layer
RPi | Receives and sends interrupts to sensors and actuators | Data Plane
Sensors | Sends an interrupt to the associated RPi immediately after sensing an object | Data Plane
Actuators | Executes the explicitly specified action immediately after receiving an interrupt from the RPi | Data Plane
Southbound Interface (SBI) | Interface between the data and control planes. The functions realized through this interface include, but are not limited to: (i) programmatic control of all forwarding operations; (ii) monitoring; (iii) network statistics; (iv) advertisement; and (v) event notification | Between Control and Data Planes
Controller | Manages/controls network services. It consists of NBI and SBI agents and control logic; logically centralized but physically distributed | Control Plane
Northbound Interface (NBI) | Interface between the application and control planes. It typically provides an abstract view of the network and enables direct expression of network requirements and behavior | Between Application and Control Planes
Applications | Programs in execution that explicitly translate the business use case, network requirements and behavior programmatically and logically to the controller | Application Plane
Table 4. Summary statistics of stochastic analysis.
Statistic | HFIS | RFIS | PFIS
Sample Size | 10,000 | 10,000 | 10,000
Mean | 3.15533 | 9.84786 | 0.19091
Median | 2.03766 | 5.40382 | 0.163
StErr | 0.06706 | 0.27245 | 0.00119
StDev | 6.7095 | 26.7239 | 0.1189
Max | 88.7062 | 883.67201 | 0.598
Min | 0.6773 | 0.6773 | 0.003
Range | 88.0288 | 882.9946 | 0.595
Q(0.75) | 3.0157 | 10.2242 | 0.264
Q(0.25) | 1.5196 | 2.7774 | 0.098
Interquartile Range | 1.4960 | 7.4467 | 0.166
Skewness | 10.3932 | 15.9369 | 0.8368
Kurtosis | 118.0304 | 330.8579 | 0.0093
90% Interval, Q(0.05) | 1.17 | 1.34 | 0.04
90% Interval, Q(0.95) | 6.01 | 24.9 | 0.42
95% Interval, Q(0.025) | 1.08 | 1.22 | 0.03
95% Interval, Q(0.975) | 7.84 | 35.61 | 0.47
95% CI for the Mean, Lower Limit | 3.0210 | 9.5795 | 0.1883
95% CI for the Mean, Upper Limit | 3.2839 | 10.7175 | 0.1930