Article

Declarative Secure Placement of FaaS Orchestrations in the Cloud-Edge Continuum

Department of Computer Science, University of Pisa, 56127 Pisa, Italy
* Authors to whom correspondence should be addressed.
Electronics 2023, 12(6), 1332; https://doi.org/10.3390/electronics12061332
Submission received: 17 February 2023 / Revised: 6 March 2023 / Accepted: 8 March 2023 / Published: 10 March 2023
(This article belongs to the Section Computer Science & Engineering)

Abstract:
The decision-making related to the placement of applications made of orchestrated serverless functions onto Cloud-Edge infrastructures is a challenging problem, as it must consider both functional and non-functional requirements. In this article, we propose a novel declarative methodology to determine the placement of FaaS orchestrations onto Cloud-Edge resources that satisfies all the requirements of the orchestrations, relying on information-flow analyses and padding techniques to prevent information leaks through side channels. A motivating example from Augmented Reality is used to showcase the open-source declarative prototype implementing our proposal. In addition, the prototype is assessed via simulation to evaluate execution times, placement success rates and energy consumption.

1. Introduction

Recently, the Function-as-a-Service (FaaS) paradigm has gained considerable traction in the Cloud-Edge scenario. Multiple proposals emerged [1,2,3] to exploit the FaaS paradigm to meet the needs of Internet of Things (IoT) applications, bringing FaaS closer to the Cloud-Edge continuum [4,5], where geographically distributed devices—from the edge of the network to Cloud data centres—are employed to support the stringent Quality of Service (QoS) constraints of next-gen IoT applications [6,7,8]. Indeed, FaaS orchestrations are naturally suited to support the event-based and stateless computation of IoT applications, and the on-demand execution of functions can improve the usage of (limited) edge resources [9,10].
However, the adoption of FaaS in Cloud-Edge scenarios comes with various challenges of foundational and technological nature [1]. In particular, a key issue concerns the definition of novel and effective strategies to place serverless functions over the pervasive computing nodes of the Cloud-Edge continuum. Indeed, the problem of placing functions onto infrastructure nodes satisfying QoS constraints (e.g., latency [3]), hardware and software requirements and bindings with (micro-)services needed by the functions (e.g., databases, payment providers, machine learning engines) is particularly challenging. Also, guaranteeing the security of application data is a fundamental aspect to consider [11,12] since a security flaw in a single function (or in its interactions with external services) may compromise the overall security of the application.
Placing FaaS applications onto Cloud-Edge infrastructures, therefore, needs to consider different security contexts, the security level of data flowing between functions, and trust relations among application operators, and infrastructure and external service providers. All these are utterly important to avoid data leaks, e.g., by placing functions that handle sensitive data onto insecure or untrusted nodes, or by binding a function to a service managed by an untrusted provider [1].
In this article, we tackle the problem of determining eligible placements of FaaS orchestrations—compositions of serverless functions—onto Cloud-Edge infrastructure resources, considering the information flow of the orchestration to protect data confidentiality while also satisfying functional—hardware, software and service—and latency requirements. We also propose an orchestration padding technique to tame data leaks towards potential attackers. Particular attention is given to functions’ service invocations and bindings, allowing the writing of customised security policies to define function security contexts and the addition of mocked service invocations to confuse the attacker. Moreover, we consider trust relations among application operators, infrastructure providers and external service providers to sort eligible placements as per their trust level.
We extend our previous work [13] on the secure placement of FaaS orchestrations by (i) enhancing the description of the attacker model to include more examples and a more precise description of potential attacks, and (ii) assessing our proposed methodology via simulations at varying infrastructure sizes to evaluate execution times, success rates and energy consumption of a baseline placement strategy against ours, which performs orchestration padding to mislead the attacker.
The rest of this article is organised as follows. After introducing essential notions to prepare the ground for our methodology (Section 2.2), we present a lifelike motivating example (Section 3), illustrating an Augmented Reality (AR) application that will be used to describe our declarative modelling of applications and infrastructures and the placement strategy (Section 4) prototyped in a Prolog open-source tool, SecFaaS2Fog (Available at https://github.com/di-unipi-socc/SecFaaS2Fog, accessed on 17 February 2023), described in [13]. Subsequently, we show the results of the experimental assessment of SecFaaS2Fog aiming at investigating the performance and energy consumption of our security countermeasures, discussing our methodology in latency-constrained scenarios (Section 5). Finally, after presenting closely related work (Section 6), we conclude the article by pointing out some interesting lines of future work (Section 7).

2. Considered Problem and Preliminaries

2.1. Considered Problem

Overall, our problem statement can be summarised as follows:
“Given a FaaS orchestration O, which processes data annotated with security labels L_O and has a set of capability requirements R, and given a Cloud-Edge infrastructure composed of a set N of nodes featuring heterogeneous capabilities and annotated with security labels L_N, determine a mapping of all functions in O to nodes in N such that all requirements in R are satisfied by the nodes’ capabilities and the labels L_O are compatible with those specified by L_N, so as to ensure data confidentiality of the deployed orchestration.”
More in detail, an eligible placement for a FaaS orchestration onto a Cloud-Edge infrastructure meets the following properties:
P1: every orchestration function is placed onto an infrastructure node with enough hardware and software capabilities to guarantee its functional requirements, accumulating the nodes’ hardware resources for every placed function,
P2: every orchestration function is placed onto an infrastructure node whose latency from the node hosting the previous function is at most the required latency, so as to guarantee the QoS requirements,
P3: every orchestration function that requires invoking services is bound to service instances hosted by the infrastructure, and
P4: every orchestration function is placed onto an infrastructure node employing countermeasures to prevent data confidentiality leaks.
Our problem statement comes naturally equipped with four stakeholders we briefly outline below:
  • FaaS application operators, that provide information about orchestrations to be placed,
  • infrastructure providers, that provide information about nodes available for the placement,
  • service providers, that provide information about services exploitable by applications, and
  • the orchestrator service provider, which provides the service that manages deployment, execution and monitoring of FaaS orchestrations onto Cloud-Edge resources.
To attack the above problem, we propose a methodology that supports the orchestrator service in the decision-making of the placement phase, in order to deploy a FaaS orchestration. We assume to receive events that trigger the orchestration and to look for eligible placements based on the information provided by the stakeholders.
Moreover, we consider trust relations among stakeholders by using the semiring-based modelling of [14]. Every stakeholder can declare their trust level and confidence toward another stakeholder as a pair of floating point numbers between 0 (no trust or no confidence) and 1 (full trust or full confidence). Those opinions build the trust network of the stakeholders used to aggregate opinions and rank the eligible placements as per their trust and confidence level.

2.2. Preliminaries

To drive the placement of a FaaS orchestration, we employ information-flow techniques [15] to determine the security context of every orchestration function. In information-flow security, labels are assigned to variables of a program to follow its data flow in order to verify desired properties (e.g., non-interference [16]) and avoid covert channels. Labels are ordered in a security lattice to represent the relation of the labels from the highest ones (e.g., top secret) to the lowest ones (e.g., public data). Security lattices can be arbitrarily complex total or partial orders, as epitomised in Figure 1, where a total order represents data secrecy from top secret to low secret (Figure 1a) and a partial order represents the data of a company where the CEO can access all the data, two separate offices can access their own data (research and development and human resources) and public data are accessible by everyone (Figure 1b).
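As a minimal sketch of how such a lattice could be encoded declaratively (using hypothetical lt/2 and leq/2 predicates, not part of our prototype), the total order of Figure 1a can be written as Prolog facts plus their reflexive-transitive closure:
lt(low, medium).
lt(medium, top).
leq(L, L). % reflexivity
leq(L1, L2) :- lt(L1, L2). % direct ordering
leq(L1, L3) :- lt(L1, L2), leq(L2, L3). % transitivity
Here, the query leq(low, top) succeeds, while leq(top, low) fails. A partial order such as that of Figure 1b would simply omit the lt/2 facts between incomparable labels.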
In what follows, FaaS orchestrations are expressed by a suitable orchestration language whose syntactic structure (the language constructs are inspired by [1]) is described below:
Exp  ::= Seq | If | Par | F
Seq  ::= seq(Exp, Exp)
If    ::= if(F, Exp, Exp)
Par  ::= par(Exp, Exp)
F   ::= fun(FId, Bindings, Latency)
An expression Exp is a sequence Seq, a conditional branch If, a parallel execution Par or a function F. The If statement admits two branches and its guard is represented by the first parameter among the outputs of the guard function F. Our model is indeed a form of fork-join parallelism where parallel branches are joined (merged) with a specific sequential join. For instance, the expression seq(par(e1,e2),f1) describes the orchestration where the function f1 will be the synchronisation point of the parallel execution of e1 and e2. Note that in our orchestration language every function element fun is represented by its identifier FId, explicit Bindings defined by the application operator and the maximum Latency required from the previous function executed in the orchestration—or from the orchestration’s event generator in case of the first executed function.
The security context of serverless functions is determined by the data flowing through the orchestration: each function is annotated with a security label representing the importance of the data it manages. Moreover, the scope of the functions in terms of grammar constructs must also be considered when determining the security context, similarly to the techniques employed by language-based information-flow security [16]. As an example, consider the following orchestration
if(fun(f1,_,_),seq(fun(f2,_,_),fun(f3,_,_)),fun(f4,_,_))
ctx(f1) = top
ctx(f2) = top
ctx(f3) = low
ctx(f4) = top
where the context (ctx) of every function is determined only by the data it manages. The function f3 has low context and can be managed in less secure situations, e.g., it could be placed on a Cloud-Edge node without security capabilities. Leaking the data of f3 is not a problem per se, as they are in low context, but knowing that f3 was executed reveals that the first conditional branch was taken, i.e., that the if guard was true, causing a data leak.

2.3. The Attacker Model and Security Constraints

From the security point of view, our goal is to protect the data confidentiality of placed FaaS orchestrations from an external attacker, an entity that is not part of the stakeholder group. In our assumptions, the attacker has full knowledge of:
  • the available node resources, being disclosed by infrastructure providers (e.g., to make informed placement decisions),
  • the service placements, which are easily discoverable by impersonating service clients and tracing client requests, and
  • the FaaS orchestration expression, which we assume public as for open-source applications.
The attacker can also hack nodes with weak security contexts, e.g., with few security capabilities or physically accessible nodes. The security context of a node is determined as per security policies written by application operators.
Finally, by monitoring traffic between nodes, the attacker can discover when a service invocation is performed towards a node, or when an event from a node triggers a function running on another node.
Under these hypotheses, an attacker can compromise the data confidentiality of the application. Particularly, we consider three possible data leaks:
  • Weak node leaks. Placing a function that receives sensitive data onto nodes with a weak security context can result in a leak of the function’s data. Therefore, it is crucial to track information flows through FaaS orchestrations in order to determine the security context of functions and to place them onto nodes with a compatible security context to avoid leaking data.
  • Service data leaks. Similarly, the binding of a function with a service instance placed on a node with a weak security context can result in the leak of the data exchanged with that service. It is, therefore, important to also match the security contexts of functions and their invoked services to avoid leaking sensitive data.
  • Control-flow leaks. The placement and the execution of functions being part of different conditional branches can also result in the leak of the guard value of an If statement. When the attacker detects the execution of even a single function of a conditional branch, the value of the guard could be inferred by the attacker. It is, therefore, important to make the execution of the alternative branches indistinguishable for the attacker to avoid leaking the guard value of an If statement.
To prevent the first two types of attacks, we define security constraints for eligible placements, involving the security contexts of functions, services and nodes. As aforementioned, we determine the security contexts of orchestration functions by employing information-flow techniques to assign a security label to each function to be placed. To determine the security context of infrastructure nodes and services, we apply security policies defined by application operators to assign a security label to every node and service. All the security labels refer to the same security lattice, which is also defined by the application operator.
Given the ≤ operator defined by the ordering of the security lattice, we define the following security constraints for eligible placements:
(1) for every function f placed on a node n: label(f) ≤ label(n),
(2) for every function f bound to a service s: label(f) ≤ label(s), and
(3) for every bound service s placed on a node n: label(s) ≤ label(n).
We now comment on the security constraints above. Satisfying the security constraint (1) is a countermeasure to the Weak node leak. When a function is placed on a node with a security context high enough to avoid an attack on its data, the confidentiality of the data is protected.
Satisfying both the security constraints (2) and (3) is a countermeasure to the Service data leak. When a function is bound to a service to be invoked, the service must have a security context high enough to exchange data with the function without compromising its confidentiality. Moreover, the node hosting the service must have a security context high enough to avoid an attack that discloses the data exchanged with the service.
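As a minimal declarative sketch of these checks (using a hypothetical label/2 predicate for functions, services and nodes, and a leq/2 predicate encoding the lattice ordering), the three constraints could read:
constraint1(F, N) :- label(F, LF), label(N, LN), leq(LF, LN). % function on node
constraint2(F, S) :- label(F, LF), label(S, LS), leq(LF, LS). % function to service
constraint3(S, N) :- label(S, LS), label(N, LN), leq(LS, LN). % service on node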
To manage the Control-flow leak due to the If statement, we introduce a suitable transformation technique, called padding, that makes the execution of the two conditional branches of an If statement indistinguishable. The intuition is to align the requirements of the two branches by increasing—padding—the requirements of every function toward the maximum common requirements. To balance—pad—the number of functions of the two branch expressions, we insert suitable dummy functions that do not process any data but allocate resources and perform mock service invocations. To exemplify the padding transformation, consider the orchestration
if(fun(f1,_,_),fun(f2,_,_),fun(f3,_,_))
where f2 needs 1 GB of RAM and f3 needs 2 GB of RAM. At execution time, the attacker can detect the allocation of 2 GB of RAM and discover that f3 was executed, leaking that the guard f1 evaluated to false. Balancing the required RAM of f2 and f3 to 2 GB, so that every orchestration execution allocates the same amount of memory, makes it impossible for the attacker to tell which function was executed and, thus, protects the value of the If guard.
The Prolog implementation of the padding technique is illustrated in Section 4.4.

3. Motivating Example

This section illustrates a lifelike motivating use case to highlight the challenges related to the FaaS orchestration placement. This example will be used to illustrate both the declarative model and the placement methodology.
Consider an Augmented Reality (AR) application that aims at avoiding gatherings in commercial areas of a city centre, to counter the spread of COVID-19, by load-balancing people’s presence across similar shops. When users frame the entrance of a shop they are about to enter with their smartphones’ cameras, the application renders over the display information about the shop and the regulations in force in the area, depending on whether the user has a valid EU Digital COVID Certificate (DCC).
Such an application is implemented through the FaaS orchestration shown in Figure 2.
The entry point of the orchestration is fLogin, which authenticates the user into the application. Then, two parallel tasks are executed. On one hand, the screenshot taken by the smartphone is elaborated by fCrop, which recognises the shop entrance in the framed images to discriminate among physically close activities. Then, fGeo identifies the shop by its image and coordinates via an external map service, also retrieving all shop information. On the other hand, fDCC checks whether the user has a DCC. If so (T branch), fCheckDCC verifies the certificate and retrieves the regulations that apply. Otherwise (F branch), fRules retrieves the regulations for people without a valid DCC. Last, fAR synchronises the parallel executions and puts together the retrieved information, rendering it over the framed image before sending it back to the mobile client.
Every function in Figure 2 is annotated with its inputs, outputs and the parameters exchanged with the external services it invokes. For instance, fGeo inputs Screenshot and Coordinates (coming from fCrop), and exchanges Coordinates and ShopInfo with the maps service, eventually outputting Screenshot and ShopInfo to fAR.
Function-to-function and function-to-service latency constraints are annotated on the corresponding links. For instance, the maximum latency between fLogin and fCrop is 18 ms, and between fLogin and its required userDB it is 13 ms.
Table 1 lists the software and hardware requirements, and the service types needed by each function to run successfully. For instance, the function fLogin needs javascript to be available on the deployment node, along with at least 1 GB of RAM and 2 vCPUs at 0.5 GHz.
To prevent sensitive data leaks, input parameters are labelled with suitable security types—e.g., top, medium, low—modelling the security level pertaining to each piece of data, from top (i.e., secret) to low (i.e., public), as represented by the security lattice of Figure 1a. These security labels propagate with data along the function orchestration (and across external services), determining the security context of each function (and service).
Figure 3 sketches the target Cloud-Edge infrastructure to deploy and run the above orchestration. Nodes feature different software and hardware capabilities, hosted services (depicted as hexagons), and security countermeasures (expressed as per the taxonomy of [12]). For instance, node antenna1, provided by telco, features Python and Javascript, 2 GB of RAM, 3 vCPUs at 1.5 GHz, and public key encryption. Also, it hosts the service rulesChk that is of type rules and is provided by the public administration (pa).
Based on node security capabilities, an application operator can express security policies. For instance, assume that a node featuring anti-tampering and public key encryption capabilities is considered top secure (e.g., ispRouter), while a node with only public key encryption is medium secure (e.g., antenna2), and a node featuring none of these countermeasures is low secure (e.g., private1). Similarly, application operators might express security policies on services, involving service providers. For instance, a service provided by the public administration (pa) is considered top secure, while a service provided by an open-source service provider (openS) is considered low secure.
Links between nodes are annotated with their end-to-end latency, e.g., the latency between ispRouter and antenna2 is 8 ms. We assume that orchestration triggers come from event generators—invoking suitable orchestrator APIs—that are connected with infrastructure nodes. In our use case, the event generator is the mobile client application, which we assume is connected to the ispRouter.
Finally, the declared opinion of each stakeholder involved with the AR application, from the application operator to the service and node providers, contributes to building the trust network of Figure 4. As an example, the application operator (appOp) declares to trust the telecommunication operator (telco) with 0.99 trust level and 0.9 confidence and to trust the cloud provider (cloudProvider) with 0.9 trust level and 0.9 confidence.
Placing the orchestration of Figure 2 onto the infrastructure of Figure 3 to meet all of its software, hardware and latency requirements, and to satisfy their service bindings, is a challenging problem.
Besides, as we will detail next, uninformed placement decisions can open side channels and consequently leak sensitive data towards external attackers.
With reference to the attacker model of Section 2.3, we show examples of the three individuated attacks:
  • Weak node leaks. The attacker can easily hack node private1, which has no security capabilities and is labelled low as per the policies discussed above. Placing fDCC on such a node exposes the application to the leak of UserInfo, which includes sensitive personal information about application users.
  • Service data leaks. Considering the function fGeo, the binding with the service instance openM—placed on the low node private1—exposes the application to the leak of Coordinates and ShopInfo, which disclose information on users’ current position.
  • Control-flow leaks. Our attacker can understand (through resource monitoring and knowledge of the orchestration structure) which of the two alternative functions fCheckDCC and fRules is executed. Indeed, if the available RAM at a certain node decreases by 1.6 GB, and 2 vCPUs are allocated with service invocations towards a node hosting an instance of the dccChk service, then the attacker can infer that fCheckDCC was executed and, consequently, that the value of the If guard was true.

4. Methodology and Prototype

In this section, we discuss and illustrate the features of the Prolog representation of FaaS orchestrations and Cloud-Edge infrastructures (Section 4.1) and the placement methodology (Section 4.2) employed by SecFaaS2Fog.
We recall that a Prolog program is a finite set of clauses of the form a :- b1, …, bn. stating that a holds when b1, …, bn all hold, where n ≥ 0 and a, b1, …, bn are atomic literals. Clauses with an empty condition are also called facts. Prolog variables begin with upper-case letters, lists are denoted by square brackets, and negation is denoted by \+.
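As a toy example, unrelated to our placement model, the following program declares one fact and one clause, and the query canRun(n1, py3) succeeds:
featuresSW(n1, [py3, js]). % fact: node n1 features py3 and js
canRun(N, SW) :- featuresSW(N, SWCaps), member(SW, SWCaps). % clause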

4.1. Modelling FaaS Orchestrations, Infrastructure Capabilities and Trust

4.1.1. Modelling FaaS Orchestrations

We start by focusing on how application operators declare the requirements of each function they will orchestrate into an application by the Prolog clause
functionReqs(FId, SWReqs, HWReqs, ServiceReqs).
where FId is the function identifier, SWReqs and HWReqs are its software and hardware requirements, and ServiceReqs are the service types the function will need to bind to at runtime. Hardware requirements are represented as triples with the required RAM in MB, the needed number of vCPUs and the minimum CPU frequency in MHz.
For each function FId the relation between input and output security types, and security types of data exchanged with external services, is declared as
functionBehaviour(FId, Inputs, ServiceParams, Outputs):- …
defining as lists the security types of the Inputs, ServiceParams and Outputs of the considered function. Clauses of this form are called function behaviours, and we assume they can be defined manually or obtained by exploiting suitable static analyses (e.g., [17]).
Example 1.
The function requirements and behaviour (specified in Figure 2 and in Table 1) of f L o g i n can be declared as
 
functionReqs(fLogin, [js], (1024, 2, 500), [(userDB, 13)]).
functionBehaviour(fLogin,[UA,Screen,Coord],[UA,UA], [UA,Screen,Coord]).
declaring that the security labels of UserAuth, Screenshot and Coordinates are preserved by the function execution (the two identical lists [UA,Screen,Coord]). The parameters UserInfo and UserAuth have the same security label, which is represented by the same variable (UA). Note that functionBehaviour/4 allows assigning security labels to all the data of the function, given the input security labels.
Function orchestrations are declared as in
functionOrch(FOId, EventTrigger, Structure).
where FOId is the orchestration identifier, EventTrigger is the event that triggers the orchestration, and Structure is the structure of the orchestration expressed through the linguistic primitives introduced in Section 2.2.
Example 2.
The declaration of the orchestration arOrch , sketched in Figure 2, is shown below:
 
functionOrch(
  arOrch,
  (event1, [top,low,medium]), %trigger
  seq(fun(fLogin,[myUserDb],25),
    seq(par([
          if(fun(fDCC,[],15),
              fun(fCheckDCC,[],15),
              fun(fRules,[],18)),
          seq(fun(fCrop,[],18),fun(fGeo,[],12))]),
    fun(fAR,[],18)))
).
Note that orchestration triggers are denoted as pairs containing the event trigger (e.g., event1) and a list of security labels of its parameters (e.g., [top,low,medium]).
Those security labels will match the input types of the first function (viz. fLogin), and propagate along the orchestration according to declared function behaviours.
Orchestrated functions are denoted as triples containing a function identifier, the list of actual service instances each function will bind to, and the maximum tolerated latency from the previous function. The bindings lists either specify a service instance identifier (e.g., myUserDb of fLogin), or are left unbound (e.g., [] in fGeo), meaning that any service of the required type can satisfy the requirement.

4.1.2. Modelling Infrastructures

Infrastructure nodes are declared via facts of the form
node(NId, Provider, SecCaps, SWCaps, HWCaps).
where NId is the node identifier (e.g., its IP address), Provider is the node owner, SecCaps, SWCaps and HWCaps are the security, software and hardware capabilities featured by the node, respectively.
End-to-end links between nodes are declared as
latency(NId1, NId2, Latency).
with the average Latency in milliseconds experienced between nodes NId1 and NId2.
External services running on available nodes, to be bound to function instances, are declared by their service providers as
service(SId, Provider, SType, NId).
where SId is the service identifier, Provider is the service provider, SType is the service type, and NId is the identifier of the service host node.
Finally, event generators are declared as
eventGenerator(GId, EventList, SourceNode).
where GId is the generator identifier, EventList is the list of events it can generate, SourceNode is the identifier of the infrastructure node to which it connects.
Example 3.
A subset of the infrastructure described in Section 3 is declared as
 
node(ispRouter,telco,[pubKeyE,antiTamp],[js,py3],(3500,16,2000)).
node(switch, university, [pubKeyE], [py3,js],(2048, 2, 2000)).
latency(ispRouter, switch, 5).
service(myUserDb, appOp, userDB, ispRouter).
eventGenerator(userDevice,[event1,event2],ispRouter).
denoting the ispRouter and switch nodes, the bidirectional link between them, the service myUserDb placed on ispRouter and the event generator userDevice, connected to ispRouter.
Declaring Security Policies. The security context of nodes and external services is determined through the security policies specified by application operators. Node and service labellings exploit the same security types as FaaS parameters (i.e., low, medium, top) to guarantee that types can be checked against each other. The type information also enforces that functions are placed onto nodes, and interact with services, that feature at least their security type.
Node labelling is declared through predicates of the form
nodeLabel(NodeId, Label) :- …
where NodeId is the node identifier and Label is the label to be assigned to it if all conditions of the right-hand side of the rule hold.
Example 4.
The node security policies of Section 3 are declared as
 
nodeLabel(NodeId, top):-
    node(NodeId,_,SecCaps,_,_), subset([antiTamp,pubKeyE], SecCaps).
nodeLabel(NodeId, medium) :-
    node(NodeId,_,SecCaps,_,_),
    \+ member(antiTamp, SecCaps), member(pubKeyE, SecCaps).
nodeLabel(NodeId, low):-
    node(NodeId,_,SecCaps,_,_), \+ member(pubKeyE, SecCaps).
where a node is labelled top only if the node features both anti-tampering and public key encryption among its security capabilities. Conversely, it is labelled medium if the node features exclusively public key encryption, and low in case public key encryption is not available.
Analogously, the external service security policies can be declared as
serviceLabel(SId, SType, Label) :- …
where SId is the service identifier, SType is the service type and Label is the label to be assigned to it if all conditions of the right-hand side of the rule hold.
Example 5.
The service security policies of the example of Section 3 are expressed as
 
serviceLabel(SId, _, top) :- service(SId, appOp, _, _).
serviceLabel(SId, _, top) :- service(SId, pa, _, _).
serviceLabel(SId, maps, medium) :- service(SId, cloudProvider, maps, _).
serviceLabel(SId, Type, low) :-
    service(SId, Provider, Type, _),
    \+(Provider == appOp),
    \+((Provider == cloudProvider, Type == maps)).
where a service is whitelisted, and thus labelled top, if its provider is appOp or pa; it is labelled medium if its provider is cloudProvider and it is of type maps; and it is labelled low in all the other cases.

4.1.3. Declaring Trust Opinions

All involved stakeholders (i.e., application operators, infrastructure providers, service providers) can declare trust opinions towards each other. Following [14], trust relations are modelled as pairs (T, C) ∈ [0, 1] × [0, 1] where T represents a level of trust (the higher the better) and C the confidence in (i.e., the quality of) such value T, based on monitored trust data. By employing a dialect of Prolog, viz. α-Problog [18], trust opinions are declared as facts annotated with (T, C), in the form
(T,C)::trustOpinion(Stakeholder1, Stakeholder2).
where Stakeholder1 declares her trust level and confidence toward Stakeholder2.
Example 6.
The trust opinions of the application operator appOp are declared as
 
(0.9,0.9)::trustOpinion(appOp, cloudProvider).
(0.99,0.9)::trustOpinion(appOp, telco).
indicating that appOp trusts the cloudProvider with 0.9 trust level and 0.9 confidence, and the telco with 0.99 trust level and 0.9 confidence.

4.2. Declarative Secure Placement of Faas Orchestrations

We have now introduced all the basic building blocks needed to describe how our prototype, SecFaaS2Fog, places FaaS orchestrations onto Cloud-Edge infrastructures, considering functional and non-functional requirements and guaranteeing information-flow security through informed placement decisions and padding. This is the main topic of this section.
The overall functioning of SecFaaS2Fog is implemented by the Prolog program of Figure 5.
After retrieving an orchestration with its requirements (line 2) and checking that it is well-formed according to the orchestration grammar (line 3), SecFaaS2Fog follows three main steps:
  • typing/3, in which the security labels of the orchestration trigger are propagated to all the functions of the orchestration in order to determine the security context of each function (Section 4.3),
  • padding/2, in which the functions’ requirements in conditional branches are balanced in order to tackle Control-flow leaks (Section 4.4), and
  • placement/3, in which the orchestration is placed onto the infrastructure, assigning every function to a node and binding functions to the needed services. Such assignments meet all software, hardware, security context, and latency requirements of the orchestration (Section 4.5).

4.3. Typing Orchestrations

The typing/3 predicate (Figure 6) assigns a security label to each function in the considered orchestration, thus determining the function security contexts. The assigned labels are determined by propagating security types along the orchestration flow (i.e., matching input and output parameters) exploiting function behaviours, according to the following rules:
(1) functions are labelled as per the highest label of their parameters, and
(2) functions within the scope of an if statement are labelled as per the highest label between the label determined as per (1) and the label of the guard of their if statement (in case of nested if statements, we consider the highest guard label).
The predicate typing/3 retrieves the lowest security type of the security lattice (line 8) and initialises the typing performed by the predicate typing/5 by setting the scope type of a guard as the lowest and the trigger types as input of the first function (line 9).
The predicate typing/5 goes through the code of the orchestration updating the scope type and propagating the types from the output of a function to the input of the successive one. Figure 7 shows the clause to type each function F with its security context Label.
The predicate changes the representation of the functions from fun(F,Bindings,Latency) into ft(F,Label,Bindings,Latency) (line 10). All the parameters of F are instantiated via the predicate functionBehaviour/4, using the input types InTypes (line 11). Then, a list of all the types involved with the function is created, starting from the input types and the interaction types (line 12) and finishing with the output types (line 13). Finally, the security label of the function is determined as the highest type of the list (line 14), as stated in rule (1). When the function F is inside an if statement, the variable GuardType (lines 10 and 12) also concurs in the determination of the highest type, as per rule (2).
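As a minimal sketch of the highest-type determination of line 14 (with a hypothetical highestType/2 predicate, assuming a total order such as that of Figure 1a encoded by a leq/2 predicate), one could write:
% Highest type of a non-empty list of security types (minimal sketch).
highestType([T], T).
highestType([T|Ts], Max) :-
    highestType(Ts, M),
    ( leq(T, M) -> Max = M ; Max = T ).
For instance, the query highestType([low,top,medium], L) binds L to top.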
Example 7.
Figure 8 shows a possible labelling for the orchestration of the use case of Section 3, assuming that the trigger data (UserAuth, Screenshot, Coordinates) are labelled as (top, low, medium). The first function fLogin is labelled top, this being the maximum label among its parameters. Then, the labels are propagated to the two parallel executions according to the behaviour of fLogin. The outputs are labelled (top, low, medium), corresponding to the parameters (UserInfo, Screenshot, Coordinates). Therefore, fDCC is labelled top and it propagates the label top corresponding to the parameter UserInfo. The propagation continues similarly through the orchestration, until all functions are labelled.

4.4. Padding Orchestrations

To tame Control-flow leaks, SecFaaS2Fog transforms the requirements of functions in the conditional branches of if statements, making their execution indistinguishable for the attacker of Section 2.3.
The padding/2 predicate navigates the orchestration looking for if statements (In case of nested if statements, padding applies first to the inner if). When an if is reached, its two conditional branches are padded by the paddingIf/4 predicate of Figure 9.
The predicate transforms two branches (TrueBranch and FalseBranch) into two padded branches (TrueBranchPadded and FalseBranchPadded, line 15). Initially, it checks that neither branch has been fully explored (line 16). Then, the first function is extracted from the true branch (Ft of line 17) and from the false branch (Ff of line 18), calculating the continuation on both branches. The two functions’ requirements are padded by the pad/4 predicate (line 19). Then, the padding proceeds recursively on the branches’ continuation, until both branches have been fully visited (line 20). Finally, the results of the recursive call are merged to build the TrueBranchPadded and FalseBranchPadded outputs (lines 21–22). Note that when the end of a branch is reached, predicate extractToPad/3 returns a dummy function to balance the number of functions of both branches.
Figure 10 lists the predicate pad/4, which pads two functions.
Initially, it checks if the two functions are in parallel executions and possibly it adds dummy functions to balance the parallel branches (line 24). Then, the predicate creates a list of functions and retrieves the lowest security type in the security lattice (line 25), which is used to determine the common requirements (line 26). Finally, both the left and right branch functions are padded to the common requirements (lines 27–28).
To find the common requirements and pad to them, two different aspects are considered: (i) the padding of hardware, software and latency requirements, and (ii) the padding of service requirements, by adding to the orchestration dummy functions that perform mock service invocations. Hardware resources are padded by allocating for every pair of functions the same amount of resources, i.e., the maximum among the requirements of the two functions. Similarly, the software requirements become the union of the software required by the two functions. For the latency requirements, the minimum latency value is chosen and assigned to both functions. Last, the service bindings required by the two functions are also merged.
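A minimal sketch of how the requirements of two functions could be aligned (with a hypothetical padReqs/3 predicate over (software, hardware, latency) triples, not the actual pad/4 of Figure 10) is the following:
% Union of software, maximum of hardware triples, minimum of latencies.
padReqs((SW1,(RAM1,CPU1,FRQ1),L1), (SW2,(RAM2,CPU2,FRQ2),L2),
        (SW,(RAM,CPU,FRQ),L)) :-
    union(SW1, SW2, SW),
    RAM is max(RAM1, RAM2), CPU is max(CPU1, CPU2), FRQ is max(FRQ1, FRQ2),
    L is min(L1, L2).
For instance, padReqs(([js],(1024,2,500),50), ([py3],(1843,2,500),20), R) binds R to ([js,py3],(1843,2,500),20).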
Example 8.
The padding of the AR application applies to the only if statement of the orchestration. Figure 11 depicts the conditional branches of our motivating example, before and after the padding.
Before the padding (Figure 11a), the branches contain only fCheckDCC and fRules with the requirements declared by the application operator. After the padding (Figure 11b), the requirements are the same for both branches, making them indistinguishable for the attacker described in Section 2.3.
To perform the same service invocations from both branches, two dummy functions (fServicePad1 and fServicePad2) are introduced using a seq statement at latency 0, so as to have both sequenced functions on the same node. These two functions must have the same software and hardware requirements (i.e., the constant Reqs) to keep the two branches indistinguishable from the attacker’s viewpoint. In both branches, a dccChk service invocation is followed by a rulesChk service invocation. The padding brings fCheckDCC and fRules to have the same software requirements (js and py3) and hardware requirements (1.8 GB of RAM, 2 vCPUs at 0.5 GHz). Also, the latency before the branches (15 ms) and towards the services (50 and 20 ms) are made identical for both branches.
The padding transformation we have illustrated comes at the cost of introducing more stringent constraints for the placement and increasing the overall required resources. It is worth noticing that this is indeed required to (i) guarantee data confidentiality in the presence of a possible Control-flow leak, and (ii) enforce the security of application placement decisions. In Section 5, we investigate through experimental assessments the impact of the padding in terms of placement performance and energy consumption.

4.5. Orchestration Placement

After the typing step and the padding transformation, SecFaaS2Fog places a FaaS orchestration onto the target Cloud-Edge infrastructure by assigning each function to a node, and by resolving the required service bindings. To this purpose, SecFaaS2Fog determines an eligible placement for the (padded) orchestration.
An eligible placement for a FaaS orchestration onto a Cloud-Edge infrastructure, as determined by the predicate placement/3 (Figure 12), meets the properties P1–P4 introduced in Section 2.1.
The placement/3 predicate retrieves the node of the event generator (line 30) to initialise the placement search performed by the predicate placement/7 with the list of nodes hosting predecessor functions and the initial empty placement, indicated by [] (line 31). The list is needed to join parallel executions, where the predecessor functions can be more than one.
The placement/7 predicate navigates the orchestration to place typed functions (ft) and padded functions (fpad). Figure 13 shows the placement/7 clause for placing a typed function. The code to place padded functions is analogous, with the difference that the requirements used for the placement are not the ones declared by the application operator but the ones determined by the padding transformation.
For each function to be placed, the predicate non-deterministically selects a candidate node (line 33), checking the latency constraint between the selected node and the node of the previously allocated function to satisfy property P2 of an eligible placement. Note that for the first function the latency is checked from the event generator, and when joining a parallel execution the latency is checked from all the nodes hosting the last functions of the parallel branches. Subsequently, the predicate checks the compatibility of the node label against the function label to satisfy property P4 of an eligible placement (line 34).
Then, the node capabilities (line 35) and the software and hardware requirements of the function (line 36) are retrieved, and it is checked whether the node can host the function (line 37) to satisfy property P1 of an eligible placement. If this is possible, the hardware allocation on the node is updated by summing the hardware required by the function to the previously allocated hardware of the node (line 38).
Finally, bindings to external services of the function are resolved by checking that latency and security constraints comply with the required ones to satisfy property P3 of an eligible placement (line 39). It is worth noticing that when a binding requirement is left unbound by the application operator (indicated by []), eligible services are determined so as to bind the function to a service instance that satisfies the service type, latency and security requirements.
Example 9.
Figure 14 sketches an eligible placement for the application of our motivating example.
Nodes are labelled—and thus coloured—according to the security policies expressed in Section 4.1. Dotted lines represent the binding of a function with a service instance, represented as a hexagon.
For instance, the top labelled function fLogin is placed on the top labelled node ispRouter and bound to myUserDb. It is worth noticing how this placement avoids Control-flow leaks. Considering only the service invocations, the attacker detects a dccChk service invocation from antenna1 to the ispRouter by monitoring the traffic. Based on her information, the function executed should be fCheckDCC, denoting a true value of the if guard. Instead, the service invocation from antenna1 is performed by a dummy function on the false branch. The padding makes the execution of the two branches indistinguishable for the attacker, making it impossible to understand which function of the conditional branch was executed and, thus, to leak data.
Our methodology also considers trust relations by exploiting the approach of [14] to aggregate opinions from different stakeholders, taking advantage of a suitable semiring of trust opinions. The Problog code accounting for trust relations [19] is open-sourced at https://github.com/di-unipi-socc/FaaS2Fog/tree/main/Trust, accessed on 16 February 2023.
Trust propagation is conditioned to a maximum radius of 3 hops in the trust graph, assuming that each stakeholder trusts herself with a (1, 1) opinion. Following the model of [14], we propagate trust from A to B as follows: opinions along paths from A to B are multiplied, while opinions across paths from A to B are summed. We employ the multiplication (⊗) and sum (⊕) operations of Figure 15, implemented as in [12] via α-Problog.
Trust relations are checked between the application operator and the node operator where functions are placed. Analogously, trust relations are checked between the application operator and external service providers. Relying on α -Problog, each output eligible placement is annotated with its overall trust assessment computed as described above. Figure 16 shows the trust/3 predicate used to propagate trust opinions during the placement phase. For every pair of stakeholders A and B it looks for a direct trust opinion (lines 40–42). If such an opinion is not declared, the predicate looks for an indirect opinion from stakeholders C, with at most a distance of D opinions from A to B (lines 43–47). The trust/3 predicate is used during the placement in the getNode/3 predicate (line 35) and in the bindServices/5 predicate (line 39) to assess the trust toward node and service providers.
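As a minimal sketch of such depth-bounded propagation (recasting, purely for illustration, the α-Problog annotations as plain trustOpinion/3 facts, combining opinions along a path by component-wise product and omitting the sum across paths of [14]), one could write:
% Direct opinion, else an indirect one through at most 3 hops.
trustTo(A, A, (1, 1)).
trustTo(A, B, Op) :- pathOpinion(A, B, 3, Op).
pathOpinion(A, B, _, Op) :- trustOpinion(A, B, Op).
pathOpinion(A, B, D, (T, C)) :-
    D > 1, D1 is D - 1,
    trustOpinion(A, X, (T1, C1)), X \== B,
    pathOpinion(X, B, D1, (T2, C2)),
    T is T1 * T2, C is C1 * C2.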
Example 10.
Consider the trust network created by the stakeholders’ opinions of Figure 4 (Section 3), where links are annotated with (Trust, Confidence) values. Assuming to have 9 eligible placements for our application, such as the one of Figure 14, we now have a trust annotation for each placement.
Table 2 lists the trust and confidence values associated with the outputted placements P1–P9. The trust assessment allows application operators to select one (or more) best candidate placement(s), as well as to set a minimum trust level to meet. For instance, blindly choosing the first result P1 of Table 2 actually leads to selecting one of the least trusted placements (viz. (0.27, 0.23)) among the eligible ones. By considering the trust assessment, the application operator will instead likely choose P4, with the best trust level (viz. (0.77, 0.48)).

5. Experimental Assessment

In this section, we report the experimental assessment of SecFaaS2Fog concerning its usability and the performance impact of the padding technique. In particular, our goal is to answer the following questions:
Q1: What is the execution time of SecFaaS2Fog in relation to the stringent latency constraints of Cloud-Edge settings?
Q2: What is the impact of the padding technique in terms of placement time?
Q3: What is the impact of the padding technique on finding an eligible placement?
Q4: What is the impact of the padding technique in terms of infrastructure energy consumption?
To answer those questions, we ran 1400 experiments using a prototype tool able to simulate the deployment and execution of FaaS orchestrations in the Cloud-Edge continuum, integrated with SecFaaS2Fog for the placement phase. The simulations measure the execution time of SecFaaS2Fog and the energy consumption of the infrastructure nodes.
All the experiments were executed on a machine with an Intel(R) Xeon(R) Gold 5120 CPU @ 2.20 GHz, 12 vCPUs, 32 GB of RAM and 50 GB of storage, running Ubuntu 20.04.5 LTS as the operating system.

5.1. Background: λFogSim

The tool exploited for the experimentation is λFogSim (available at https://github.com/di-unipi-socc/LambdaFogSim, accessed on 16 February 2023), a discrete-time simulator of the deployment and execution of FaaS orchestrations in the Cloud-Edge continuum.
The simulator accepts infrastructures and orchestrations as per the Prolog declarations of Section 4.1 and queries SecFaaS2Fog to find eligible placements of the applications. It can also randomly generate infrastructures, given templates of nodes, event generators, services and links, and the size of the overall infrastructure. Moreover, λFogSim follows a simple yet meaningful threshold-based model [20,21] for energy consumption, which assumes a certain consumption level when the CPU is idle and an increased consumption level when the workload of a device exceeds a specific threshold.
Considering a node n, the simulator first computes its workload:
load(n) = 0.7 · %used vCPUs + 0.3 · %occupied RAM
then its energy consumption:
consumption(load) = low if load ≤ threshold, high if load > threshold
where low and high are configurable consumption values in kWh and threshold is the configurable load value that discriminates between the two consumption levels.
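A minimal sketch of this threshold model (with hypothetical load/5 and consumption/5 predicates, not λFogSim’s actual implementation) could read:
% Weighted workload and two-level consumption (minimal sketch).
load(UsedVCPUs, TotVCPUs, UsedRAM, TotRAM, Load) :-
    Load is 0.7 * (UsedVCPUs / TotVCPUs) + 0.3 * (UsedRAM / TotRAM).
consumption(Load, Threshold, Low, _High, Low) :- Load =< Threshold.
consumption(Load, Threshold, _Low, High, High) :- Load > Threshold.
With the values used later in Section 5.2 (low 0.2 kWh, high 0.4 kWh, threshold 0.5), a node with half of its vCPUs and half of its RAM occupied has load 0.5 and is still charged the low consumption level.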
The main parameters of a simulation are:
  • the overall duration, specified as a number of simulation epochs,
  • the probability of generating events during an epoch,
  • the maximum admitted time for a single placement search,
  • the energy consumption threshold and the high and low consumption values,
  • whether or not to use padding when placing orchestrations, and
  • the seed of the random number generator, to make a simulation repeatable.
λFogSim can also simulate crashes of nodes and links of the infrastructure, in order to trigger the re-deployment of applications; in our experiments, we deactivated this feature as it is not relevant for our purposes.
For every epoch, events are randomly generated as per the configured probability and every triggered orchestration is deployed using SecFaaS2Fog for the decision-making of placements. During an epoch, the execution of deployed functions is simulated by allocating and releasing infrastructure resources. The execution time of every query to SecFaaS2Fog is measured, and the search is stopped whenever it exceeds the defined maximum admitted time.
The final output of the simulation is a report containing all the events, the placement attempts and the execution time of SecFaaS2Fog. For every epoch, the energy consumption of each infrastructure node is reported.

5.2. Setting the Stage

As aforementioned, the main query of SecFaaS2Fog searches for an eligible placement of an orchestration onto the target infrastructure, resolving the service bindings. Exploiting the Prolog engine, SecFaaS2Fog explores the full search space combinatorially, returning a different solution for every query. This brings two problems:
(1) the time taken to answer a query in a large search space can be very high, and
(2) some branches of the search space are explored even when they will not lead to a solution, e.g., resolving the service bindings for a placement that will turn out not to be eligible.
To avoid these two problems, an optimised version of SecFaaS2Fog was developed that accepts as input parameter the maximum time admitted to run a query and searches for one placement only.
Three different FaaS orchestrations were considered for the experiments:
  • The AR application orchestration of the motivating example of Section 3,
  • A Media Processing orchestration, which has the basic architecture depicted in Figure 17a, and
  • A Stock Market orchestration, which has the basic architecture depicted in Figure 17b.
Both the Media Processing and the Stock Market orchestrations are introduced to increase the diversity of the application deployments, triggering different orchestrations, occupying resources with different functions and looking for different service bindings. The Prolog definition of the Media Processing orchestration is available at https://github.com/di-unipi-socc/LambdaFogSim/blob/main/examples/applications/media_processing.pl, accessed on 16 February 2023. The Prolog definition of the Stock Market orchestration is available at https://github.com/di-unipi-socc/LambdaFogSim/blob/main/examples/applications/stock_market.pl, accessed on 16 February 2023.
Table 3 shows the settings of the experiments. We used 20 different seeds for the random generation of the simulations and, for every seed, two simulations were executed, one with and one without padding, for a total of 40 simulations per infrastructure. Each simulation ran for 200 epochs, setting the maximum execution time of SecFaaS2Fog to 1 s to give a high time margin for finding an eligible placement. Starting a simulation with the same configuration and the same seed guarantees the same series of events during the simulation, up to divergences caused by successful placements.
The Cloud-Edge infrastructures used in the experiments were generated randomly by λFogSim, with the number of nodes ranging from 50 to 220 in steps of 5, for a total of 35 different infrastructures. The number of event generators and services was chosen randomly within bounds that grow with the number of nodes. For instance, with 50 nodes the number of event generators was chosen between 2 and 6, while with 100 nodes it was chosen between 4 and 12. We restricted the resource availability of the infrastructures to increase the difficulty of finding eligible placements for orchestrations triggered close in time, in order to study the effectiveness of SecFaaS2Fog in such scenarios.
Concerning the energy consumption configuration, for every node we chose a low consumption level of 0.2 kWh, meaning that the standard workload of a node has that consumption value. When functions are placed on a node, its load increases as per the λFogSim energy model; we chose 0.5 as the load threshold beyond which the node is considered at the high consumption level, set at 0.4 kWh.
The total number of simulations is 1400, given by the number of seeds (20), used with and without padding (2), for each infrastructure size (35).
All the experiments were executed sequentially and we collected all the data from the output reports of λ FogSim.
For each infrastructure size, we extracted the following metrics:
  • average total time in milliseconds (ms), given by the sum of the execution times of all queries to SecFaaS2Fog divided by the total number of queries,
  • average success time in milliseconds, given by the sum of the execution times of the queries that found an eligible placement divided by the number of such queries,
  • average success ratio as a percentage (%), given by the number of queries to SecFaaS2Fog that found an eligible placement divided by the total number of queries, multiplied by 100, and
  • average energy consumption in kilowatt-hours (kWh), given by summing the total energy consumption of every single simulation and dividing by the number of epochs.

5.3. Experimental Results

We start by showing the average total time of every SecFaaS2Fog execution for every infrastructure, separating the simulations using padding from those not using it. Figure 18 shows the average execution time in milliseconds (y-axis) at varying infrastructure sizes in the number (#) of nodes (x-axis).
This first plot shows that the average execution time of SecFaaS2Fog appears to be very unstable both for placements with and without padding, spanning from about 60 ms to almost 180 ms. Note that in some cases, the time of the experiments with padding is lower than that of the experiments without it. As discussed later, this is due to the time needed for a query to fail. The instability of the plots is due to the randomness of the generated infrastructures: SecFaaS2Fog explores the combinatorial search space, and with some infrastructures such a search space might be very large.
The situation is different if we consider the average success time given by the successful placements only. Figure 19 shows the average execution time in milliseconds of the query that found an eligible placement (y-axis) at varying infrastructure size in the number (#) of nodes (x-axis).
The execution time grows linearly with the infrastructure size, from a minimum of about 6 ms for the infrastructure with 50 nodes to about 31 ms for the infrastructure with 220 nodes.
Note the impact of the padding on the execution time. For infrastructures with fewer than 100 nodes, the difference between the average times with and without padding is so small that the padding causes no significant performance decrease. For infrastructures with more than 100 nodes, the padding clearly has a cost in terms of execution time, but the overhead is always under 10%. As before, there are infrastructures where the time of the experiments without padding is higher than that of the experiments using it. This is due to the difference in the number of successful placements: the higher the number of successful placements, the higher the execution times.
For a deeper investigation, we collected the average success time grouped by orchestration. For every orchestration, Figure 20 shows the average execution time in milliseconds of the query that found an eligible placement (y-axis) at varying infrastructure size in the number (#) of nodes (x-axis).
Both the media processing and the stock market orchestrations have almost completely overlapping times with and without padding, suggesting that the conditional branches of the two orchestrations are padded very quickly. Regarding the AR orchestration, there is a clear impact of the padding when the infrastructure grows over 90 nodes, settling around 18% of the execution time. This is due to the larger search spaces induced by the growth of the infrastructure. The overall execution time follows the total average behaviour: it grows almost linearly with the infrastructure size, with a peak of 30 ms on the biggest infrastructure in all cases except the padded AR orchestration, which takes about 35 ms, confirming the overhead of the padding.
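The relative overhead reported here is simply the percentage increase of the padded time over the unpadded one, as in the sketch below (a hypothetical helper, not part of SecFaaS2Fog):

    % Percentage overhead of padding on the average success time.
    padding_overhead(PaddedTime, UnpaddedTime, Overhead) :-
        UnpaddedTime > 0,
        Overhead is (PaddedTime - UnpaddedTime) / UnpaddedTime * 100.

For instance, padding_overhead(35, 30, O) yields O ≈ 16.7, of the same order as the roughly 18% observed for the padded AR orchestration on the biggest infrastructures.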
Concerning the search for eligible placements, we show the data about the average success ratio. Figure 21 shows the percentage of successful searches (y-axis) at varying infrastructure size in the number (#) of nodes (x-axis).
This plot also shows the instability given by the randomness of the infrastructures. It is particularly clear for the infrastructures with 50, 75 and 120 nodes, which record very low percentages of successes: 28%, 35% and 35%, respectively. Overall, the success percentage grows with the infrastructure size: between 50% and 60% for infrastructures with fewer than 130 nodes, between 60% and 70% for infrastructures with between 130 and 180 nodes, and exceeding 70% for the biggest infrastructures. This behaviour suggests that the larger the infrastructure, the higher the probability of finding a placement, as more resources and more services for the binding are available.
For every orchestration, Figure 22 shows the percentage of successful searches (y-axis) at varying infrastructure size in the number (#) of nodes (x-axis).
Both the Media Processing and the Stock Market orchestrations behave similarly to the total average case. The main difference lies in the percentage values of the Stock Market case, whose success rate is about 10% higher on almost all infrastructures. The AR orchestration shows an almost linear growth of the success percentage with the infrastructure size, suggesting that this orchestration has stringent requirements and needs high resource availability to find an eligible placement.
Finally, Figure 23 shows the energy consumption of the infrastructure (y-axis) at varying infrastructure size in the number (#) of nodes (x-axis).
A green dotted line indicates the basic energy consumption of the infrastructure at a low workload level, to show how the placements increase the workload and therefore the energy consumption. As expected, the energy consumption grows with the infrastructure size, since even nodes not involved in any placement consume energy at the low level. The smallest infrastructure, with 50 nodes, consumes about 11 kWh; the biggest one, with 220 nodes, about 51 kWh. The gap from the basic workload also widens with the infrastructure size: the smallest infrastructure consumes about 0.5 kWh more than the basic workload, while the biggest consumes about 7 kWh more.

5.4. Discussion of Results

To sum up, to answer the question
  • Q1: What is the execution time of SecFaaS2Fog in relation to the stringent latency constraints of Cloud-Edge settings?
We can conclude that SecFaaS2Fog is efficient in finding an eligible placement, but it can be slow to answer when no eligible placement exists. Comparing the results of Figure 18 and Figure 19, it emerges that SecFaaS2Fog has a good execution time when a placement solution exists. Indeed, the total average execution time is clearly inflated by the time needed for a placement search to fail, which happens for two reasons: (i) after exploring the whole search space, SecFaaS2Fog does not find a solution, or (ii) the maximum execution time of 1 second is reached. It makes sense to cap the execution time at 1 second since, when such a threshold is exceeded, most probably no eligible solution exists for the considered input.
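For illustration, such a cap can be realised in SWI-Prolog via library(time), as in the following sketch; the timed_placement/2 wrapper is our assumption of how the guard could be implemented around the article's secfaas2fog/2 predicate.

    :- use_module(library(time)).

    % Cap a placement query at 1 second, treating a timeout as a failure.
    timed_placement(Orchestration, Placement) :-
        catch(call_with_time_limit(1, secfaas2fog(Orchestration, Placement)),
              time_limit_exceeded,
              fail).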
The average execution time of SecFaaS2Fog is between 60 ms and 180 ms. The average execution time of successful placements grows with the infrastructure size, from a minimum of 6 ms to a maximum of 31 ms.
Concerning the question
  • Q2: How much is the impact of the padding technique in terms of placement time?
The results show that using the padding to protect an orchestration from control-flow leaks increases the placement time by less than 10% on average, compared with the execution time of SecFaaS2Fog without the padding technique. As expected, the padding overhead is strongly influenced by the orchestration requirements. Indeed, Figure 20 shows that two orchestrations out of three exhibit almost no difference in the time needed to find a placement with or without padding. Instead, the AR application suffers a delay in the search for an eligible placement as the infrastructure size grows, due to the increased search space, reaching almost a 20% difference.
Summing up, the overhead of the padding is influenced by both the infrastructure size and the orchestration requirements, becoming almost negligible in some cases but very significant in others, and settling on average around 10%.
The average time overhead of the padding technique is 10% in case of successful placement, with a minimum of 0% and a maximum of 20%, and it grows as the infrastructure size grows.
Regarding question
  • Q3: How much is the impact of the padding technique in finding an eligible placement?
To find an eligible placement, there must be available resources close (in latency) to the event generator that triggered the orchestration. Depending on the infrastructure topology this can be difficult, as is the case with our randomly generated infrastructures.
Figure 21 shows how the padding generally lowers the success rate of placement requests. In most cases, the padding line lies under the line of the requests that do not use it, especially with bigger infrastructures, where the difference can reach 10%. This indicates that the successful placements of non-padded orchestrations better exploit the larger search space.
Looking at the single orchestrations, the Media Processing and Stock Market ones follow the general behaviour of the average success rate plot. The exception is the AR orchestration, where the padding has a smaller impact on finding a successful placement with small infrastructures (fewer than 150 nodes), where the difference in success rate almost never exceeds 5%. When the infrastructure grows, the success rate of the placements without padding grows faster than that of the placements with it, and in some cases the difference in the success ratio exceeds 10%.
To sum up, using the padding reduces the probability of finding an eligible placement. This is expected, as a padded orchestration has increased requirements compared with a non-padded one, and the constraints on eligible placements are tightened accordingly. When available resources are scarce, the padding can affect the success of finding an eligible placement, especially with large infrastructures.
The success ratio of the placement search influences the execution time of SecFaaS2Fog. As discussed, the execution time increases when the prototype fails to find an eligible placement. Given the results of the experiments, we can conclude that SecFaaS2Fog is a viable solution for finding eligible placements when the available infrastructure resources suffice for deploying a FaaS orchestration, with some limitations on large infrastructures when the response time must be very low. In some cases, capping the maximum response time is a good solution, but it requires assessing the execution time for a specific orchestration on a specific infrastructure.
The average impact of the padding technique in finding an eligible placement is about 10%, with a minimum of 0% and a maximum of 15%, and it grows as the infrastructure size grows.
Finally, we answer question
  • Q4: How much is the impact of the padding technique in terms of infrastructure energy consumption?
To discuss energy consumption, we also have to consider the success ratio of the placements. Indeed, when a placement query fails to find a placement, the load on the infrastructure is not increased. This is why some infrastructures have a lower consumption when the padding is used than in the simulations without it. For instance, the infrastructure with 210 nodes has the largest padding-related difference in success rate of all the experiments (Figure 21), and the energy consumption of the padded simulations is accordingly lower than that of the non-padded ones.
Setting aside those limit cases, the simulations using the padding consume slightly more energy than those without it. This is expected: the padding also increases the hardware requirements of the functions in the conditional branches, allocating more hardware resources during placement. The more hardware resources allocated, the higher the workload on the nodes hosting functions and, therefore, the higher the energy consumption.
The positive aspect is that the impact of the padding on the overall energy consumption is almost negligible: the energy consumption with the padding is almost always the same as without it. There is a significant increase only for some infrastructures where the success ratio of the padded orchestration placements is almost equal to that of the non-padded ones.
The average impact of the padding technique in terms of energy consumption is about 1%, with a minimum of 0% and a peak of 10%, remaining stable as the infrastructure size grows.

6. Related Work

The problem of placing, offloading or scheduling application services/functions onto Cloud-Edge resources has been thoroughly studied in recent years [22,23,24]. Various solutions have been proposed to tackle this problem by means of mathematical optimisation (e.g., [25,26,27]), heuristic search (e.g., [28,29,30]), or machine learning (e.g., [31,32,33]).
Concerning declarative approaches to resource management, some authors have proposed to manage Cloud resources (e.g., [34]) and to improve network usage (e.g., [35]). Recently, we employed α-Problog prototypes to assess the security and trust levels of different multiservice application placements [12] and to securely place VNF chains and steer traffic across them [36]. The taxonomy used to express security capabilities was introduced in [12], where trust relations modelled via semirings were also introduced. Differently from SecFaaS2Fog, Ref. [12] determines placements of multiservice applications (i.e., not FaaS-based) without considering information-flow security, external service interactions, or hardware and software requirements of the services to be placed. Focussing on Software-Defined Networking (SDN) domains, Ref. [36] considers neither trust relations nor information-flow security, but only security requirements as AND-OR combinations of the taxonomy elements. Related to these, Ref. [37] devises a (non-declarative) solution to the problem of placing applications over heterogeneous servers while configuring hardware/software security controls. Also, Ref. [37] does not consider information-flow security.
Targeting FaaS architectures, Ref. [38] dynamically decides whether to run a serverless function within the local Edge network or in the Cloud, based on monitored data. An Edge proxy is in charge of making such a decision, also considering network failures. Along the same lines, Ref. [39] presents an edge-based framework to dynamically decide whether to execute a function in the Cloud or locally, by estimating operational costs and improving end-to-end latencies. Similarly, Ref. [40] presents a serverless monitoring and scheduling system to select FaaS providers based on average execution time, affinity constraints, and costs, possibly considering user-defined scheduling policies. Still aiming at reducing latencies, Cho et al. [41] discuss a solution to distribute FaaS tasks over hierarchical Fog infrastructures, employing a token bucket algorithm and reinforcement learning to optimise workload distribution and response times.
Cicconetti et al. [42] propose an architecture to realise serverless computing in SDN scenarios, where network routers assign function execution to edge devices based on arbitrary costs (e.g., latency, bandwidth, energy). The infrastructure is monitored and updated by SDN controllers, and different strategies try to optimise operational costs and load balancing. More recently, Ref. [43] discusses a Kubernetes-based scheduler to optimise the placement of FaaS in the Cloud-IoT continuum based on a linear combination of proximity to image registries and data producers, available node resources, and Edge or Cloud locality. Ref. [44] takes an infrastructure provider's perspective on FaaS placement in the Fog, using distributed auctions: programmers submit functions to target nodes along with a resource bid, based on which nodes decide whether to store and run the functions.
To the best of our knowledge, only [45,46] exploit information-flow security to check, at runtime, that no leak is present in FaaS orchestrations. Differently from SecFaaS2Fog, security types are not exploited to determine eligible FaaS placements. Thus, none of the previously proposed approaches considers information-flow security, or infrastructure security countermeasures, when performing latency-aware placement of FaaS orchestrations in the Cloud-IoT continuum.

7. Concluding Remarks and Future Work

This article introduced a declarative modelling of FaaS orchestrations and an information-flow-aware methodology for their placement onto the Cloud-Edge continuum, implemented in the Prolog prototype SecFaaS2Fog.
We introduced the attacker model of our considered scenario and three possible data leaks that can compromise the data confidentiality of an arbitrary FaaS orchestration running on Cloud-IoT nodes. Then, we showed how we employ information-flow security and padding to tackle such attacks, highlighting how the security of FaaS orchestrations can be enhanced through suitable placement decisions at deployment time. Finally, we discussed the execution time of SecFaaS2Fog and the impact of the padding technique through experimental simulations of the placement and execution of FaaS orchestrations in the Cloud-Edge continuum.
To conclude, we indicate three possible directions of future work for our methodology.
  • Energy-aware placement During the experimental assessment of SecFaaS2Fog, we measured the energy consumption overhead of our approach by exploiting the energy model of the tool used to run the simulations. Given the global growth of interest in reducing energy consumption and CO2 emissions, including energy consumption in deployment decision-making is surely an intriguing line of work. To this end, we could extend our methodology to model the energy requirements of FaaS orchestrations and the energy consumption of the infrastructure, and include estimations or optimisation of the energy consumption in the process of determining eligible placements.
  • History-aware placement Another interesting line of future work is exploiting serverless functions already placed on infrastructure nodes, which could reduce deployment time and resource usage. Indeed, serverless functions are stateless and can execute their task independently of whatever orchestration instance they are part of. To exploit this, we should extend our methodology to consider functions already deployed and carefully study the security implications of reusing them, e.g., the security context of a function could differ when it is considered part of another orchestration.
  • Improving the orchestration language Our orchestration language allows combining serverless functions via sequence, conditional and parallel constructs. We plan to extend the expressiveness of such a language by introducing further programming-language constructs (e.g., loops, try-catch), allowing the modelling of a larger set of FaaS orchestrations, and by considering workflow-based languages employed by FaaS providers (e.g., AWS Step Functions).

Author Contributions

Conceptualization, A.B. (Alessandro Bocci), S.F., G.-L.F. and A.B. (Antonio Brogi); methodology, A.B. (Alessandro Bocci), S.F., G.-L.F. and A.B. (Antonio Brogi); software, A.B. (Alessandro Bocci) and S.F.; validation, A.B. (Alessandro Bocci); formal analysis, A.B. (Alessandro Bocci), S.F., G.-L.F. and A.B. (Antonio Brogi); investigation, A.B. (Alessandro Bocci), S.F., G.-L.F. and A.B. (Antonio Brogi); resources, A.B. (Antonio Brogi); data curation, A.B. (Alessandro Bocci); writing—original draft preparation, A.B. (Alessandro Bocci) and S.F.; writing—review and editing, A.B. (Alessandro Bocci), S.F., G.-L.F. and A.B. (Antonio Brogi); visualization, A.B. (Alessandro Bocci); supervision, S.F., G.-L.F. and A.B. (Antonio Brogi); project administration, A.B. (Antonio Brogi) and G.-L.F.; funding acquisition, S.F., G.-L.F. and A.B. (Antonio Brogi). All authors have read and agreed to the published version of the manuscript.

Funding

This research has been partly funded by projects Energy-aware management of software applications in Cloud-IoT ecosystems (RIC2021PON_A18), funded with ESF REACT-EU resources by the Italian Ministry of University and Research through the PON Ricerca e Innovazione 2014–20 and hOlistic Sustainable Management of distributed softWARE systems (OSMWARE), UNIPI PRA_2022_64, funded by the University of Pisa, Italy.

Data Availability Statement

The data that support the experiments of this article are openly available in the SecFaaS2Fog repository at https://github.com/di-unipi-socc/SecFaaS2Fog/tree/main/experiments, accessed on 16 February 2023.

Acknowledgments

We would like to thank Alessio Matricardi, a graduate student in Computer Science at the University of Pisa, for the development of λFogSim as part of his bachelor thesis under our supervision.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bocci, A.; Forti, S.; Ferrari, G.L.; Brogi, A. Secure FaaS orchestration in the fog: How far are we? Computing 2021, 103, 1025–1056.
  2. Baresi, L.; Mendonça, D.F. Towards a Serverless Platform for Edge Computing. In Proceedings of the IEEE International Conference on Fog Computing (ICFC 2019), Prague, Czech Republic, 24–26 June 2019; pp. 1–10.
  3. Großmann, M.; Ioannidis, C.; Le, D.T. Applicability of Serverless Computing in Fog Computing Environments for IoT Scenarios. In Proceedings of the 12th IEEE/ACM International Conference on Utility and Cloud Computing Companion, Auckland, New Zealand, 2–5 December 2019; pp. 29–34.
  4. Bonomi, F.; Milito, R.; Natarajan, P.; Zhu, J. Fog computing: A platform for internet of things and analytics. In Big Data and Internet of Things: A Roadmap for Smart Environments; Springer: Berlin/Heidelberg, Germany, 2014; pp. 169–186.
  5. Habibi, P.; Farhoudi, M.; Kazemian, S.; Khorsandi, S.; Leon-Garcia, A. Fog Computing: A Comprehensive Architectural Survey. IEEE Access 2020, 8, 69105–69133.
  6. Mahmud, R.; Srirama, S.N.; Ramamohanarao, K.; Buyya, R. Quality of Experience (QoE)-aware placement of applications in Fog computing environments. J. Parallel Distrib. Comput. 2019, 132, 190–203.
  7. Guerrero, C.; Lera, I.; Juiz, C. Evaluation and efficiency comparison of evolutionary algorithms for service placement optimization in fog architectures. Future Gener. Comput. Syst. 2019, 97, 131–144.
  8. Brogi, A.; Forti, S.; Ibrahim, A. Optimising QoS-assurance, Resource Usage and Cost of Fog Application Deployments. In Proceedings of the CLOSER (Selected Papers), CCIS, Porto, Portugal, 26–28 July 2018; Volume 1073, pp. 168–189.
  9. Raghavendra, M.S.; Chawla, P. A review on container-based lightweight virtualization for fog computing. In Proceedings of the International Conference on Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), Noida, India, 29–31 August 2018; pp. 378–384.
  10. Pfandzelter, T.; Bermbach, D. tinyFaaS: A lightweight FaaS platform for edge environments. In Proceedings of the 2020 IEEE International Conference on Fog Computing (ICFC), Sydney, NSW, Australia, 21–24 April 2020; pp. 17–24.
  11. Vaquero, L.M.; Cuadrado, F.; Elkhatib, Y.; Bernal-Bernabe, J.; Srirama, S.N.; Zhani, M.F. Research challenges in nextgen service orchestration. Future Gener. Comput. Syst. 2019, 90, 20–38.
  12. Forti, S.; Ferrari, G.L.; Brogi, A. Secure Cloud-Edge Deployments, with Trust. Future Gener. Comput. Syst. 2020, 102, 775–788.
  13. Bocci, A.; Forti, S.; Ferrari, G.L.; Brogi, A. Type, pad, and place: Avoiding data leaks in Cloud-IoT FaaS orchestrations. In Proceedings of the 22nd IEEE International Symposium on Cluster, Cloud and Internet Computing (CCGrid 2022), Taormina, Italy, 16–19 May 2022; pp. 798–805.
  14. Bistarelli, S.; Foley, S.N.; O'Sullivan, B.; Santini, F. Semiring-based frameworks for trust propagation in small-world networks and coalition formation criteria. Secur. Commun. Netw. 2010, 3, 595–610.
  15. Sabelfeld, A.; Myers, A.C. Language-based information-flow security. IEEE J. Sel. Areas Commun. 2003, 21, 5–19.
  16. Sabelfeld, A.; Sands, D. A Per Model of Secure Information Flow in Sequential Programs. High. Order Symb. Comput. 2001, 14, 59–91.
  17. Pottier, F.; Skalka, C.; Smith, S.F. A systematic approach to static access control. ACM Trans. Program. Lang. Syst. 2005, 27, 344–382.
  18. Kimmig, A.; Van den Broeck, G.; De Raedt, L. An algebraic Prolog for reasoning about possible worlds. In Proceedings of the AAAI, San Francisco, CA, USA, 7–11 August 2011.
  19. Bocci, A.; Forti, S.; Ferrari, G.L.; Brogi, A. Placing FaaS in the Fog, Securely. In Proceedings of the Italian Conference on Cybersecurity, Online, 7–9 April 2021; Volume 2940, pp. 166–179.
  20. Armenta-Cano, F.; Tchernykh, A.; Cortés-Mendoza, J.M.; Yahyapour, R.; Drozdov, A.Y.; Bouvry, P.; Kliazovich, D.; Avetisyan, A. Heterogeneous job consolidation for power aware scheduling with quality of service. In Proceedings of the Supercomputing Days in Russia, Moscow, Russia, 28–29 September 2015; pp. 687–697.
  21. Beloglazov, A.; Abawajy, J.H.; Buyya, R. Energy-aware resource allocation heuristics for efficient management of data centers for Cloud computing. Future Gener. Comput. Syst. 2012, 28, 755–768.
  22. Brogi, A.; Forti, S.; Guerrero, C.; Lera, I. How to place your apps in the fog: State of the art and open challenges. Softw. Pract. Exp. 2020, 50, 719–740.
  23. Salaht, F.A.; Desprez, F.; Lebre, A. An overview of service placement problem in fog and edge computing. ACM Comput. Surv. 2020, 53, 1–35.
  24. Mahmud, R.; Ramamohanarao, K.; Buyya, R. Application management in fog computing environments: A taxonomy, review and future directions. ACM Comput. Surv. 2020, 53, 1–43.
  25. Pallewatta, S.; Kostakos, V.; Buyya, R. QoS-aware placement of microservices-based IoT applications in Fog computing environments. Future Gener. Comput. Syst. 2022, 131, 121–136.
  26. Venticinque, S.; Amato, A. A methodology for deployment of IoT application in fog. J. Ambient. Intell. Humaniz. Comput. 2019, 10, 1955–1976.
  27. Skarlat, O.; Nardelli, M.; Schulte, S.; Dustdar, S. Towards QoS-Aware Fog Service Placement. In Proceedings of the 2017 IEEE 1st International Conference on Fog and Edge Computing (ICFEC), Madrid, Spain, 14–15 May 2017; pp. 89–96.
  28. Baranwal, G.; Vidyarthi, D.P. TRAPPY: A truthfulness and reliability aware application placement policy in fog computing. J. Supercomput. 2022, 78, 7861–7887.
  29. Brogi, A.; Forti, S.; Ibrahim, A. Optimising QoS-assurance, resource usage and cost of fog application deployments. In Proceedings of the Cloud Computing and Services Science—8th International Conference (CLOSER 2018), Funchal, Portugal, 19–21 March 2018; pp. 168–189.
  30. Taneja, M.; Davy, A. Resource aware placement of IoT application modules in Fog-Cloud Computing Paradigm. In Proceedings of the 2017 IFIP/IEEE Symposium on Integrated Network and Service Management (IM), Lisbon, Portugal, 8–12 May 2017; pp. 1222–1228.
  31. Yang, H.; Yuan, J.; Li, C.; Zhao, G.; Sun, Z.; Yao, Q.; Bao, B.; Vasilakos, A.V.; Zhang, J. BrainIoT: Brain-Like Productive Services Provisioning With Federated Learning in Industrial IoT. IEEE Internet Things J. 2022, 9, 2014–2024.
  32. Sun, Z.; Yang, H.; Li, C.; Yao, Q.; Wang, D.; Zhang, J.; Vasilakos, A.V. Cloud-Edge Collaboration in Industrial Internet of Things: A Joint Offloading Scheme Based on Resource Prediction. IEEE Internet Things J. 2022, 9, 17014–17025.
  33. Cai, S.; Wang, D.; Wang, H.; Lyu, Y.; Xu, G.; Zheng, X.; Vasilakos, A.V. DynaComm: Accelerating Distributed CNN Training Between Edges and Clouds Through Dynamic Communication Scheduling. IEEE J. Sel. Areas Commun. 2022, 40, 611–625.
  34. Kadioglu, S.; Colena, M.; Sebbah, S. Heterogeneous resource allocation in Cloud Management. In Proceedings of the 15th IEEE International Symposium on Network Computing and Applications (NCA 2016), Boston, MA, USA, 31 October–2 November 2016; pp. 35–38.
  35. Hinrichs, T.L.; Gude, N.S.; Casado, M.; Mitchell, J.C.; Shenker, S. Practical declarative network management. In Proceedings of the 1st ACM SIGCOMM 2009 Workshop on Research on Enterprise Networking (WREN 2009), Barcelona, Spain, 20–21 August 2009; pp. 1–10.
  36. Forti, S.; Paganelli, F.; Brogi, A. Probabilistic QoS-aware Placement of VNF chains at the Edge. Theory Pract. Log. Program. 2022, 22, 1–36.
  37. Mann, Z.Á. Secure software placement and configuration. Future Gener. Comput. Syst. 2020, 110, 243–253.
  38. Pinto, D.; Dias, J.P.; Ferreira, H.S. Dynamic Allocation of Serverless Functions in IoT Environments. In Proceedings of the 16th IEEE International Conference on Embedded and Ubiquitous Computing (EUC 2018), Bucharest, Romania, 29–31 October 2018; Dobre, C., Melero, F.J., Ciobanu, R., Palmieri, F., Eds.; IEEE Computer Society: New York, NY, USA, 2018; pp. 1–8.
  39. Das, A.; Imai, S.; Wittie, M.P.; Patterson, S. Performance Optimization for Edge-Cloud Serverless Platforms via Dynamic Task Placement. In Proceedings of the 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGRID), Melbourne, VIC, Australia, 11–14 May 2020.
  40. Aske, A.; Zhao, X. Supporting Multi-Provider Serverless Computing on the Edge. In Proceedings of the 47th International Conference on Parallel Processing (ICPP 2018), Eugene, OR, USA, 13–16 September 2018; pp. 20:1–20:6.
  41. Cho, C.; Shin, S.; Jeon, H.; Yoon, S. QoS-Aware Workload Distribution in Hierarchical Edge Clouds: A Reinforcement Learning Approach. IEEE Access 2020, 8, 193297–193313.
  42. Cicconetti, C.; Conti, M.; Passarella, A. A Decentralized Framework for Serverless Edge Computing in the Internet of Things. IEEE Trans. Netw. Serv. Manag. 2020, 18, 2166–2180.
  43. Rausch, T.; Rashed, A.; Dustdar, S. Optimized container scheduling for data-intensive serverless edge computing. Future Gener. Comput. Syst. 2021, 114, 259–271.
  44. Bermbach, D.; Maghsudi, S.; Hasenburg, J.; Pfandzelter, T. Towards Auction-Based Function Placement in Serverless Fog Platforms. In Proceedings of the 2020 IEEE International Conference on Fog Computing (ICFC 2020), Sydney, NSW, Australia, 21–24 April 2020; pp. 25–31.
  45. Alpernas, K.; Flanagan, C.; Fouladi, S.; Ryzhyk, L.; Sagiv, M.; Schmitz, T.; Winstein, K. Secure serverless computing using dynamic information flow control. Proc. ACM Program. Lang. 2018, 2, 118:1–118:26.
  46. Datta, P.; Kumar, P.; Morris, T.; Grace, M.; Rahmati, A.; Bates, A. Valve: Securing Function Workflows on Serverless Computing Platforms. In Proceedings of the WWW '20 Web Conference 2020, Taipei, Taiwan, 20–24 April 2020; Huang, Y., King, I., Liu, T., van Steen, M., Eds.; ACM/IW3C2: New York, NY, USA, 2020; pp. 939–950.
Figure 1. Security lattices example. (a) Total order security lattice; (b) Partial order security lattice.
Figure 2. Use case FaaS application.
Figure 3. Use case infrastructure.
Figure 4. Example trust network.
Figure 5. The secfaas2fog/2 predicate.
Figure 6. The typing/3 predicate.
Figure 7. The typing/5 predicate.
Figure 8. Typed AR orchestration (top = green, medium = orange, low = red).
Figure 9. The paddingIf/4 predicate.
Figure 10. The pad/4 predicate.
Figure 11. Padding of the conditional branch.
Figure 12. The placement/3 predicate.
Figure 13. The placement/7 predicate.
Figure 14. A placement of the padded AR orchestration.
Figure 15. Semiring ⊗ and ⊕ operations.
Figure 16. The trusts/3 predicate [12].
Figure 17. Two orchestrations at work. (a) Media Processing orchestration; (b) Stock Market orchestration.
Figure 18. Average total execution times.
Figure 19. Average total time of successful placement.
Figure 20. Average success time grouped by orchestration.
Figure 21. Average success ratio of placement requests.
Figure 22. Average success ratio grouped by orchestration.
Figure 23. Average energy consumption of the infrastructure.
Table 1. Functions requirements.
Function Id | Sw Requirements | Hw Requirements
fLogin | js | 1 GB RAM, 2 vCPUs, 0.5 GHz
fCrop | py3, numPy | 2 GB RAM, 4 vCPUs, 1.2 GHz
fGeo | js | 256 MB RAM, 2 vCPUs, 0.4 GHz
fDCC | js | 256 MB RAM, 2 vCPUs, 0.5 GHz
fCheckDCC | js | 1.6 GB RAM, 2 vCPUs, 0.5 GHz
fRules | py3 | 1.8 GB RAM, 1 vCPU, 0.4 GHz
fAR | py3 | 2 GB RAM, 4 vCPUs, 1.2 GHz
Table 2. Ranking of eligible placements.
Placement | Trust
P1 | (0.27, 0.23)
P2 | (0.61, 0.31)
P3 | (0.27, 0.23)
P4 | (0.77, 0.48)
P5 | (0.34, 0.35)
P6 | (0.34, 0.31)
P7 | (0.17, 0.28)
P8 | (0.30, 0.28)
P9 | (0.24, 0.21)
Table 3. Summary of experiment parameters.
Parameter | Value
Epochs | 200
Padding | Yes/No
Max placement time | 1 s
Number of seeds | 20
Number of infrastructures | 35
Generator activation probability | 0.2
Load threshold | 0.5
Low consumption | 0.2 kWh
High consumption | 0.4 kWh
Total simulations | 1400