19th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS)

A special issue of Algorithms (ISSN 1999-4893).

Deadline for manuscript submissions: closed (28 February 2018)

Special Issue Editors


Prof. Dr. Paul Spirakis
Guest Editor
University of Liverpool, UK
Interests: distributed computing; self-stabilization; networks; cryptography and security; optical computing

Prof. Dr. Philippas Tsigas
Guest Editor
Chalmers University of Technology, Sweden
Interests: distributed computing; self-stabilization; networks; cryptography and security; optical computing

Special Issue Information

Dear Colleagues,

The 19th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS 2017) is an international forum for researchers and practitioners working on the design and development of distributed systems, with a focus on systems that are able to provide automatic recovery guarantees and robust structure, performance, and/or security in the face of an adverse operational environment.

Research in distributed systems is now at a crucial point in its evolution, marked by the importance and variety of dynamic distributed systems, such as peer-to-peer networks, large-scale sensor networks, mobile ad-hoc networks, and cloud computing. Moreover, new applications, such as grid and web services, distributed command and control, and a vast array of decentralized computations in a variety of disciplines, have driven the need to ensure that distributed computations are self-stabilizing, performant, safe, and secure.

The topics of interest to this Special Issue are within the scope of SSS 2017 (https://www.cs.bgu.ac.il/~fradmin/SSS17/index.html).

Prof. Dr. Paul Spirakis
Prof. Dr. Philippas Tsigas
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (4 papers)

Research

15 pages, 317 KiB  
Article
Cover Time in Edge-Uniform Stochastically-Evolving Graphs
by Ioannis Lamprou, Russell Martin and Paul Spirakis
Algorithms 2018, 11(10), 149; https://doi.org/10.3390/a11100149 - 2 Oct 2018
Abstract
We define a general model of stochastically-evolving graphs, namely the edge-uniform stochastically-evolving graphs. In this model, each possible edge of an underlying general static graph evolves independently, being either alive or dead at each discrete time step of evolution, following a (Markovian) stochastic rule. The stochastic rule is identical for each possible edge and may depend on the past k ≥ 0 observations of the edge's state. We examine two kinds of random walks for a single agent taking place in such a dynamic graph: (i) the Random Walk with a Delay (RWD), where at each step the agent chooses (uniformly at random) an incident possible edge, i.e., an incident edge in the underlying static graph, and then waits until the edge becomes alive to traverse it; (ii) the more natural Random Walk on what is Available (RWA), where the agent only looks at alive incident edges at each time step and traverses one of them uniformly at random. We study bounds on the cover time, i.e., the expected time until each node is visited at least once by the agent. For RWD, we provide a first upper bound for the cases k = 0, 1 by correlating RWD with a simple random walk on a static graph. Moreover, we present a modified electrical network theory capturing the k = 0 case. For RWA, we derive some first bounds for the case k = 0, by reducing RWA to an RWD-equivalent walk with a modified delay. Further, we also provide a framework that is shown to compute the exact value of the cover time for a general family of stochastically-evolving graphs in exponential time. Finally, we conduct experiments on the cover time of RWA in edge-uniform graphs and compare the experimental findings with our theoretical bounds.
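As a companion to the abstract above, here is a minimal Monte Carlo sketch, not taken from the paper, of the RWA walk on an edge-uniform evolving graph with memoryless (k = 0) dynamics: every edge of the underlying static graph is independently alive with probability p at each step. The graph (a cycle), the value of p, and the function names are illustrative assumptions.

```python
# Illustrative sketch (assumed names and parameters, not the paper's code):
# estimate the cover time of the Random Walk on what is Available (RWA)
# under k = 0 edge dynamics, where each possible edge is alive with
# probability p independently at every step.
import random

def rwa_cover_time(static_edges, n, p, start=0, max_steps=1_000_000):
    """One RWA run: at each step, look only at alive incident edges and
    traverse one uniformly at random; if none is alive, stay put."""
    adj = {v: [] for v in range(n)}
    for u, v in static_edges:
        adj[u].append(v)
        adj[v].append(u)
    visited = {start}
    pos = start
    for t in range(1, max_steps + 1):
        alive = [w for w in adj[pos] if random.random() < p]  # k = 0 rule
        if alive:
            pos = random.choice(alive)
            visited.add(pos)
        if len(visited) == n:
            return t
    return max_steps  # walk did not cover the graph within the budget

if __name__ == "__main__":
    n = 8
    cycle = [(i, (i + 1) % n) for i in range(n)]  # underlying static graph C_8
    runs = [rwa_cover_time(cycle, n, p=0.5) for _ in range(200)]
    print("empirical mean cover time:", sum(runs) / len(runs))
```

Averaging many such runs gives the kind of empirical estimate that can be compared against theoretical cover-time bounds.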
26 pages, 373 KiB  
Article
Generalized Paxos Made Byzantine (and Less Complex)
by Miguel Pires, Srivatsan Ravi and Rodrigo Rodrigues
Algorithms 2018, 11(9), 141; https://doi.org/10.3390/a11090141 - 17 Sep 2018
Abstract
One of the most recent members of the Paxos family of protocols is Generalized Paxos. This variant of Paxos has the characteristic that it departs from the original specification of consensus, allowing for a weaker safety condition where different processes can have different views on the sequence being agreed upon. However, much like its original Paxos counterpart, Generalized Paxos does not have a simple implementation. Furthermore, with the recent practical adoption of Byzantine fault tolerant protocols in the context of blockchain protocols, it is timely and important to understand how Generalized Paxos can be implemented in the Byzantine model. In this paper, we make two main contributions. First, we attempt to provide a simpler description of Generalized Paxos, based on a simpler specification and pseudocode for a solution that can be readily implemented. Second, we extend the protocol to the Byzantine fault model and provide the respective correctness proof.
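To make the weaker safety condition concrete, here is a small illustration, not the paper's protocol or pseudocode: assume commands commute when they touch different keys, and call two learners' sequences compatible when they disagree only on the order of commuting commands. The command encoding, the conflict rule, and the function names are all assumptions made for this sketch.

```python
# Hypothetical illustration of Generalized Paxos' weakened safety notion:
# learners may learn different command sequences as long as they order
# conflicting (same-key) commands consistently.
from typing import List, Tuple

Command = Tuple[str, str]  # (key, operation) -- an assumed toy command format

def projection(seq: List[Command], key: str) -> List[Command]:
    return [c for c in seq if c[0] == key]

def is_prefix(a: List[Command], b: List[Command]) -> bool:
    return len(a) <= len(b) and b[:len(a)] == a

def compatible(s1: List[Command], s2: List[Command]) -> bool:
    """True if, per key, one learner's projection is a prefix of the other's,
    so both sequences could extend to a common sequence."""
    keys = {c[0] for c in s1} | {c[0] for c in s2}
    return all(
        is_prefix(projection(s1, k), projection(s2, k))
        or is_prefix(projection(s2, k), projection(s1, k))
        for k in keys
    )

# Two learners disagree only on the order of commuting (different-key) writes:
learner_a = [("x", "w1"), ("y", "w2")]
learner_b = [("y", "w2"), ("x", "w1"), ("x", "w3")]
print(compatible(learner_a, learner_b))  # True
# Conflicting orders on the same key are not allowed:
learner_c = [("x", "w3"), ("x", "w1")]
print(compatible(learner_a, learner_c))  # False
```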
18 pages, 293 KiB  
Article
Relaxed Data Types as Consistency Conditions
by Edward Talmage and Jennifer L. Welch
Algorithms 2018, 11(5), 61; https://doi.org/10.3390/a11050061 - 4 May 2018
Abstract
In the quest for higher-performance shared data structures, weakening consistency conditions and relaxing the sequential specifications of data types are two of the primary tools available in the literature today. In this paper, we show that these two approaches are in many cases different ways to specify the same sets of allowed concurrent behaviors of a given shared data object. This equivalence allows us to use whichever description is clearer, simpler, or easier to achieve equivalent guarantees. Specifically, for three common data type relaxations, we define consistency conditions such that the combination of the new consistency condition and an unrelaxed type allows the same behaviors as Linearizability and the relaxed version of the data type. Conversely, for the consistency condition k-Atomicity, we define a new data type relaxation such that the behaviors allowed by the relaxed version of a data type, combined with Linearizability, are the same as those allowed by k-Atomicity and the original type. As an example of the possibilities opened by our new equivalence, we use standard techniques from the literature on consistency conditions to prove that the three data type relaxations we consider are not comparable to one another or to several similar known conditions. Finally, we show a particular class of data types where one of our newly-defined consistency conditions is comparable to, and stronger than, one of the known consistency conditions we consider.
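As an illustration of the kind of data type relaxation discussed here, the sketch below implements a toy "k-out-of-order" queue whose dequeue may return any of the k oldest elements. This is only a sequential specification sketch under assumed semantics; it is not the paper's construction, nor a concurrent implementation with any particular consistency condition.

```python
# Hypothetical sketch of a relaxed sequential specification: a queue whose
# dequeue may remove any of the k oldest elements instead of strictly the
# oldest one. Names and semantics are assumptions for illustration only.
import random
from collections import deque

class KRelaxedQueue:
    def __init__(self, k: int):
        self.k = k
        self._items = deque()

    def enqueue(self, x) -> None:
        self._items.append(x)

    def dequeue(self):
        """Remove and return one of the k oldest elements; any such choice
        is allowed by the relaxed specification."""
        if not self._items:
            return None
        window = min(self.k, len(self._items))
        i = random.randrange(window)
        self._items.rotate(-i)   # bring the chosen element to the front
        x = self._items.popleft()
        self._items.rotate(i)    # restore the order of the remaining elements
        return x

q = KRelaxedQueue(k=2)
for v in (1, 2, 3):
    q.enqueue(v)
print(q.dequeue())  # 1 or 2 -- both are legal under the k = 2 relaxation
```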
21 pages, 947 KiB  
Article
Computing Fault-Containment Times of Self-Stabilizing Algorithms Using Lumped Markov Chains
by Volker Turau
Algorithms 2018, 11(5), 58; https://doi.org/10.3390/a11050058 - 3 May 2018
Abstract
The analysis of self-stabilizing algorithms is often limited to the worst-case stabilization time starting from an arbitrary state, i.e., a state resulting from a sequence of faults. Considering the fact that these algorithms are intended to provide fault tolerance in the long run, this is not the most relevant metric. A common situation is that a running system is in a legitimate state when hit by a single fault. This event has a much higher probability than multiple concurrent faults. Therefore, the worst-case time to recover from a single fault is more relevant than the recovery time from a large number of faults. This paper presents techniques to derive upper bounds for the mean time to recover from a single fault for self-stabilizing algorithms, based on Markov chains in combination with lumping. To illustrate the applicability of the techniques, they are applied to a new self-stabilizing coloring algorithm.
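The sketch below is an assumption-laden illustration, not the paper's method: it computes mean recovery (absorption) times via the fundamental matrix of a Markov chain that is assumed to have already been lumped, with the legitimate state as the single absorbing state. The toy transition matrix is invented for illustration.

```python
# Hypothetical sketch: mean time to reach the legitimate (absorbing) state
# from each lumped fault class, via the fundamental matrix N = (I - Q)^(-1)
# and E[T] = N * 1 over the transient classes.
import numpy as np

def mean_recovery_times(P: np.ndarray, absorbing: int) -> np.ndarray:
    """Expected steps to absorption from each transient state of the
    (already lumped) chain with transition matrix P."""
    transient = [s for s in range(P.shape[0]) if s != absorbing]
    Q = P[np.ix_(transient, transient)]            # transitions among transient states
    N = np.linalg.inv(np.eye(len(transient)) - Q)  # fundamental matrix
    return N @ np.ones(len(transient))

# Toy lumped chain: class 0 = legitimate, classes 1 and 2 = lumped fault classes.
P = np.array([
    [1.0, 0.0, 0.0],   # legitimate state is absorbing
    [0.6, 0.3, 0.1],   # fault class 1
    [0.2, 0.3, 0.5],   # fault class 2
])
print(mean_recovery_times(P, absorbing=0))  # expected recovery time per fault class
```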
