# A Distributed Snapshot Protocol for Efficient Artificial Intelligence Computation in Cloud Computing Environments


## Abstract


## 1. Introduction

## 2. Related Work

#### 2.1. Artificial Intelligence Computation

#### 2.2. Snapshot Protocols

In a quorum system, any two quorums Q_{1}, Q_{2} ∈ Q satisfy Q_{1} ∩ Q_{2} ≠ ∅ [33,34]. Thus, a distributed snapshot protocol based on the quorum system is suitable for static systems. Since cloud computing systems are dynamic in nature, a protocol based on the quorum system requires the reorganization of quorum sets each time a node joins or departs.

## 3. System Model and Problem Definition

The system consists of a set of nodes node_{1}, node_{2}, node_{3}, ···, node_{n}, where each node is connected by communication channels. There is no shared memory and, therefore, a node can communicate with others solely by passing messages. The message delivery model is asynchronous. When asynchronous send primitives are used, the control returns to the invoking process after the message is delivered to the buffer. Messages are delivered reliably with finite but arbitrary time delay. The network can be described as a directed graph [35], in which vertices represent the nodes and edges represent unidirectional communication channels. Let C_{ij} denote the channel from node_{i} to node_{j} and SC_{ij} denote the state of C_{ij}.

Let m_{ij}, send(m_{ij}), and receive(m_{ij}) be a message sent by node_{i} to node_{j}, a sending event of m_{ij}, and a receiving event of m_{ij}, respectively. The occurrence of these events leads to transitions in the global system state. At any instant, the state of node_{i} (denoted by LS_{i}) is a result of the entire sequence of events executed by node_{i} up to that instant. For the channel C_{ij}, the transit state is defined as follows [36]:

transit(LS_{i}, LS_{j}) = {m_{ij} | send(m_{ij}) ∈ LS_{i} ⋀ receive(m_{ij}) ∉ LS_{j}}.

For a global state to be consistent with respect to node_{i} and node_{j}, it must include transit(LS_{i}, LS_{j}) and transit(LS_{j}, LS_{i}) as well as LS_{i} and LS_{j}. The communication model is not restricted to the FIFO or causally ordered delivery model [37]. Furthermore, unlike previous research, we do not use broadcast primitives, which simplify the design of a snapshot protocol. In our algorithmic design, we use one-to-one communication primitives. How to accomplish the snapshot protocol safely and efficiently under the non-FIFO model with one-to-one communication, while collecting a consistent global state GS, is at the core of our contributions:

GS = {⋃_{i} LS_{i}, ⋃_{i,j} SC_{ij}}.

Condition 1: send(m_{ij}) ∈ LS_{i} ⇒ m_{ij} ∈ SC_{ij} ⊕ receive(m_{ij}) ∈ LS_{j},

Condition 2: send(m_{ij}) ∉ LS_{i} ⇒ m_{ij} ∉ SC_{ij} ⋀ receive(m_{ij}) ∉ LS_{j}.

Condition 1 states that every message m_{ij} that is recorded in LS_{i} must be captured in the state of the channel C_{ij} or in the collected local state of node_{j}. Condition 2 states that if a message m_{ij} is not recorded as a sent event in the local state of node_{i}, then it must be present neither in the state of the channel C_{ij} nor in the collected local state of node_{j}. The proposed snapshot protocol is able to capture a consistent global state satisfying the above conditions; the proof of the algorithm is detailed in the next section. For node failures, we consider the fail-stop model [39], where a failed node remains halted forever.

## 4. The Proposed Distributed Snapshot Protocol

#### 4.1. Details of the Algorithms

Before starting a round, node_{i} checks whether a consistent global state is collected for failedRound (lines 16–28). If the stateNodes data structure satisfies the conditions of the GS, node_{i} saves the stateNodes data structure to latestSnapshot and builds the stateChannel data structure (lines 17–21). After setting the timestamp for the snapshot, node_{i} performs the proposeGS function (lines 23–24). These procedures are performed while either continue is true or recordedRound is equal to currentRound (line 16).

At each round, node_{i} updates its own local information before message exchange and performs the takeSnapshotLocal function (lines 33–35). Next, it selects a random neighbor node from its neighbor list and then sends LS_{i} to the selected neighbor node (lines 36–38). Note that LS_{i} includes the stateNodes data structure. If the result of the send function is true, the iteration is aborted (line 34). This guarantees that node_{i} adheres to exactly-once semantics for message exchange in a round.

**Algorithm 1.** The proposed distributed snapshot algorithm for node_{i} (sending)

```text
 1: begin initialization
 2:   stateNodes[r][j] ← null, ∀r ∈ {1 … max_round}, ∀j ∈ {1 … max_node};
 3:   stateChannel[r][from][to] ← null, ∀r ∈ {1 … max_round}, ∀from,to ∈ {1 … max_node};
 4:   neighborList[p] ← null, ∀p ∈ {1 … max_neighbor};
 5:   recordedRound ← 0;
 6:   failedRound ← null;
 7:   latestSnapshot ← null;
 8:   currentRound ← 0;
 9:   continue ← null;
10:   sended ← null;
11:   timestamp ← null;
12: end
13: begin before starting a round
14:   failedRound ← recordedRound + 1;
15:   continue ← true;
16:   while continue || recordedRound == currentRound do
17:     check stateNodes[failedRound][j], ∀j ∈ {1 … max_node};
18:     if stateNodes[failedRound][j] satisfies the conditions of the GS do
19:       latestSnapshot ← stateNodes[failedRound][j];
20:       recordedRound ← failedRound;
21:       build stateChannel[recordedRound][from][to];
22:       failedRound ← failedRound + 1;
23:       timestamp ← getCurrentTimestamp();
24:       proposeGS(i, r, timestamp);
25:     else
26:       continue ← false;
27:     end if
28:   end
29: end
30: begin at each round
31:   sended ← false;
32:   currentRound ← currentRound + 1;
33:   updateLocalInformation();
34:   while sended is false do
35:     stateNodes[currentRound][i] ← takeSnapshotLocal();
36:     random ← selectRandomNumber(1, max_neighbor);
37:     neighbor ← neighborList[random];
38:     sended ← send(LS_i, neighbor);
39:   end
40: end
41: function updateLocalInformation();
42:   stateNodes[currentRound][i] ← getLocalState();
43:   stateNodes[currentRound][i].timestamp ← getCurrentTimestamp();
44: end
```
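A minimal Python sketch of the per-round sending step (lines 30–40) may help; the send stub, the dictionary layout of stateNodes, and the function names are assumptions for illustration, not the paper's implementation.

```python
import random

# One round of the sending side: take a local snapshot, pick a uniformly
# random neighbor, and retry until one send succeeds, so each node
# performs exactly one successful exchange per round.

def run_round(state, neighbor_list, send, take_snapshot_local):
    state["currentRound"] += 1
    sended = False
    while not sended:
        # Record the local snapshot for this round before each attempt.
        state["stateNodes"][(state["currentRound"], state["id"])] = take_snapshot_local()
        neighbor = random.choice(neighbor_list)   # selectRandomNumber + lookup
        sended = send(state, neighbor)            # True aborts the loop
```

A failed send (returning false) simply retries with a fresh random neighbor, which matches the exactly-once exchange semantics described above.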

node_{i} and not vice versa. When the protocol is configured to pull mode, the passive thread does not receive the sending node's stateNodes data structure; the receive function in Algorithm 2 (line 7) is not performed.

**Algorithm 2.** The proposed distributed snapshot algorithm for node_{i} (receiving)

```text
 1: begin initialization
 2:   roundStart ← null;
 3:   neighbor ← null;
 4: end
 5: repeat
 6:   neighbor ← waitForMessage();
 7:   neighbor.stateNodes ← receive(neighbor);
 8:   updateStateNodes();
 9:   send(LS_i, neighbor);
10: until forever;
11: function updateStateNodes()
12:   roundStart ← recordedRound + 1;
13:   for each stateNodes[r][j], where roundStart < r < currentRound;
14:     if stateNodes[r][j].timestamp < neighbor.stateNodes[r][j].timestamp then
15:       stateNodes[r][j] ← neighbor.stateNodes[r][j];
16:       stateNodes[r][j].timestamp ← neighbor.stateNodes[r][j].timestamp;
17:     end if
18:   end
19: end
```
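The timestamp-based merge in updateStateNodes (lines 11–19) can be sketched as follows in Python; representing stateNodes as a dictionary mapping (round, node) to (state, timestamp) pairs is an assumption for illustration.

```python
# Merge a neighbor's stateNodes into ours, keeping, for each
# (round, node) entry, the copy with the newer timestamp.

def update_state_nodes(mine, theirs):
    for key, (state, ts) in theirs.items():
        if key not in mine or mine[key][1] < ts:
            mine[key] = (state, ts)
    return mine

mine = {(1, 2): ("old", 5)}
theirs = {(1, 2): ("new", 9), (1, 3): ("x", 1)}
merged = update_state_nodes(mine, theirs)  # (1, 2) updated, (1, 3) added
```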

The message complexity of the broadcast-based snapshot protocol is O(n^{2}), and that of the proposed distributed snapshot protocol is O(n), where n is the number of nodes in the system [12].

#### 4.2. Illustrative Example of the Protocol

the number of occurrences of node_{i}’s information appearing in the sum of neighbor lists in the system is 20, since the peer sampling service is based on uniformity of randomness. In this regard, the probability that other nodes do not contact node_{i} becomes extremely low as the round number increases. Hence, all the nodes’ information will be aggregated as the rounds progress. In addition, our algorithmic design of the snapshot protocol does not rely on a central authority or super node. Hence, no single point of failure or performance bottleneck exists.

#### 4.3. Intervention of the Snapshot Protocol

#### 4.4. Proof of the Protocol

Correctness: ∀i, r, t [proposeGS(node_{i}, r, t) ⇒ consistentState(r)],

where proposeGS(node_{i}, r, t) means that node_{i} proposes a consistent global state for round r at time t and consistentState(r) indicates that there exists a consistent global state for the round r.

Liveness: ∀r [consistentState(r) ⇒ ∃i ∃t proposeGS(node_{i}, r, t)].

**Theorem 1.** *The proposed distributed snapshot protocol satisfies the correctness condition.*

**Proof.**

Let node_{i} be the node that proposes a consistent global state for a round r. Based on the specification of the proposed algorithm, node_{i} waits for the stateNodes data structure to be aggregated before it proposes a consistent global state. Since node_{i} is a correct node, it does not propose a consistent global state until the stateNodes data structure is aggregated for all the nodes in the system. After aggregating the stateNodes data structure for the round r, node_{i} checks whether the collected states satisfy the consistent global state, that is, (1) send(m_{ij}) ∈ LS_{i} ⇒ m_{ij} ∈ SC_{ij} ⊕ receive(m_{ij}) ∈ LS_{j} and (2) send(m_{ij}) ∉ LS_{i} ⇒ m_{ij} ∉ SC_{ij} ⋀ receive(m_{ij}) ∉ LS_{j}. If Condition 1 and Condition 2 are met for the stateNodes data structure, node_{i} performs the proposeGS function with a timestamp value. Otherwise, node_{i} does not propose a consistent global state. In other words, node_{i} proposes a consistent global state for the round r if, and only if, there exists a consistent global state for the round r. This contradicts the assumption that a state could be proposed for a round with no consistent global state. Hence, the proposed distributed snapshot protocol satisfies the correctness condition. ☐

**Theorem 2.**

**Proof.**

**Theorem 3.** *The proposed distributed snapshot protocol satisfies the liveness condition.*

**Proof.**

**Basis:** There is one node in the system.

Let node_{i} be the node in the system. Since there is one node in the system, node_{i} performs the proposed algorithm in every round and updates its own local information. According to the specification of the algorithm, the updated local information of node_{i} is stored in the stateNodes data structure. Since the size of the stateNodes data structure is 1, checking a consistent global state is trivial. In other words, in every round, node_{i} updates its local information and proposes a consistent global state by checking Condition 1 and Condition 2. Because there is no message from the send() and receive() functions, the state is always consistent. Hence, the proposed distributed snapshot protocol satisfies the liveness condition when there is one node in the system.

**Induction step (1):** There are k nodes in the system.

**Induction step (2):** There are k + 1 nodes in the system.

Let node_{k+1} be the (k + 1)th node in the system. Suppose the stateNodes data structures of the k nodes are aggregated, except that of node_{k+1}. Since node_{k+1} is a correct node, node_{k+1} follows the specification of the proposed snapshot protocol. Therefore, node_{k+1} updates its local information and saves it to stateNodes. There are two situations in which a node proposes a consistent global state. One is when node_{k+1} sends its stateNodes data structure to a neighbor node. In this case, the receiving neighbor can propose a consistent global state because the receiving neighbor's stateNodes data structure is aggregated for all the nodes. At the same time, node_{k+1} also can propose a consistent global state by retrieving the receiving node's stateNodes data structure. The other is when node_{i} (node_{i} ≠ node_{k+1}) selects node_{k+1} as a target neighbor and retrieves node_{k+1}'s stateNodes data structure. In this situation, both node_{i} and node_{k+1} can propose a consistent global state for the same reason. In short, the local information of node_{k+1} will be disseminated to all other nodes in the system and, eventually, all of the nodes in the system can determine whether a consistent global state exists. Hence, the proposed distributed snapshot protocol satisfies the liveness condition when there are k + 1 nodes. ☐

## 5. Performance Evaluation

The broadcast-based snapshot protocol requires n^{2} messages in one round. Table 2 details the cumulative number of messages in comparison with previous protocols when the number of nodes is 50,000. As the number of nodes increases, the gap between previous protocols and our protocol grows far beyond a logarithmic scale. Furthermore, unlike the broadcast-based snapshot protocol, our approach maintains only a small number of neighbors in the list. This signifies the efficiency of the proposed distributed snapshot protocol.
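As a sanity check on Table 2, the counts follow directly from the per-round costs (n^{2} messages per round for the broadcast-based protocol versus n for the proposed protocol, one send per node per round). This Python fragment reproduces the figures for n = 50,000.

```python
# Cumulative message counts after a given number of rounds, n = 50,000.
n = 50_000

def broadcast_messages(rounds):
    return n * n * rounds   # every node contacts every node, each round

def proposed_messages(rounds):
    return n * rounds       # one send per node per round

assert broadcast_messages(1) == 2_500_000_000     # Table 2, Round 1
assert broadcast_messages(10) == 25_000_000_000   # Table 2, Round 10
assert proposed_messages(10) == 500_000           # Table 2, Round 10
```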

## 6. Conclusions

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## References

- Hassabis, D. Artificial intelligence: Chess match of the century. Nature
**2017**, 544, 413–414. [Google Scholar] [CrossRef] - Moravčík, M.; Schmid, M.; Burch, N.; Lisý, V.; Morrill, D.; Bard, N.; Davis, T.; Waugh, K.; Johanson, M.; Bowling, M. Deepstack: Expert-level artificial intelligence in heads-up no-limit poker. Science
**2017**, 356, 508–513. [Google Scholar] [CrossRef] [PubMed] - Cristea, D.S.; Moga, L.M.; Neculita, M.; Prentkovskis, O.; Md Nor, K.; Mardani, A. Operational shipping intelligence through distributed cloud computing. J. Bus. Econ. Manag.
**2017**, 18, 695–725. [Google Scholar] [CrossRef] - Chen, G.; Wang, E.; Sun, X.; Lu, Y. An intelligent approval system for city construction based on cloud computing and big data. Int. J. Grid High Perform. Comput.
**2016**, 8, 57–69. [Google Scholar] [CrossRef] - Grzonka, D.; Jakóbik, A.; Kołodziej, J.; Pllana, S. Using a multi-agent system and artificial intelligence for monitoring and improving the cloud performance and security. Futur. Gener. Comput. Syst.
**2017**, in press. [Google Scholar] [CrossRef] - Jula, A.; Sundararajan, E.; Othman, Z. Cloud computing service composition: A systematic literature review. Expert Syst. Appl.
**2014**, 41, 3809–3824. [Google Scholar] [CrossRef] - Khoobjou, E.; Mazinan, A.H. On hybrid intelligence-based control approach with its application to flexible robot system. Hum.-Centric Comput. Inf. Sci.
**2017**, 7, 5. [Google Scholar] [CrossRef] - Shi, B.; Li, B.; Cui, L.; Zhao, J.; Li, J. Syncsnap: Synchronized Live Memory Snapshots of Virtual Machine Networks. In Proceedings of the 16th IEEE International Conference on High Performance Computing and Communications, Paris, France, 20–22 August 2014; pp. 490–497. [Google Scholar]
- Han, S.; Shen, H.; Kim, T.; Krishnamurthy, A.; Anderson, T.; Wetherall, D. Metasync: Coordinating storage across multiple file synchronization services. IEEE Int. Comput.
**2016**, 20, 36–44. [Google Scholar] [CrossRef] - Qiang, W.; Jiang, C.; Ran, L.; Zou, D.; Jin, H. Cdmcr: Multi-level fault-tolerant system for distributed applications in cloud. Secur. Commun. Netw.
**2016**, 9, 2766–2778. [Google Scholar] [CrossRef] - He, J.; Wu, Y.; Fu, Y.; Zhou, W. Snapshot-based data index in cloud storage systems. In Proceedings of the 2016 IEEE Information Technology, Networking, Electronic and Automation Control Conference, Chongqing, China, 20–22 May 2016; pp. 784–788. [Google Scholar]
- Lim, J.; Suh, T.; Yu, H. Unstructured deadlock detection technique with scalability and complexity-efficiency in clouds. Int. J. Commun. Syst.
**2014**, 27, 852–870. [Google Scholar] [CrossRef] - Lim, J.; Chung, K.-S.; Gil, J.-M.; Suh, T.; Yu, H. An unstructured termination detection algorithm using gossip in cloud computing environments. In Proceedings of the 26th International Conference on Architecture of Computing Systems (ARCS 2013), Prague, Czech Republic, 19–22 February 2013; Kubátová, H., Hochberger, C., Daněk, M., Sick, B., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 1–12. [Google Scholar]
- Lim, J.; Chung, K.-S.; Chin, S.-H.; Yu, H.-C. A gossip-based mutual exclusion algorithm for cloud environments. In Proceedings of the 7th International Conference on Advances in Grid and Pervasive Computing, Hong Kong, China, 11–13 May 2012; Li, R., Cao, J., Bourgeois, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 31–45. [Google Scholar]
- Lim, J.; Suh, T.; Gil, J.; Yu, H. Scalable and leaderless byzantine consensus in cloud computing environments. Inf. Syst. Front.
**2014**, 16, 19–34. [Google Scholar] [CrossRef] - Kavakiotis, I.; Tsave, O.; Salifoglou, A.; Maglaveras, N.; Vlahavas, I.; Chouvarda, I. Machine learning and data mining methods in diabetes research. Comput. Struct. Biotechnol. J.
**2017**, 15, 104–116. [Google Scholar] [CrossRef] [PubMed] - Yu, N.; Yu, Z.; Gu, F.; Li, T.; Tian, X.; Pan, Y. Deep learning in genomic and medical image data analysis: Challenges and approaches. J. Inf. Process. Syst.
**2017**, 13, 204–214. [Google Scholar] - Zhuang, Y.-T.; Wu, F.; Chen, C.; Pan, Y.-H. Challenges and opportunities: From big data to knowledge in ai 2.0. Front. Inf. Technol. Electron. Eng.
**2017**, 18, 3–14. [Google Scholar] [CrossRef] - Makridakis, S. The forthcoming artificial intelligence (ai) revolution: Its impact on society and firms. Futures
**2017**, 90, 46–60. [Google Scholar] [CrossRef] - Maillo, J.; Ramírez, S.; Triguero, I.; Herrera, F. Knn-is: An iterative spark-based design of the k-nearest neighbors classifier for big data. Knowl.-Based Syst.
**2017**, 117, 3–15. [Google Scholar] [CrossRef] - Erb, B.; Meißner, D.; Habiger, G.; Pietron, J.; Kargl, F. Consistent retrospective snapshots in distributed event-sourced systems. In Proceedings of the 2017 International Conference on Networked Systems (NetSys), Gottingen, Germany, 13–16 March 2017; pp. 1–8. [Google Scholar]
- Zhang, Y.; Gao, Q.; Gao, L.; Wang, C. Maiter: An asynchronous graph processing framework for delta-based accumulative iterative computation. IEEE Trans. Parallel Distrib. Syst.
**2014**, 25, 2091–2100. [Google Scholar] [CrossRef] - Wang, Z.; Gao, L.; Gu, Y.; Bao, Y.; Yu, G. A fault-tolerant framework for asynchronous iterative computations in cloud environments. In Proceedings of the Seventh ACM Symposium on Cloud Computing, Santa Clara, CA, USA, 5–7 October 2016; pp. 71–83. [Google Scholar]
- Zhang, Y.; Liao, X.; Jin, H.; Gu, L.; Tan, G.; Zhou, B.B. Hotgraph: Efficient asynchronous processing for real-world graphs. IEEE Trans. Comput.
**2017**, 66, 799–809. [Google Scholar] [CrossRef] - Wang, Z.; Gu, Y.; Bao, Y.; Yu, G.; Gao, L. An i/o-efficient and adaptive fault-tolerant framework for distributed graph computations. Distrib. Parallel Databases
**2017**, 35, 177–196. [Google Scholar] [CrossRef] - Chandy, K.M.; Lamport, L. Distributed snapshots: Determining global states of distributed systems. ACM Trans. Comput. Syst.
**1985**, 3, 63–75. [Google Scholar] [CrossRef] - Egwutuoha, I.P.; Levy, D.; Selic, B.; Chen, S. A survey of fault tolerance mechanisms and checkpoint/restart implementations for high performance computing systems. J. Supercomput.
**2013**, 65, 1302–1326. [Google Scholar] [CrossRef] - Kim, Y.; Araragi, T.; Nakamura, J.; Masuzawa, T. A concurrent partial snapshot algorithm for large-scale and dynamic distributed systems. IEICE Trans. Inf. Syst.
**2014**, 97, 65–76. [Google Scholar] [CrossRef] - Rezaei, A.; Coviello, G.; Li, C.-H.; Chakradhar, S.; Mueller, F. Snapify: Capturing snapshots of offload applications on xeon phi manycore processors. In Proceedings of the 23rd International Symposium on High-Performance Parallel and Distributed Computing, Vancouver, BC, Canada, 23–27 June 2014; pp. 1–12. [Google Scholar]
- Cui, L.; Li, J.; Wo, T.; Li, B.; Yang, R.; Cao, Y.; Huai, J. Hotrestore: A fast restore system for virtual machine cluster. In Proceedings of the 28th Large Installation System Administration Conference (LISA14), Seattle, WA, USA, 9–14 November 2014; pp. 10–25. [Google Scholar]
- Özsu, M.T.; Valduriez, P. Distributed and parallel database systems. ACM Comput. Surv.
**1996**, 28, 125–128. [Google Scholar] [CrossRef] - Corbett, J.C.; Dean, J.; Epstein, M.; Fikes, A.; Frost, C.; Furman, J.J.; Ghemawat, S.; Gubarev, A.; Heiser, C.; Hochschild, P.; et al. Spanner: Google’s globally distributed database. ACM Trans. Comput. Syst.
**2013**, 31, 1–22. [Google Scholar] [CrossRef] - Ricart, G.; Agrawala, A.K. An optimal algorithm for mutual exclusion in computer networks. Commun. ACM
**1981**, 24, 9–17. [Google Scholar] [CrossRef] - Maekawa, M. A √n algorithm for mutual exclusion in decentralized systems. ACM Trans. Comput. Syst.
**1985**, 3, 145–159. [Google Scholar] [CrossRef] - Sriwanna, K.; Boongoen, T.; Iam-On, N. Graph clustering-based discretization of splitting and merging methods (graphs and graphm). Hum. Centric Comput. Inf. Sci.
**2017**, 7, 21. [Google Scholar] [CrossRef] - Helary, J.-M. Observing global states of asynchronous distributed applications. In Proceedings of the 3rd International Workshop on Distributed Algorithms, Nice, France, 26–28 September 1989; Bermond, J.-C., Raynal, M., Eds.; Springer: Berlin/Heidelberg, Germany, 1989; pp. 124–135. [Google Scholar]
- Birman, K.; Schiper, A.; Stephenson, P. Lightweight causal and atomic group multicast. ACM Trans. Comput. Syst.
**1991**, 9, 272–314. [Google Scholar] - Kshemkalyani, A.D.; Raynal, M.; Singhal, M. An introduction to snapshot algorithms in distributed computing. Distrib. Syst. Eng.
**1995**, 2, 224. [Google Scholar] [CrossRef] - Schneider, F.B. Byzantine generals in action: Implementing fail-stop processors. ACM Trans. Comput. Syst.
**1984**, 2, 145–154. [Google Scholar] [CrossRef] - Lim, J.; Chung, K.-S.; Lee, H.; Yim, K.; Yu, H. Byzantine-resilient dual gossip membership management in clouds. Soft Comput.
**2017**. [Google Scholar] [CrossRef] - Jelasity, M.; Guerraoui, R.; Kermarrec, A.-M.; Steen, M.V. The peer sampling service: Experimental evaluation of unstructured gossip-based implementations. In Proceedings of the 5th ACM/IFIP/USENIX International Conference on Middleware, Toronto, ON, Canada, 18–22 October 2004; Springer: New York, NY, USA, 2004; pp. 79–98. [Google Scholar]
- Allavena, A.; Demers, A.; Hopcroft, J.E. Correctness of a gossip based membership protocol. In Proceedings of the Twenty-Fourth Annual ACM Symposium on Principles of Distributed Computing, Las Vegas, NV, USA, 17–20 July 2005; pp. 292–301. [Google Scholar]
- El abbadi, N. An efficient storage format for large sparse matrices based on quadtree. Int. J. Comput. Appl.
**2014**, 105, 25–30. [Google Scholar]

**Figure 1.** An illustrative example of the proposed snapshot protocol. (**a**) Initial state; (**b**) After updating local information; (**c**) Message exchange; (**d**) Nodes’ state after one round.

**Figure 3.** The number of aggregated nodes in logarithmic scale for Round 1. (**a**) Normal mode; (**b**) Piggyback mode.

**Figure 4.** The average number of aggregated nodes for Round 1. (**a**) Normal mode with varying number of nodes; (**b**) Piggyback mode with varying number of nodes; (**c**) Normal mode with different sizes of neighbor list when the number of nodes is 50,000; (**d**) Piggyback mode with different sizes of neighbor list when the number of nodes is 50,000.

**Figure 5.** The standard deviation of the number of aggregated nodes for Round 1. (**a**) Normal mode; (**b**) Piggyback mode.

Parameter | Value
---|---
Number of nodes | 50, 500, 5000, 50,000
Protocol mode | normal, piggyback
Size of neighbor list | 5, 10, 20, 40
Number of rounds | 10

**Table 2.**The cumulative number of messages and size of received data for a node in comparison with previous protocols when the number of nodes is 50,000.

Method | Category | Round 1 | Round 5 | Round 10
---|---|---|---|---
Broadcast-based | Number of messages | 2,500,000,000 | 12,500,000,000 | 25,000,000,000
Broadcast-based | Size of received data | 41.5 MB | 207.9 MB | 415.8 MB
Proposed (normal) | Number of messages | 50,000 | 250,000 | 500,000
Proposed (normal) | Size of received data | 872 B | 4.2 KB | 8.5 KB
Proposed (piggyback) | Number of messages | 50,000 | 250,000 | 500,000
Proposed (piggyback) | Size of received data | 26.2 MB | 192.5 MB | 400.4 MB

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Lim, J.; Gil, J.-M.; Yu, H.
A Distributed Snapshot Protocol for Efficient Artificial Intelligence Computation in Cloud Computing Environments. *Symmetry* **2018**, *10*, 30.
https://doi.org/10.3390/sym10010030
