Article

Space Habitat Data Centers—For Future Computing

Department of Electrical and Electronic Engineering Technology, Doornfontein Campus, University of Johannesburg, Johannesburg 2092, South Africa
*
Authors to whom correspondence should be addressed.
Symmetry 2020, 12(9), 1487; https://doi.org/10.3390/sym12091487
Submission received: 7 July 2020 / Revised: 7 August 2020 / Accepted: 14 August 2020 / Published: 10 September 2020

Abstract

Data from sensor-bearing satellites require processing at terrestrial data centres that use water for cooling, at the expense of high data-transfer latency. The reliance of terrestrial data centres on water increases their water footprint and limits the availability of water for other applications. Therefore, data centres with low data-transfer latency and reduced reliance on Earth’s water resources are required. This paper proposes space habitat data centres (SHDCs) with low latency data transfer that use asteroid water to address these challenges. The paper investigates the feasibility of accessing asteroid water and the reduction in computing platform access latency. Results show that the mean asteroid water access period is 319.39 days. The use of SHDCs instead of non-space computing platforms reduces access latency and increases accessible computing resources by 11.9–33.6% and 46.7–77% on average, respectively.

1. Introduction

Terrestrial cloud data centres have high powering [1,2,3] and cooling (using water) [4,5] costs. These costs have prompted the design of solutions [6,7,8,9,10] that reduce power consumption and the water footprint. The water footprint can also be reduced, to a limited extent, by leveraging free air cooling [11,12]. The ocean provides free water cooling and can host data centres [13,14,15], at the risk of degrading marine biodiversity.
Siting data centres in space can reduce the water footprint [16,17,18,19]. The approach in references [16,18] utilises small satellites to realize space-based data centres. Small satellites used as data centres have reduced uptime when faults occur because they are unmanned, and it is also challenging to upgrade their computing payload after launch. Outer space also hosts satellites and spaceships [19,20,21]. Spaceships such as space habitats can support humans (making it easy to address faults) and host a larger computing payload. This paper therefore proposes a manned space habitat data centre (SHDC) for processing space and non-space application data.
The discussion in this paper addresses the challenge of reducing the water footprint of data centres. The paper proposes that data centres be sited in outer space (Earth orbit) and make use of asteroid water for cooling, thereby reducing the data centre water footprint on Earth. The proposed data centre is hosted in a space habitat. The use of a space-based SHDC also enables low latency computing platform access by satellites with data requiring processing.
The contributions of the paper are further discussed below:
The paper proposes manned space habitat data centres (SHDCs) for data processing. In addition, the paper presents the architecture for the design of the proposed SHDC, describing the aspects related to the access and use of asteroid water for cooling the SHDC. The paper also describes communications between SHDCs, computer-resource-constrained space assets, and the ground segment, as well as the SHDC design and the computing entities enabling data processing and communications. Finally, the benefits of using the SHDC are formulated and investigated via performance simulation and evaluation.

2. Background

Avgerinou et al. [1] examine the power consumption of modern data centres. The discussion recognizes that the operation of data centres significantly increases carbon dioxide emissions. In addition, data centre facilities comprise active and redundant powering and cooling systems, which are recognized to consume a significant amount of power. The global power consumption of data centres increased by 19.7% over the five-year period spanning 2007 to 2012. The paper presents an analysis of the power usage effectiveness (PUE) of data centres for the period 2009–2016 and examines the role of the data centre location in influencing the PUE. The presented results show that the lowest PUEs are obtained for data centres sited in Nordic countries (average PUE of 1.71) and northern/central Europe (average PUE of 1.72). The discussion by Avgerinou et al. [1] aims to determine the best practices that can enhance data centre energy efficiency, i.e., ensure a low PUE and reduce carbon dioxide emissions, so that future data centres are environmentally friendly. The focus of the discussion in Avgerinou et al. [1] is twofold. The first is determining measures, such as the use of energy from alternative and sustainable sources, that can reduce the degrading effect of data centres on the environment. The second is determining best practices that provide useful information for data centre operators, enabling them to improve the energy efficiency of data centre facilities. A similar perspective, but with a focus on analysing PUE trends, is examined by Hintemann et al. [2].
Though the use of free cooling is recognized as suitable, its consideration in these studies is limited to the terrestrial environment. However, free cooling of data centres also applies to other domains, such as the underwater environment, whose suitability has received research consideration in several studies by Periola [13], Periola et al. [14], and Cutler et al. [15].
The need to reduce the water footprint is considered by Wang et al. [3]. Water usage effectiveness (WUE) is recognized as a metric that is important to data centre operation. The challenge addressed in Wang et al. [3] is the joint optimization of water usage and power consumption for a data centre coupled with a micro-grid system. The discussion presents a scheduling mechanism that aims to reduce the costs and fees associated with energy payment and water usage; this joint optimization, i.e., reducing water and electricity costs, is the novelty of the contribution in Wang et al. [3]. The aim is to determine the combination of water and electricity usage that results in the most profitable operation of the data centre facility. The analysis in Wang et al. [3] recognizes that the ideal WUE figure is zero; however, a WUE of zero is assumed to be infeasible because the focus of the analysis is the optimization of terrestrial data centre systems. A condition that can make the WUE ideally zero is the use of a cooling system that is not reliant on Earth’s water resources. However, research to determine the existence and feasibility of cooling systems with a zero WUE was not considered in Wang et al. [3]; instead, the focus is on determining the optimal water and power usage for a given data centre with different computing workload sizes.
Zhang et al. [11] recognize the potential and role of free cooling technologies in enhancing data centre power usage effectiveness. Different free cooling strategies, such as direct fresh air, the rotating wheel heat exchanger, the heat pipe back rack, and water-driven free cooling systems, are recognized. The air free cooling and heat pipe back rack approaches are recognized to have low power consumption, while the rotating wheel heat exchanger and water free cooling approaches have high power consumption. The pump required in the water free cooling approach contributes to the high costs. The discussion in Zhang et al. [11] aims to identify free cooling technologies suitable for reducing data centre operational costs. It is important to note that water free cooling technologies are suitable for situations with a high cooling load. However, the required additional evaluation of the environmental effects of using water as a free cooling medium for data centres does not receive consideration.
The examined literature focuses on the design of different free cooling strategies for terrestrial data centres. However, data centres can also benefit from free cooling in other locations, such as the stratosphere [13] and underwater environments [13,14]. In addition, data centres are not used in isolation but constitute important entities in cloud radio access networks, as seen in Periola [13]. Therefore, it is important that data centres are sited close to subscribers to achieve low latency content access. However, the previous focus has been on enhancing the PUE and reducing the WUE. Preferred locations for siting data centres are naturally cold-climate regions such as Scandinavia, North America, and Europe, because countries in these regions have naturally low environmental temperatures. However, this is not the case for subscribers in locations such as the African continent.
Therefore, subscribers in Africa seeking to access content from data centres located in distant European locations experience high content access latency. The use of underwater data centres is recognized in references [13,15] to be capable of reducing the content access latency for coastal subscribers. Siting data centres in the ocean, however, differs from using terrestrial data centres.
The use of underwater data centres poses long-term questions and concerns regarding scalability, environmental friendliness, and sustainability. For example, the suitability of the ocean will attract several data centre operators, increasing the competition for suitable ocean zones and locations. Therefore, it is important to design mechanisms enabling an equitable sharing of the ocean’s resources among the multiple potential data centre operators of the future. The concern about long-term environmental friendliness and sustainability arises because of the increase in ocean temperature due to the presence of multiple underwater data centres; this increase poses an existential threat to life in the ocean. These challenges limit the large-scale use of underwater data centres for future computing and networking applications.
In addition, the discussion has not considered the processing of the increasing amount of data arising from deployed in-orbit satellites. This is considered by Wang et al. [19], who propose a space edge computing paradigm comprising multiple satellites. The satellites can acquire, store, process, and forward data from end user stations; examples of the considered end user stations are vehicles, airplanes, ships, buoys, and sensors. The satellites can virtualize on-board resources to form a resource pool. The motivation for the proposed space edge computing system is to improve the quality of service of a space-based internet of things system, resulting in lower latency and improved coverage of terrestrial–satellite communications. The satellites in the proposed edge computing mechanism are reconfigurable, software defined satellites. A challenge not considered in Wang et al. [19] is the sufficiency of the computing resources aboard the intelligent satellites that are nodes in the space edge computing network. The capacity of the space-based edge-computing platform does not necessarily grow with the number of launched and in-orbit satellites, and an increase in the number of launched satellites can overwhelm the computing capacity of the nodes in the space edge computing network.
The use of multiple edge computing platforms with increased computing resources is suitable and beneficial to address such a challenge. However, the use of multiple space-based edge-computing platforms has the potential drawback of increased overhead arising from the additional network coordination required. The use of space-based data centres with increased computational resources is suitable in this regard. In addition, siting data centres in space is beneficial because of the low dependence on Earth’s water resources, thereby enabling the realization of a zero WUE.
There has been an increasing maturation of space technology leading to more emerging applications. This has led to increased activity in space exploration and colonization via space vehicles and space habitats.
Space technologies find applications in planetary science and space colonization. Space colonization has received attention from private organizations interested in commercializing space. Yakolev [22] recognizes that the use of space houses (realizable via space habitats) is required for Mars exploration.
Smitherman et al. [23] recognize the role of space habitats in missions such as asteroid retrieval and access and deep space missions. Space vehicles have varying capabilities depending on crew size and scientific payload. Space habitat design concepts are derivable from International Space Station and Space Launch System design perspectives; concepts from these perspectives ultimately aim to provide a comfortable living space in orbit. The suitability of space habitats for deep space exploration is considered in Smitherman et al. [23], where space habitats are proposed for asteroid retrieval, lunar and Martian vehicle servicing, and deep-space operation missions. The discussion does not consider the suitability of space habitats or other space technologies for executing computing-related applications in space; the potential of space habitat technologies for computing applications has therefore not been considered.
Griffin et al. [24] identify the knowledge gaps that must be addressed for space habitat design. The presence of an autonomous communication system aboard space habitats is deemed to be of medium priority. However, the discussion in Griffin et al. [24] does not present a space habitat network architecture; instead, the focus is on the launch and radiation aspects of space vehicle design. The study by Smitherman et al. [25] describes how habitat modules can be aggregated to realize the deep space gateway. The large volume enclosed in the space habitat is used as a research facility to support deep-space science research and technology development. These applications focus on realizing space mining; other possible applications are not considered.
Kalam [26] discusses the application of space habitats in Mars exploration, aiming to address the Earth-based challenge of energy security via lunar solar energy conversion. Lia et al. [27] propose reusing space habitat technologies to realize solutions that address Earth’s challenges; this is suitable because space stations are designed to operate in a harsh environment. The discussion shows that advances in space technology enable the realization of novel and sustainable solutions that address Earth-based challenges.
Space applications aim to leverage space for human benefit [28,29,30]. The emergence of several mega-constellation projects, such as the SpaceX, OneWeb, Telesat, and Amazon Kuiper satellite mega-constellations, as applications that exploit space for wireless communications is recognized by Morad et al. [29].
The applications in references [28,29,30] exploit space technologies to realize space commercialization [31,32,33,34] and continental space initiatives [35]. Davis [32] recognizes the increasing private-sector role in space commercialization and identifies and addresses the regulatory challenges of space commercialization. Current efforts in space commercialization largely target communications and Earth observation [29,36].
Shammas et al. [36] examine space commercialization from the perspective of hosting more applications besides Earth observation and communications. In addition, Shammas et al. [36] note the increasing private-sector involvement in space supply provision and space mining. The applications in Shammas et al. [36] exclude scientific experiments that focus on deriving knowledge from space-related activities [37,38].
The increasing deployment of terrestrial cloud platforms is expected to process space data and support access to multimedia content and cloud services for wireless subscribers. This requires installing more terrestrial data centres with a high water footprint, which reduces the water available for other applications. Data centres can be sited in locations such as the stratosphere to reduce the water footprint. However, siting data centres in the stratosphere poses interference risks to radio astronomy, similar to those posed by mega-constellation satellite networks [38,39,40,41]. The realization of a space habitat computing platform does not pose interference risks to radio astronomy.
Adams [42] considers leveraging the outer space environment, aided by a nitrogen-filled pod, to realize cooling and eliminate explosion hazards. The use of nitrogen has high costs; moreover, the production of the nitrogen in Adams [42] requires the use of Earth’s resources.
The use of computing platforms also enhances satellite data processing. Intelligent satellites are suitable as space edge-computing nodes [43,44] that reduce the satellite-to-computing-platform transmission latency. Wang et al. [19] describe the capabilities enabling the use of satellites as edge nodes; the intelligent satellite is used in edge computing to realize low latency data processing in the satellite internet of things. Shifting edge nodes to space reduces the long-term reliance on Earth’s resources; however, this perspective is not considered by Wang et al. [19].
Lai et al. [43] propose a novel network that incorporates edge computing in geostationary networks, where the space edge node is a geostationary satellite. Denby et al. [44] propose the orbital edge-computing paradigm, which collocates sophisticated processing hardware along with sensors in small satellites. The discussion in Denby et al. [44] differentiates between cloud computing and edge nodes and recognizes that cloud platforms require backhaul network access. The discussions in references [42,43,44] note that siting data centres in space is appealing for computing. Research in references [16,19,43,44] describes strategies explaining how edge computing enhances satellite network applications.
An alternative approach to accessing computing resources by space assets is presented by Straub et al. [45], who propose that satellites use the idle computing resources of space vehicles for data processing. Space vehicles with idle computing resources constitute the space computing platform. A computing bottleneck occurs when space vehicles do not have sufficient resources to host external applications; small satellite edge nodes can be used to resolve this bottleneck.
However, Denby et al. [44] point out that the availability of computing platforms can enhance data processing; an absence of in-orbit computing platforms therefore degrades space data processing. This can be addressed by increasing the space segment’s computing resources. Space data processing via space computing platforms thus reduces the water footprint and the computing platform access latency, and enhances the accessible space segment computing resources.
Water is suitable for data centre free cooling at the risk of increased water insecurity. It is also important to reduce the data transfer latency between small satellites and computing platforms. The perspective in this paper is that advances in space technology can be used to address these challenges.

3. Problem Description

The paper considers data centre operators seeking to process space data and to site data centres near water sources. Let $\alpha$ be the set of terrestrial data centre operators:

$\alpha = \{\alpha_1, \alpha_2, \ldots, \alpha_A\}$  (1)

The set of terrestrial data centres deployed by the $a$-th operator $\alpha_a$, $\alpha_a \in \alpha$, is given by

$\alpha_a = \{\alpha_a^1, \alpha_a^2, \ldots, \alpha_a^I\}$  (2)

The cooling indicator of the $i$-th Earth-based terrestrial data centre of the $a$-th operator, $\alpha_a^i \in \alpha_a$, is denoted $I(\alpha_a^i) \in \{0, 1\}$. The values $I(\alpha_a^i) = 0$ and $I(\alpha_a^i) = 1$ signify that the data centre $\alpha_a^i$ is not water cooled and is water cooled, respectively. In addition, let $\beta(\alpha_a^i, t_y)$, $t_y \in t$, $t = \{t_1, t_2, \ldots, t_Y\}$, be the water footprint of the Earth-based terrestrial data centre $\alpha_a^i$ at the epoch $t_y$. Furthermore, let $\gamma$ be the set of ground locations of terrestrial data centres:

$\gamma = \{\gamma_1, \gamma_2, \ldots, \gamma_B\}$  (3)

The location indicator of data centre $\alpha_a^i$ at location $\gamma_b$, $\gamma_b \in \gamma$, at epoch $t_y$ is denoted $I(\alpha_a^i, \gamma_b, t_y) \in \{0, 1\}$. The data centre $\alpha_a^i$ is not located and is located at $\gamma_b$ at epoch $t_y$ when $I(\alpha_a^i, \gamma_b, t_y) = 0$ and $I(\alpha_a^i, \gamma_b, t_y) = 1$, respectively. In the discussion here, terrestrial data centres utilise Earth’s water resources, and the water being used is at a temperature suitable for data centre cooling. The use of terrestrial data centres poses a challenge to water security when

$\sum_{a=1}^{A} \sum_{i=1}^{I} \sum_{b=1}^{B} \sum_{y=1}^{Y} I(\alpha_a^i)\, I(\alpha_a^i, \gamma_b, t_y)\, \beta(\alpha_a^i, t_y) \geq \sum_{b=1}^{B} w(\gamma_b)$  (4)

where $w(\gamma_b)$ is the amount of water resources available at the $b$-th Earth-based terrestrial location $\gamma_b$.
Let $\phi$ be the set of applications requiring access to water resources, such that

$\phi = \{\phi_1, \phi_2, \ldots, \phi_C\}$  (5)

In addition, let $w(\phi_c, \gamma_b)$, $\phi_c \in \phi$, be the water footprint of the $c$-th application $\phi_c$ at location $\gamma_b$. The demand for water by other applications gives rise to water access challenges given the condition

$\left(\sum_{a=1}^{A} \sum_{i=1}^{I} \sum_{b=1}^{B} \sum_{y=1}^{Y} I(\alpha_a^i)\, I(\alpha_a^i, \gamma_b, t_y)\, \beta(\alpha_a^i, t_y)\right) + \left(\sum_{c=1}^{C} \sum_{b=1}^{B} w(\phi_c, \gamma_b)\right) \geq \sum_{b=1}^{B} w(\gamma_b)$  (6)
The relation in (6) holds true under different conditions, such as

$\left(\sum_{a=1}^{A} \sum_{i=1}^{I} \sum_{b=1}^{B} \sum_{y=1}^{Y} I(\alpha_a^i)\, I(\alpha_a^i, \gamma_b, t_y)\, \beta(\alpha_a^i, t_y)\right) > \left(\sum_{c=1}^{C} \sum_{b=1}^{B} w(\phi_c, \gamma_b)\right)$  (7)

$\left(\sum_{a=1}^{A} \sum_{i=1}^{I} \sum_{b=1}^{B} \sum_{y=1}^{Y} I(\alpha_a^i)\, I(\alpha_a^i, \gamma_b, t_y)\, \beta(\alpha_a^i, t_y)\right) \approx \left(\sum_{c=1}^{C} \sum_{b=1}^{B} w(\phi_c, \gamma_b)\right)$  (8)

If (6) and (7) hold true, the data centre water footprint exceeds that of other applications; in this case, the data centre water demand overwhelms the water supply in the considered Earth locations, and this is described as case C1. The case where (6) and (8) hold true, in which the data centre water footprint is roughly equal to that of other applications, is described as case C2. In both C1 and C2, the water demand of data centres and existing applications jointly overwhelms the water supply source. The discussion here aims to reduce the data centre water footprint.
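As a concrete illustration of conditions (6)–(8), the following minimal Python sketch classifies the demand profile at one location. It is not part of the paper's formulation; the function name, the tolerance used for "roughly equal" in (8), and all numeric values are assumptions for illustration only.

```python
# Illustrative sketch (not from the paper): classify the water-security cases
# C1 and C2 of Section 3 from aggregate demand figures. All values are
# hypothetical and expressed in megalitres per epoch.

def classify_water_security(dc_demand, app_demand, supply, tol=0.05):
    """Return 'C1', 'C2', or 'secure' for one location.

    dc_demand  -- summed water footprint of water-cooled data centres (first term of Eq. 6)
    app_demand -- summed water footprint of other applications (second term of Eq. 6)
    supply     -- available water resources, i.e. the sum of w(gamma_b)
    tol        -- relative tolerance standing in for "roughly equal" in Eq. (8)
    """
    if dc_demand + app_demand < supply:          # relation (6) does not hold
        return "secure"
    if abs(dc_demand - app_demand) <= tol * max(dc_demand, app_demand):
        return "C2"                              # relations (6) and (8) hold
    if dc_demand > app_demand:
        return "C1"                              # relations (6) and (7) hold
    return "supply exceeded, application-dominated"

print(classify_water_security(dc_demand=120.0, app_demand=95.0, supply=180.0))   # C1
print(classify_water_security(dc_demand=100.0, app_demand=98.0, supply=150.0))   # C2
```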
Let $\varsigma$ and $\vartheta$ be the sets of small satellites and space vehicles requiring data processing, respectively:

$\varsigma = \{\varsigma_1, \varsigma_2, \varsigma_3, \ldots, \varsigma_D\}$  (9)

$\vartheta = \{\vartheta_1, \vartheta_2, \vartheta_3, \ldots, \vartheta_E\}$  (10)

Let $C_I(\varsigma_d, t_y)$ and $C_I(\vartheta_e, t_y)$, $\vartheta_e \in \vartheta$, be the amounts of idle computing resources aboard the $d$-th small satellite $\varsigma_d$ and the $e$-th space vehicle $\vartheta_e$ at epoch $t_y$, respectively. In our consideration, a small satellite can host multiple applications. This is not captured in (9); the relation in (9) formulates the set of small satellites rather than the set of applications hosted by a given small satellite. For example, the small satellite $\varsigma_d$ hosting multiple applications can be described as

$\varsigma_d = \{\varsigma_d^1, \varsigma_d^2, \ldots, \varsigma_d^F\}$  (11)
The computing resources required to execute the $f$-th application $\varsigma_d^f$, $\varsigma_d^f \in \varsigma_d$, at epoch $t_y$ are denoted $C_1(\varsigma_d^f, t_y)$. Small satellites expected to process data have a challenge accessing computing resources when

$C3: \quad \sum_{d=1}^{D} \sum_{f=1}^{F} \sum_{y=1}^{Y} C_1(\varsigma_d^f, t_y) \geq \sum_{d=1}^{D} \sum_{y=1}^{Y} C_I(\varsigma_d, t_y)$  (12)

$C4: \quad \sum_{d=1}^{D} \sum_{f=1}^{F} \sum_{y=1}^{Y} C_1(\varsigma_d^f, t_y) \geq \sum_{e=1}^{E} \sum_{y=1}^{Y} C_I(\vartheta_e, t_y)$  (13)

$C5: \quad \sum_{d=1}^{D} \sum_{f=1}^{F} C_1(\varsigma_d^f, t_y) < \sum_{d=1}^{D} \sum_{f=1}^{F} C_1(\varsigma_d^f, t_{y+1}) < \sum_{d=1}^{D} \sum_{f=1}^{F} C_1(\varsigma_d^f, t_{y+2})$  (14)

The conditions C1 and C2 describe the challenges associated with reducing terrestrial data centres’ reliance on Earth’s water resources. The conditions C3, C4, and C5 describe the challenges associated with accessing computing resources for processing small satellite data.
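To make C3–C5 concrete, the minimal sketch below evaluates the three conditions over per-epoch demand and idle-capacity series. The series, units, and numbers are hypothetical assumptions, not values from the paper.

```python
# Illustrative sketch (not from the paper): evaluate the compute-access
# conditions C3-C5 of Section 3 over per-epoch workload and idle-capacity
# series. Units and numbers are hypothetical (e.g., normalized core-hours).

def compute_access_challenges(app_demand, sat_idle, vehicle_idle):
    """app_demand   -- per-epoch total demand, i.e. the sum over d and f of C1
       sat_idle     -- per-epoch total idle resources on small satellites
       vehicle_idle -- per-epoch total idle resources on space vehicles"""
    c3 = sum(app_demand) >= sum(sat_idle)        # demand exhausts small-satellite capacity
    c4 = sum(app_demand) >= sum(vehicle_idle)    # demand exhausts space-vehicle capacity
    c5 = all(a < b for a, b in zip(app_demand, app_demand[1:]))  # demand grows epoch to epoch
    return {"C3": c3, "C4": c4, "C5": c5}

demand = [40.0, 55.0, 72.0]          # rising workload from in-orbit satellites
sat_idle = [30.0, 30.0, 30.0]
vehicle_idle = [50.0, 50.0, 50.0]
print(compute_access_challenges(demand, sat_idle, vehicle_idle))
# {'C3': True, 'C4': True, 'C5': True}
```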

4. Proposed Solution

This section presents the solutions proposed to address the identified challenges. It has three parts: the first presents the SHDC network architecture, the second presents SHDC computing resource access (addressing C3–C5), and the third presents asteroid water access for the SHDC (addressing C1–C2).

4.1. SHDC Network Architecture

The proposed solution reduces data centre demand on Earth’s water resources. Instead of realizing the data centre by integrating the storage and computing capabilities aboard small satellites, the use of a manned SHDC is proposed. The use of small satellites and space vehicles in a distributed architecture is suited for realizing edge nodes; however, it is challenging to upgrade their computing payload after launch. Manned SHDCs support upgrading the data centre computing payload and replacing damaged payload components.
The proposed SHDC comprises the cooling system (CLS), the server and computing system (SCS), and the communication system (CCS). The CCS enables communications with space edge-computing nodes, other SHDCs, and ground stations. The CLS’s coolant is asteroid water; this is feasible because meteorites and asteroids host water reservoirs [46,47]. The relation between the CLS, SCS, CCS, and the Earth segment is shown in Figure 1. The SCS receives workload from the terrestrial ground station gateway (TSGW) via the CCS. In Figure 1, the SHDC in low Earth orbit (LEO) executes the workload received from space and terrestrial assets.
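To make the subsystem roles concrete, the sketch below models the CLS/SCS/CCS split as simple Python classes. The class interfaces, capacities, and the compute-cost rule are illustrative assumptions only, not part of the paper's design.

```python
# Illustrative sketch (assumptions only): the three SHDC subsystems named in
# Section 4.1 modeled as minimal Python classes. Capacities, units, and method
# names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class CoolingSystem:            # CLS: asteroid-water cooling loop
    water_reserve_litres: float

@dataclass
class CommunicationSystem:      # CCS: links to edge nodes, other SHDCs, ground
    inbound_queue: list = field(default_factory=list)

    def receive_workload(self, source: str, megabytes: float):
        self.inbound_queue.append((source, megabytes))

@dataclass
class ServerComputingSystem:    # SCS: servers that execute received workload
    idle_capacity: float        # e.g., normalized core-hours

    def execute(self, megabytes: float) -> bool:
        cost = 0.1 * megabytes  # assumed compute cost per MB of workload
        if cost <= self.idle_capacity:
            self.idle_capacity -= cost
            return True
        return False            # insufficient resources: defer to another SHDC

# Workload arriving from the terrestrial ground station gateway (TSGW) via the CCS.
ccs = CommunicationSystem()
scs = ServerComputingSystem(idle_capacity=100.0)
ccs.receive_workload("TSGW", megabytes=250.0)
for source, size in ccs.inbound_queue:
    print(source, "executed" if scs.execute(size) else "deferred")
```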

4.2. Proposed Solution—SHDC Access to Computing Resources

Inter-communication between SHDCs becomes necessary when the SHDCs within range of the requesting space vehicles, or the other SHDCs, have insufficient computing resources, as described in C3–C5. This can be addressed by increasing SHDC computing capability or launching more SHDCs. Let $\theta$ be the set of SHDCs:

$\theta = \{\theta_1, \theta_2, \theta_3, \ldots, \theta_J\}$  (15)

The computing capability and the idle computing resources of the $j$-th SHDC $\theta_j$, $\theta_j \in \theta$, are denoted $C_1(\theta_j)$ and $C_I(\theta_j, t_y)$, respectively. The mean idle computing resource available on each SHDC is determined over a given duration and shared with the other SHDCs. The $(j+1)$-th SHDC $\theta_{j+1}$, $\theta_{j+1} \in \theta$, is the most suitable for data processing if

$\max\left(\sum_{y=1}^{Y} C_I(\theta_1, t_y), \ldots, \sum_{y=1}^{Y} C_I(\theta_j, t_y), \sum_{y=1}^{Y} C_I(\theta_{j+1}, t_y)\right) = \sum_{y=1}^{Y} C_I(\theta_{j+1}, t_y)$  (16)
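As a concrete illustration of the selection rule in (16), the following minimal Python sketch picks the SHDC whose reported idle resources, summed over the reporting epochs, are largest. The SHDC names and capacity values are assumptions for illustration only.

```python
# Illustrative sketch (not from the paper): pick the SHDC for offloading by the
# selection rule of Eq. (16), i.e. the SHDC whose idle computing resources,
# summed over the reporting epochs, are largest. Values are hypothetical.

def select_shdc(idle_reports):
    """idle_reports -- dict mapping SHDC name to a list of per-epoch idle
    resources C_I(theta_j, t_y) shared over the inter-SHDC links."""
    return max(idle_reports, key=lambda shdc: sum(idle_reports[shdc]))

reports = {
    "theta_1": [12.0, 10.5, 11.0],
    "theta_2": [18.0, 17.5, 16.0],   # largest cumulative idle capacity
    "theta_3": [9.0, 14.0, 13.5],
}
print(select_shdc(reports))          # -> theta_2
```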
The process of accessing computing resources in an SHDC network is shown in Figure 2. If an SHDC does not have sufficient computing resources, the workload is fragmented and processed in a distributed manner across the SHDC network.
The SHDC intended for use in Figure 2 is hosted in an international computing space station with provider-specific interfaces (PSIs). The SHDC is located in low Earth orbit, but at an altitude higher than that of the targeted small satellite networks. The PSIs enable computing platform service providers to host data centres in the space habitat. The CLS, SCS, and CCS are present in each SHDC. PSIs are attached to the international computing space station via space computing nodes (SCNs). The roles of the PSIs, SCNs, and the computing and algorithm execution entities are shown in Figure 3. Each data centre comprises a PSI through which subscribers access computing services via the CCS. Figure 3 shows eight PSIs, denoted PSI$_x$, $x \in \{1, 2, 3, 4, 5, 6, 7, 8\}$, which are attached to the subsystem monitoring chambers via the SCNs. SCNs host sensors and enable data transfer from the PSIs to the subsystem monitoring chambers.
The architecture also shows sub-components supporting SHDC functioning: the living space for data centre engineers, the store, and the subsystem monitoring chambers. The subsystem monitoring chambers host components that monitor data centre performance and the availability of water for SHDC cooling. The store holds spare sub-systems and devices; it is allocated to different organizations and can be replenished by supply trips to the SHDC. Each PSI communicates with ground-based entities and satellites via the CCS. The SHDC’s computing payload is operated by the engineers in the living space, who execute maintenance procedures in a manner similar to terrestrial data centres.

4.3. SHDC—Supplying Asteroid Water

The SHDC requires access to asteroid water for cooling. The supply of asteroid water involves three entities: the space water reservoir entity (SWRE), the asteroid water mining entity (AWME), and the computing platform service provider. The SWRE and AWME can also supply water to other outer space applications that require water or products directly derived from water. In the consideration here, the SWRE is located in a low-temperature location in Earth orbit, at a significantly higher altitude than the SHDC. The SWRE remains far from the Sun because even the maximum low Earth orbit altitude is negligible compared with the Sun–Earth distance; this large separation implies that heat from the Sun does not significantly increase the asteroid water temperature. In addition, the SWRE enables the execution of other space applications, such as those requiring the provision of hydrogen fuel, as identified by Molag et al. [48]. The SWRE receives information on SHDC locations and stores and supplies asteroid water to the SHDCs via water supply vehicles. The water supply vehicles deliver low-temperature asteroid water (drawn from the SWRE) to a given SHDC and incorporate cooling systems to maintain the asteroid water at low temperature. In addition, the use of a water supply vehicle reduces the asteroid water supply delay in comparison to the case where asteroid water is mined and supplied directly to the SHDC. Relations between the AWME, SWRE, and SHDC are shown in Figure 4. A flowchart of SWRE and SWR functionality is shown in Figure 5.
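The resupply relation implied by Figures 4 and 5 can be sketched as a simple reorder loop: when an SHDC's water reserve falls below a threshold, the SWRE dispatches a supply vehicle. The thresholds, consumption rate, and delivery size below are hypothetical assumptions, not figures from the paper.

```python
# Illustrative sketch (assumptions only): the asteroid-water resupply loop
# implied by Figures 4 and 5. The SWRE dispatches a water supply vehicle when
# an SHDC's reserve falls below a reorder threshold. All quantities are
# hypothetical (litres and litres per epoch).

def resupply_schedule(initial_reserve, usage_per_epoch, reorder_level, delivery_size, epochs):
    """Return the epochs at which a supply vehicle is dispatched to the SHDC."""
    reserve = initial_reserve
    dispatch_epochs = []
    for epoch in range(epochs):
        reserve -= usage_per_epoch          # coolant consumed by the CLS
        if reserve <= reorder_level:        # SWRE is notified via the CCS
            dispatch_epochs.append(epoch)
            reserve += delivery_size        # supply vehicle tops up the reserve
    return dispatch_epochs

print(resupply_schedule(initial_reserve=10000, usage_per_epoch=900,
                        reorder_level=3000, delivery_size=6000, epochs=30))
```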

5. Performance Formulation

The performance model assumes that small satellites have limited computing capability, necessitating access to additional computing resources. In existing work [19,44], the data requiring processing are transmitted to the terrestrial cloud computing platform. The formulated metrics are the computing platform access latency and the accessible computing resources in the space segment. This section has two parts: the first formulates the cloud platform access latency and the second formulates the accessible computing resources in the space segment.

5.1. Computing Platform Access Latency

This section develops the performance model describing the access latency associated with using the computing resources aboard computing platforms, i.e., data centres. The formulation differs from that associated with using the space edge computing platform in Wang et al. [19]: here, data centres, recognized as suitable in [44], are considered rather than the edge-computing nodes of Wang et al. [19]. The formulation extends the consideration of space edge computing nodes to accessing computing resources aboard data centres located in the stratosphere and in space (the SHDC). The consideration of the SHDC extends [44], where the suitability of a space-based data centre is recognized but without a mathematical formulation. The novelty of the formulation in comparison to the existing work in Wang et al. [19] is that satellites process their data on data centres instead of space edge computing platforms. Denby et al. [44] recognize that the use of space-based computing platforms is beneficial, because data centres host a significant number of servers and have more computing resources than a space edge-computing node.
A small satellite $\varsigma_d$ can access computing resources from platforms sited in different locations, i.e., terrestrial, stratospheric, or space-based (the SHDC). Let $\mathcal{F}$ be the set of stratosphere-based computing platforms:

$\mathcal{F} = \{\mathcal{F}_1, \mathcal{F}_2, \ldots, \mathcal{F}_V\}$  (17)

The altitudes of the small satellite $\varsigma_d$, the SHDC $\theta_j$, the space vehicle $\vartheta_e$, and the stratosphere-based computing platform $\mathcal{F}_z$, $\mathcal{F}_z \in \mathcal{F}$, are $h(\varsigma_d)$, $h(\theta_j)$, $h(\vartheta_e)$, and $h(\mathcal{F}_z)$, respectively. In addition, let $T_h(\varsigma_d, q, t_y)$, $q \in \{\theta_j, \mathcal{F}_z, \vartheta_e, \alpha_a^i\}$, denote the link speed between the small satellite and the computing platform entity $q$ at the epoch $t_y$. The amount of data from small satellite $\varsigma_d$ requiring access to computing resources aboard the computing platform entity $q$ is denoted $D(\varsigma_d, t_y)$. The computing platform access latency for stratosphere computing platforms, $\Gamma_1$, is given as

$\Gamma_1 = \sum_{y=1}^{Y} \sum_{z=1}^{V} \sum_{d=1}^{D} \left( \frac{D(\varsigma_d, t_y)}{T_h(\varsigma_d, \mathcal{F}_z, t_y)} + \frac{h(\varsigma_d) - h(\mathcal{F}_z)}{3 \times 10^8} \right)$  (18)
It is also feasible for the small satellite to access a stratosphere computing platform via other stratosphere computing platforms through inter-platform links. This is necessary when the small satellite has transmit power constraints. The compute resource (or platform) access latency in this case, $\Gamma_1'$, is given as

$\Gamma_1' = \sum_{y=1}^{Y} \sum_{u=1}^{V} \sum_{d=1}^{D} \left( \frac{D(\varsigma_d, t_y)}{T_h(\varsigma_d, \mathcal{F}_u, t_y)} + \frac{D(\mathcal{F}_u, \mathcal{F}_{u+x}, t_y)}{T_h(\mathcal{F}_u, \mathcal{F}_{u+x}, t_y)} + \frac{h(\varsigma_d) - h(\mathcal{F}_u)}{3 \times 10^8} + \frac{|h(\mathcal{F}_u) - h(\mathcal{F}_{u+x})|}{3 \times 10^8} \right)$  (19)

where $\mathcal{F}_u \in \mathcal{F}$, $\mathcal{F}_{u+x} \in \mathcal{F}$, and $\mathcal{F}_u \neq \mathcal{F}_{u+x}$. $D(\mathcal{F}_u, \mathcal{F}_{u+x}, t_y)$ and $T_h(\mathcal{F}_u, \mathcal{F}_{u+x}, t_y)$ are the size of the data transmitted and the inter-platform link speed between the $u$-th and $(u+x)$-th stratosphere computing platforms at the epoch $t_y$, respectively. $h(\mathcal{F}_u)$ and $h(\mathcal{F}_{u+x})$ are the altitudes of the $u$-th and $(u+x)$-th stratosphere computing platforms, respectively.
The compute resource access latency for the terrestrial cloud computing platforms, $\Gamma_2$, is given as

$\Gamma_2 = \sum_{y=1}^{Y} \sum_{i=1}^{I} \sum_{d=1}^{D} \left( \frac{D(\varsigma_d, t_y)}{T_h(\varsigma_d, \alpha_a^i, t_y)} + \frac{h(\varsigma_d)}{3 \times 10^8} \right)$  (20)
It is also feasible for the small satellite to access the terrestrial cloud computing platform via a high-altitude platform (HAP). This becomes necessary when there is a transmit power limitation aboard the small satellite. The compute resource access delay in this case, denoted $\Gamma_2'$, is given as

$\Gamma_2' = \sum_{y=1}^{Y} \sum_{a=1}^{A} \sum_{d=1}^{D} \sum_{z=1}^{Z} \left( \frac{D(\varsigma_d, \mathcal{F}_z, t_y)}{T_h(\varsigma_d, \mathcal{F}_z, t_y)} + \frac{D(\mathcal{F}_z, \alpha_a^i, t_y)}{T_h(\mathcal{F}_z, \alpha_a^i, t_y)} \right) + \frac{h(\varsigma_d)}{3 \times 10^8}$  (21)

$T_h(\mathcal{F}_z, \alpha_a^i, t_y)$ is the link throughput between the stratosphere cloud platform $\mathcal{F}_z$ and the $i$-th terrestrial cloud platform of the $a$-th operator. $D(\varsigma_d, \mathcal{F}_z, t_y)$ and $D(\mathcal{F}_z, \alpha_a^i, t_y)$ are the sizes of the data transmitted from the small satellite to the HAP and from the HAP to the terrestrial data centre, respectively.
The parameters $\Gamma_1$, $\Gamma_1'$, $\Gamma_2$, and $\Gamma_2'$ describe the computing resource access latency in the context of existing work: terrestrial cloud computing platform access is supported in references [19,44] when small satellites have compute resource constraints. In the proposed scheme, small satellites access idle computing resources on space vehicles and the SHDC. Let $\Gamma_3$ and $\Gamma_4$ denote the compute resource access delays for the space vehicle and the SHDC, respectively:

$\Gamma_3 = \sum_{y=1}^{Y} \sum_{e=1}^{E} \sum_{d=1}^{D} \left( \frac{D(\varsigma_d, t_y)}{T_h(\varsigma_d, \vartheta_e, t_y)} + \frac{|h(\varsigma_d) - h(\vartheta_e)|}{3 \times 10^8} \right)$  (22)

$\Gamma_4 = \sum_{y=1}^{Y} \sum_{j=1}^{J} \sum_{d=1}^{D} \left( \frac{D(\varsigma_d, t_y)}{T_h(\varsigma_d, \theta_j, t_y)} + \frac{|h(\varsigma_d) - h(\theta_j)|}{3 \times 10^8} \right)$  (23)
The scenarios for $\Gamma_3$ and $\Gamma_4$ do not consider the context where data are forwarded from the space vehicle to the SHDC. This is feasible when a new SHDC is deployed and it is necessary to transmit data from the space vehicle to the SHDC. In this case, the compute resource access latency is denoted $\Gamma_4'$ and given as

$\Gamma_4' = \sum_{d=1}^{D} \sum_{e=1}^{E} \sum_{j=1}^{J} \sum_{y=1}^{Y} \left( \frac{D(\varsigma_d, t_y)}{T_h(\varsigma_d, \vartheta_e, t_y)} + \frac{D(\vartheta_e, \theta_j, t_y)}{T_h(\vartheta_e, \theta_j, t_y)} + \frac{|h(\varsigma_d) - h(\vartheta_e)|}{3 \times 10^8} + \frac{|h(\theta_j) - h(\vartheta_e)|}{3 \times 10^8} \right)$  (24)
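Each of (18)–(24) is a sum, over satellites and epochs, of per-hop transmission delays (data size over link speed) plus per-hop propagation delays (path length over the speed of light). The sketch below computes that per-path latency generically; the hop sizes, link rates, and altitudes are illustrative assumptions, not the paper's simulation parameters.

```python
# Illustrative sketch (not from the paper): per-path access latency of the form
# used in Eqs. (18)-(24): sum over hops of transmission delay (bits / link rate)
# plus propagation delay (distance / speed of light). Hop values are assumed.

C = 3e8  # propagation speed in m/s, as used in the formulation

def path_latency(hops):
    """hops -- list of (data_bits, link_bps, distance_m) tuples along the route."""
    return sum(bits / rate + dist / C for bits, rate, dist in hops)

# Small satellite (550 km) -> SHDC (450 km): one hop, as in Gamma_4.
direct = path_latency([(8e6, 50e6, 100e3)])

# Small satellite -> space vehicle (500 km) -> SHDC (450 km): two hops, as in Gamma_4'.
relayed = path_latency([(8e6, 50e6, 50e3), (8e6, 200e6, 50e3)])

print(f"direct access latency : {direct * 1e3:.3f} ms")
print(f"relayed access latency: {relayed * 1e3:.3f} ms")
```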

5.2. Accessible Computing Resources in Space Segment

The use of SHDCs increases the accessible space segment computing resources. In the existing mechanism [19], the small satellites’ computing resources are used. In the case of orbital edge computing, the accessible computing resources, denoted $C_{ra}^{1}$, are given as

$C_{ra}^{1} = \sum_{y=1}^{Y} \left( \left(\sum_{d=1}^{D} C_I(\varsigma_d, t_y)\right) + \left(\sum_{g=1}^{G} C_I(\varsigma_g, t_y)\right) \right)$  (25)

where $C_I(\varsigma_g, t_y)$ is the amount of idle computing resources aboard other LEO satellites used for applications besides orbital edge computing.
The small satellites used in orbital edge computing can also utilise the idle computing resources aboard satellites used in other applications. The total amount of accessible computing resources, given that small satellites and space vehicles provide computing resources, is denoted $C_{ra}^{2}$ and given as

$C_{ra}^{2} = \sum_{y=1}^{Y} \left( \left(\sum_{d=1}^{D} C_I(\varsigma_d, t_y)\right) + \left(\sum_{g=1}^{G} C_I(\varsigma_g, t_y)\right) + \left(\sum_{e=1}^{E} C_I(\vartheta_e, t_y)\right) \right)$  (26)

In the event that space vehicles and SHDCs provide access to computing resources, the total amount of accessible computing resources is denoted $C_{ra}^{3}$ and given as

$C_{ra}^{3} = \sum_{y=1}^{Y} \left( \left(\sum_{j=1}^{J} C_I(\theta_j, t_y)\right) + \left(\sum_{e=1}^{E} C_I(\vartheta_e, t_y)\right) \right)$  (27)
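The totals in (25)–(27) are simple sums of idle capacity over nodes and epochs. The sketch below evaluates them for hypothetical fleets; the capacity values are placeholders, not the Table 4 parameters.

```python
# Illustrative sketch (not from the paper): the accessible-resource totals of
# Eqs. (25)-(27) computed from per-epoch idle-capacity lists. All values are
# hypothetical (e.g., normalized core-hours per epoch).

def total_idle(*fleets):
    """Sum idle resources over epochs for any number of fleets, where each
    fleet is a list of per-node lists of per-epoch idle capacity."""
    return sum(sum(node) for fleet in fleets for node in fleet)

edge_sats  = [[2.0, 1.5], [1.0, 1.2]]          # C_I(sigma_d, t_y)
other_sats = [[0.5, 0.8]]                      # C_I(sigma_g, t_y)
vehicles   = [[3.0, 2.5], [2.0, 2.0]]          # C_I(vartheta_e, t_y)
shdcs      = [[40.0, 38.0], [35.0, 36.0]]      # C_I(theta_j, t_y)

cra1 = total_idle(edge_sats, other_sats)              # Eq. (25)
cra2 = total_idle(edge_sats, other_sats, vehicles)    # Eq. (26)
cra3 = total_idle(shdcs, vehicles)                    # Eq. (27)
print(cra1, cra2, cra3)   # the SHDCs dominate the accessible resources
```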

6. Feasibility and Performance Evaluation

This section discusses the feasibility and evaluation results and has two parts. The first part presents results on the feasibility of accessing asteroid water for SHDC cooling, and the second analyses the performance benefits in terms of quality of service (QoS).
The analysis in this section has two goals. The first goal is to investigate the feasibility of accessing asteroid water to realize zero-WUE SHDC operation, i.e., to identify suitable asteroids and their near-Earth-orbit dates, and to determine the maximum, minimum, and mean durations between opportunities to access asteroid water resources. The second goal is to examine how the use of SHDCs enhances the quality of service from the satellite space-computing perspective.

6.1. Feasibility of Accessing Asteroid Water Resources

This section identifies asteroids whose water can be used for cooling SHDCs and examines the feasibility of accessing water from near-Earth-orbiting asteroids. The analysis requires data on water-bearing asteroids; identifying such asteroids makes it feasible to achieve the goal of a zero WUE. The water-bearing asteroid data are obtained from the Asterank database [49], analysed, and presented in Table 1. The analysis examines the feasibility of mining water from asteroids over a 20-year period, as shown in Table 2. The maximum, minimum, and mean access intervals are 690 days, 81 days, and 319.39 days, respectively. In Table 2, the access interval is in the format $[x; y]$ for dates $[a, b, c]$, where $x$ and $y$ are the number of days between dates $a$ and $b$ and between dates $b$ and $c$, respectively.
Therefore, the data analysis shows that operating SHDCs with a zero WUE, owing to non-reliance on Earth’s water resources, is achievable and feasible. In addition, asteroid water access enabling zero-WUE SHDC operation can be expected at least 81 days and at most 690 days after water is accessed from a given asteroid; where multiple asteroids fall within the near-Earth-orbit window, access to asteroid water after a mean period of 319.39 days is feasible. These access intervals are obtained from the analysis of the results presented in Table 2.
The results presented in Table 1 identify the asteroids with water suitable for cooling SHDCs, together with their near-Earth-approach dates and the feasible number of days over which each water-bearing asteroid can be approached. Asteroids are accessed with the aim of ensuring that SHDC operation without Earth’s water resources is feasible.
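The access-interval statistics reported above (minimum 81 days, maximum 690 days, mean 319.39 days) are differences between consecutive access dates. The sketch below computes such statistics from a list of near-Earth-approach dates; the dates shown are hypothetical placeholders, not the Table 2 data.

```python
# Illustrative sketch (not the paper's data): compute minimum, maximum, and
# mean access intervals from a sorted list of asteroid water-access dates,
# as done for Table 2. The dates below are hypothetical placeholders.

from datetime import date
from statistics import mean

access_dates = [date(2021, 1, 10), date(2021, 4, 15), date(2021, 12, 1),
                date(2022, 6, 20), date(2023, 5, 2)]

intervals = [(b - a).days for a, b in zip(access_dates, access_dates[1:])]
print("intervals (days):", intervals)
print("min:", min(intervals), "max:", max(intervals), "mean:", round(mean(intervals), 2))
```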

6.2. Performance Evaluation and Benefits—Computing-Resource-Related QoS

The performance evaluation also examines how the use of the proposed SHDCs enhances the quality of service (QoS) for satellite-computing-based applications. The analysis presented in this section demonstrates that the proposed SHDC enhances computing-related QoS in the context of space-based computing. In the existing context, i.e., Wang et al. [19], where the space edge computing paradigm is used, recourse to terrestrial computing platforms becomes necessary when the computing resources aboard space edge computing platforms are exhausted. The existing scenario considered in this simulation is one in which the use of computing resources aboard space edge computing platforms is infeasible; this arises when all computing resources aboard space edge computing platforms are already being used to process data obtained from deployed, in-orbit satellites. Two performance metrics are examined: the computing resource access latency and the amount of accessible computing resources.
The compute resource access latency and the associated performance benefits are investigated using the parameters shown in Table 3. The simulation results are discussed from the perspective of accessing the entity hosting the computing resources. In Table 3, the link speeds are of the order of gigabits per second and megabits per second; this range is considered feasible for realistic satellite communications, as seen in Saeed et al. [50]. Furthermore, the sizes of the satellite data are of the order of kilobytes and megabytes, which is also considered feasible for satellite applications [50].
The compute resource access latency is evaluated for the existing scheme without and with the use of forwarding links. The compute resource access latency (compute platform access latency) for terrestrial data centres, stratosphere-based data centres, and the SHDC without forwarding links is shown in Figure 6. The analysis shows that using HAP computing platforms and the SHDC instead of terrestrial computing platforms reduces the compute resource access latency by 11.9% and 33.6% on average, respectively. In addition, accessing the SHDC instead of HAP computing platforms reduces the computing resource access latency by 24.5% on average.
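The reported reductions are mean relative differences between the latencies of the compared platforms. The sketch below shows that computation for assumed per-run mean latencies; the latency values are placeholders, not the paper's simulation output.

```python
# Illustrative sketch (placeholder numbers, not the paper's results): the mean
# relative latency reduction reported in Section 6.2 is computed as
# (baseline - alternative) / baseline, averaged over the simulated runs.

def mean_reduction(baseline, alternative):
    """baseline, alternative -- equal-length lists of per-run mean latencies (ms)."""
    return 100 * sum((b - a) / b for b, a in zip(baseline, alternative)) / len(baseline)

terrestrial = [210.0, 190.0, 230.0]   # hypothetical per-run access latencies
hap         = [185.0, 168.0, 200.0]
shdc        = [140.0, 125.0, 155.0]

print(f"HAP  vs terrestrial: {mean_reduction(terrestrial, hap):.1f}% lower")
print(f"SHDC vs terrestrial: {mean_reduction(terrestrial, shdc):.1f}% lower")
print(f"SHDC vs HAP        : {mean_reduction(hap, shdc):.1f}% lower")
```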
The results in Figure 7 and Figure 8 show the forwarding latency when the HAP computing platform and the terrestrial computing platforms are accessed through HAP forwarding links, respectively. The results in Figure 7 and Figure 8 show that the forwarding latency increases with the number of forwarding HAPs. Analysis of the results in Figure 7 and Figure 8 shows that increasing the number of forwarding HAPs from 1 to 3, from 1 to 2, and from 2 to 3 increases the forwarding latency by {96.1%, 61.3%, 90%} and {44.7%, 35.6%, 14.2%} on average, respectively. The use of the SHDC instead of HAP and terrestrial data centres accessed via forwarding reduces the computing platform access latency; extensive simulations show that the compute resource access latency is reduced by up to 98.5% on average.
The evaluation also investigates how the use of SHDCs enhances the space segment’s accessible computing resources. The simulation uses test SHDCs hosting a limited number of servers and five LEO space vehicles. Two space vehicles are utilised for executing algorithms and processing data related to space astronomy. Three LEO SHDCs (each with three servers) are considered. The utilised values are given in Table 4. The accessible computing resources for the existing scheme are investigated for two cases: in the first, the existing orbital edge computing [19] is considered; in the second, the computing resources on space vehicles are accessed in addition to those of existing orbital edge computing.
The accessible computing resources are investigated as an important QoS metric. The launch and use of SHDCs increase the amount of computational resources available for satellite data processing, because SHDCs can host a significant number of servers in comparison to the existing space edge computing platforms in Wang et al. [19]. The accessible computing resources are investigated using the performance evaluation parameters presented in Table 4.
The results for the accessible computing resources are shown in Figure 9 and Figure 10. The simulation also investigates how the use of up to two SHDCs enhances the accessible computing resources; the result for this case is shown in Figure 10. Analysis shows that using one SHDC and two SHDCs instead of the existing scheme without and with space vehicles increases the accessible computing resources by {65.3%, 46.7%} and {77%, 64.7%} on average, respectively. In addition, increasing the number of SHDCs from one to two improves the accessible computing resources by 33.8% on average.

7. Conclusions

The discussion proposes solutions that reduce the high water footprint of cloud data centres by siting data centres in space habitats. The space habitat data centre is cooled using water mined from asteroids. The feasibility of using space habitat data centres is studied by identifying asteroids with accessible water content. Data analysis shows that asteroid water can be accessed approximately once a year (a mean access interval of 319.39 days). The use of space habitat data centres also increases the accessible computing resources in the space segment.

Author Contributions

Conceptualization, A.P.; validation, A.A.; writing—original draft and editing, A.A. and K.O.; review, project administration and funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

The University of Johannesburg funded this research and the APC.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Avgerinou, M.; Bertoldi, P.; Castellazi, L. Trends in Data Centre Energy Consumption under the European code of conduct for data centre energy efficiency. Energies 2017, 10, 1470. [Google Scholar] [CrossRef]
  2. Hintemann, R.; Hinterholzer, S. Energy consumption of data centers worldwide—How will the internet become green? In Proceedings of the 6th International Conference on ICT for Sustainability, Lappeenranta, Finland, 10–14 June 2019; p. 16. [Google Scholar]
  3. Coroner, C.; Ashman, M.; Nilsson, L.J. Data Centres in Future European Energy Systems—Energy efficiency, integration and policy. Energy Effic. 2020, 13, 129–144. [Google Scholar]
  4. Wang, P.; Cao, Y.; Ding, Z. Resources planning strategies for data centre micro-grid considering water footprints. In Proceedings of the IEEE Conference on Energy Internet and Energy System Integration, Beijing, China, 20–22 October 2018; pp. 1–6. [Google Scholar]
  5. Capozzoli, A.; Primiceri, G. Cooling Systems in Data Centers: State of art and emerging technologies. Energy Procedia 2015, 83, 484–493. [Google Scholar] [CrossRef] [Green Version]
  6. Amazon. Reducing Water Used for Cooling in AWS Data Centers. Available online: https://aws.amazon.com/about-aws/sustainability/ (accessed on 22 March 2020).
  7. Flucker, S.; Tozer, R.; Whitehead, B. Data Centre sustainability—Beyond energy efficiency. Build. Serv. Eng. Res. Technol. 2018, 39, 173–182. [Google Scholar] [CrossRef]
  8. Taheri, S.; Goudarzi, M.; Yoshie, O. Learning—Based power prediction for geo-distributed data centres: Weather Parameter analysis. J. Big Data 2020, 7, 1–16. [Google Scholar] [CrossRef]
  9. Li, Y.; Wen, Y.; Guan, K.; Tao, D. Transforming cooling optimization for Green Data Center via Deep Reinforcement Learning. IEEE Trans. Cybern. 2019, 50, 2002–2013. [Google Scholar] [CrossRef] [Green Version]
  10. Gough, C.; Steiner, I.; Saunders, W.A. Data center management. In Energy Efficient Servers: Blueprints for Data Center Optimization—The IT Professional’s Operational Handbook; Springer: Berlin, Germany, 2015; pp. 307–318. [Google Scholar]
  11. Zhang, Y.; Wei, Z.; Zhang, M. Free cooling technologies for data centres: Energy saving mechanism and applications. Energy Procedia 2017, 143, 410–415. [Google Scholar] [CrossRef]
  12. Le, D.V.; Li, Y.; Wang, R.; Tan, R.; Wong, Y.; Wen, Y. Control of Air Free—Cooled Data Centers in Tropics via Deep Reinforcement Learning. In Proceedings of the 6th ACM International Conference on Systems for Energy—Efficient Buildings, Cities and Transportation (BuildSys’19), New York, NY, USA, 13–14 November 2019. [Google Scholar]
  13. Periola, A. Incorporating diversity in cloud–computing: A novel paradigm and architecture for enhancing the performance of future cloud radio access networks. Wirel. Netw. 2019, 25, 3783–3803. [Google Scholar] [CrossRef]
  14. Periola, A.A.; Alonge, A.A.; Ogudo, K.A. Architecture and System Design for Marine Cloud Computing Assets. Comput. J. 2020, 63, 927–941. [Google Scholar] [CrossRef]
  15. Cutler, B.; Fowers, S.G.; Kramer, J.; Peterson, E. Dunking the data center. IEEE Spectr. 2017, 54, 26–31. [Google Scholar] [CrossRef]
  16. Huang, H.; Guo, S.; Wang, K. Envisioned Wireless Big Data Storage for Low Earth Orbit Satellite Based Cloud. IEEE Wirel. Commun. 2018, 25, 26–31. [Google Scholar] [CrossRef]
  17. Periola, A.A.; Kolawole, M.O. Space Based Data Centres: A Paradigm for Data Processing and Scientific Investigations. Wirel. Pers. Commun. 2019, 107, 95–119. [Google Scholar] [CrossRef]
  18. Donoghue, A. The Idea of Data Centers in Space Just Got a Little Less Crazy. 9 February 2019. Available online: https://www.datacentreknowledge.com/edge-computing/idea-data-centers-space-just-got-little-less-crazt (accessed on 1 March 2020).
  19. Wang, Y.; Yang, J.; Guo, X.; Qu, Z. Satellite Edge Computing for the Internet of things in Aerospace. Sensors 2019, 19, 4375. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Calla, P.; Fries, D.; Welch, C. Asteroid mining with small spacecraft and its economic feasibility. arXiv 2019, arXiv:1808.05099. Available online: https://arxiv.org/pdf/1808.05099.pdf (accessed on 27 March 2020).
  21. MacDonald, A. Emerging Space—The Evolving Landscape of 21st Century American Spaceflight; NASA Office of the Chief Technologist: Washington, DC, USA, 2014. Available online: https://www.nasa.gov/sites/default/files/files/EmergingSpacePresentation20140829.pdf (accessed on 9 April 2014).
  22. Yakolev, V. Mars Terraforming—The Wrong Way. LPI Contribution No. 1989. In Proceedings of the Planetary Science Vision 2050 Workshop, Washington, DC, USA, 27 February–1 March 2017; Available online: https://www.hou.usra.edu/meetings/V2050/pdf/8010.pdf (accessed on 21 March 2020).
  23. Smitherman, D.; Griffin, B. Habitat Concepts for Deep Space Exploration. In AIAA Space 2014 Conference and Exposition; AIAA: San Diego, CA, USA, 2014; pp. 2014–4477. [Google Scholar]
  24. Griffin, B.N.; Lewis, R.; Smitherman, D. SLS–Derived Lab: Precursor to Deep Space Human Exploration. In Proceedings of the AIAA Space 2015 Conference and Exposition, Pasadena, CA, USA, 31 August–2 September 2015. [Google Scholar] [CrossRef]
  25. Smitherman, D.V.; Needham, D.H.; Lewis, R. Research Possibilities beyond the deep space gateway. LPI Contrib No. 2063. In Proceedings of the Deep Space Gateway Science Workshop, Denver, CO, USA, 27 February–1 March 2018; Available online: https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20180002054.pdf (accessed on 10 April 2020).
  26. Kalam, A.P.J.A. The Future of Space Exploration and Human Development. In the Pardee Papers; Boston University Creative Services: Boston, MA, USA, 2008. [Google Scholar]
  27. Lia, S.I.; Giuho, C.; Nazir, Z. Sustainable Quality: From Space Stations to Everyday Contexts on Earth: Creating Sustainable work environments. In Proceedings of the NES 2015, Nordic Ergonomics Society, 47th Annual Conference, Lillehammer, Norway, 1–4 December 2015; pp. 1–8. [Google Scholar]
  28. Musk, E. Making Humans a Multi-Planetary species. New Space 2017, 5, 46–61. [Google Scholar] [CrossRef]
  29. Morad, S.; Kalita, H.; Nallapu, R.T.; Jekan, T. Building Small Satellites to Live through the Kessler Effect. Available online: arXiv.org/pdf/1909.01342.pdf (accessed on 22 June 2020).
  30. Banik, J.; Chapman, D.; Kiefer, S.; Lacorte, P. International Space Station (ISS) Roll–Out Solar Array (ROSA) Spacefliers Experiment Mission and Results. In Proceedings of the IEEE 7th World Conference Photovoltaic Energy Conversion (WCPEC), Waikoloa Village, HI, USA, 10–15 June 2018; pp. 3524–3529. [Google Scholar]
  31. Hampson, J. The Future of Space Commercialization; Niskanen Centre Research Paper: Washington, DC, USA, 2017. [Google Scholar]
  32. Davis, A.G. Space commercialization: The Need to immediately renegotiate treaties implicating. Int. Environ. Law 2011, 3, 363–392. [Google Scholar]
  33. Gatens, R. Commercializing Low—Earth Orbit and the role of the International Space Station. In Proceedings of the 2016 IEEE Aerospace Conference, Big Sky, MT, USA, 5–12 March 2016; pp. 1–8. [Google Scholar]
  34. Rutley, T.M.; Robinson, J.A.; Gerstenmeier, W.H. The International Space Station: Collaboration, Utilization and Commercialization. Soc. Sci. Q. 2016, 98, 1160–1174. [Google Scholar] [CrossRef]
  35. Kganyago, M.; Mhangara, P. The Role of African Emerging Space Agencies in Earth Observation Capacity Building for Facilitating the Implementation and Monitoring of the African Development Agenda: The Case of African Earth Observation Program. Int. J. Geo-Inform. 2019, 8, 292. [Google Scholar] [CrossRef] [Green Version]
  36. Shammas, L.; Hohen, T.B. One giant leap for capitalist kind: Private enterprise in outer space. Palgrave Commun. 2019, 5, 1–9. [Google Scholar] [CrossRef] [Green Version]
  37. Oluwafemi, F.A.; Torre, A.; Afolayan, E.M.; Ajayi, B.M.; Dhutal, B.; Almanza, J.G.; Potrivitu, G.; Creach, J.; Rivolta, A. Space Food and Nutrition in a long term manned mission. Adv. Astronaut. Sci. Technol. 2018, 1, 1–21. [Google Scholar] [CrossRef] [Green Version]
  38. Shkolnik, E.L. On the verge of an astronomy cubesat revolution. Nat. Astron. 2018, 2, 374–378. [Google Scholar] [CrossRef] [Green Version]
  39. Gallozzi, S.; Scardia, M.; Maris, M. Concerns about ground based astronomical observations: A Step to safeguard the astronomical sky. arXiv 2020, arXiv:2001.10952. [Google Scholar]
  40. Beasley, T. NRAO—Statement on Starlink and Constellations of Communications Satellites. Available online: public.nrao.edu/news/nrao-statements-commsats/ (accessed on 31 May 2019).
  41. Seitzer, P. Mega—Constellations and astronomy. In Proceedings of the IAA Debris Meeting, Washington, DC, USA, 19 October 2019. [Google Scholar]
  42. Adams, C. Will the Data Centres of the Future Be in Space? Parkplace Technologies: Mayfield Heights, OH, USA, 2020. [Google Scholar]
  43. Lai, J.; Zhang, Y.; Zhong, L.; Qu, Y.; Liu, R. Enabling Edge Computing Ability in Mobile Satellite Communication Networks. In IOP Conference Series: Materials, Science and Engineering; IOP: London, UK, 2019; Volume 685, pp. 1–8. [Google Scholar]
  44. Denby, B.; Lucia, B. Orbital Edge Computing: Machine Inference in space. IEEE Comput. Arch. Lett. 2019, 18, 59–62. [Google Scholar] [CrossRef]
  45. Straub, J.; Mohammad, A.; Berk, J.; Nervold, A.K. Above the cloud computing: Applying cloud computing principles to create an orbital services model. In Sensors and Systems for Space Applications VI; Proceedings SPIE, 8739; SPIE Press: Bellingham, WA, USA, 2013; p. 873909. [Google Scholar]
  46. Fernandez, Y.R.; Li, J.Y.; Howell, E.S.; Woodney, L.M. Asteroids and Comets. In Treatise on Geophysics; Schubert, G., Spohn, T., Eds.; Elsevier: Amsterdam, The Netherlands, 2015; Volume 10. [Google Scholar]
  47. Alexander, C.M.O.D.; McKeegan, K.D.; Altwegg, K. Water Reservoirs in small planetary bodies: Meteorites, Asteroids and Comets. Space Sci. Rev. 2018, 214, 1–63. [Google Scholar] [CrossRef]
48. Molag, K.; Winter, B.D.; Toorenburgh, Z.; Versteegh, B.G.Z.; Westrenen, W.V.; Pau, K.D.; Knecht, E.; Borsten, D.; Foing, B.H. Water-I Mission Concept: Water-Rich Asteroid Technological Extraction Research. In Proceedings of the 49th Lunar and Planetary Science Conference 2018 (LPI Contrib. No. 2083), The Woodlands, TX, USA, 19–23 March 2018. [Google Scholar]
49. Webster, I. Asterank Database. Available online: https://www.asterank.com (accessed on 12 April 2020).
  50. Saeed, N.; Elzanaty, A.; Almorad, H.; Dahrouj, H.; Al–Naffouri, T.Y.; Alouini, M.S. CubeSat Communications: Recent Advances and Future Challenges. IEEE Commun. Surv. Tutor. 2020. [Google Scholar] [CrossRef]
Figure 1. Relations between space habitat data centre, satellites, and the terrestrial segment.
Figure 2. Steps in executing workload received from terrestrial ground station gateways (TSGWs) or from space-based low Earth orbit (LEO) satellites.
Figure 3. Relations between provider-specific interfaces (PSIs), space computing nodes (SCNs), and entities in the international computing space station.
Figure 4. Relations between asteroid water mining entity (AWME), water storage, and access by platform service providers.
Figure 5. Relations between the AWME, space water reservoir entity (SWRE), and space water reservoir (SWR) in the supply of asteroid water for the space habitat data centre (SHDC).
Figure 6. Simulation results for the compute resource access latency for existing and proposed cases.
Figure 7. Simulation results for the forwarding latency in the case of accessing high-altitude platforms (HAPs).
Figure 8. Forwarding latency when accessing terrestrial computing platforms by forwarding through HAPs.
Figure 9. Accessible computing resources in the case of orbital edge computing and space vehicles.
Figure 10. Accessible computing resources considering the use of up to two space habitat data centres.
Table 1. List of water-bearing asteroids and their near-Earth approach dates.
s/n | Name | Near-Earth Approach Dates
1 | 1991 DB | 6 March 2027; 29 February 2036; 17 June 2083 (three approaches)
2 | Seleucus | 24 March 2037; 6 April 2040; 8 May 2069; 27 March 2072 (four approaches)
3 | 1998 KU2 | 15 October 2025; 31 July 2042; 18 September 2069; 28 June 2086; 18 October 2096 (five approaches)
4 | 2001 PD1 | 3 October 2021; 23 September 2031; 1 September 2041; 1 November 2118 (four approaches)
5 | 1992 NA | 27 October 2029; 14 August 2055; 25 October 2066; 12 October 2092; 8 August 2118 (five approaches)
6 | 2002 AH29 | 19 January 2032; 2 April 2047; 6 April 2062; 28 January 2092; 19 February 2107 (five approaches)
7 | David-Harvey | 16 December 2033; 10 December 2072; 17 December 2111 (three approaches)
8 | 1999 VN6 | 27 November 2031; 22 November 2047; 25 November 2056; 1 December 2072; 1 December 2088; 30 November 2104 (six approaches)
9 | 2001 XS1 | 8 December 2049; 8 December 2097 (two approaches)
10 | 2001 SJ262 | 17 October 2057; 6 October 2062; 14 October 2103 (three approaches)
11 | 1997 AQ18 | 11 May 2022; 21 December 2023; 16 August 2028; 14 June 2033; 16 December 2034; 1 May 2038; 30 September 2039; 31 July 2044; 30 December 2045; 31 May 2049; 14 December 2050; 26 April 2054; 18 September 2055; 5 January 2056; 24 July 2060; 28 December 2061; 27 May 2065; 14 December 2066; 26 April 2070; 18 September 2071; 5 January 2072; 27 July 2076; 29 December 2077; 1 June 2081; 15 December 2082; 30 April 2086; 28 September 2087; 1 January 2088; 6 August 2092; 17 December 2098; 7 May 2102 (31 approaches)
12 | 2002 DH2 | 6 July 2046; 8 April 2049; 10 March 2052; 22 February 2055; 6 July 2108 (five approaches)
13 | Betulia | 7 June 2028; 8 May 2090; 13 May 2103 (three approaches)
14 | Sigurd | 12 October 2022; 18 September 2027; 7 August 2032; 2 October 2045; 30 August 2050; 20 September 2068; 19 August 2073; 15 October 2086; 9 September 2091; 8 August 2096; 30 September 2109 (11 approaches)
15 | 1991 XB | 30 November 2067; 28 November 2118 (two approaches)
16 | 2000 YO29 | 24 December 2027; 10 June 2040; 26 December 2049; 17 June 2062; 28 December 2071; 21 June 2084; 29 December 2093; 26 June 2106; 31 December 2115 (nine approaches)
Table 2. Access dates and intervals for water-bearing asteroids.
s/n | Access Dates | Access Intervals | Access Dates | Access Intervals
1 | 3 October 2021 | [248; 181] | 27 November 2031 | [81; 227]
2 | 11 May 2022 | | 19 January 2032 |
3 | 12 October 2022 | [435; 688] | 7 August 2032 | [322; 212]
4 | 21 December 2023 | | 14 June 2033 |
5 | 15 October 2025 | [536; 223] | 16 December 2033 | [390; 469]
6 | 6 March 2027 | | 16 December 2034 |
7 | 18 September 2027 | [127; 192] | 29 February 2036 | [414; 431]
8 | 24 December 2027 | | 24 March 2037 |
9 | 7 June 2028 | [100; 437] | 1 May 2038 | [539; 215]
10 | 16 August 2028 | | 30 September 2039 |
11 | 27 October 2029 | | 6 April 2040 |
12 | 23 September 2031 | | 10 June 2040 |
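For illustration, the short Python sketch below shows one way the spacing between successive asteroid water access opportunities could be derived from the dates tabulated above. It assumes, purely for illustration, that an access interval is the number of days between consecutive access dates; the interval definition underlying the bracketed values in Table 2 may differ, and only the first five dates of Table 2 are used here.

from datetime import date

# Illustrative only: day counts between consecutive asteroid water access
# opportunities and their mean, using the first five access dates from Table 2.
access_dates = [
    date(2021, 10, 3),   # s/n 1
    date(2022, 5, 11),   # s/n 2
    date(2022, 10, 12),  # s/n 3
    date(2023, 12, 21),  # s/n 4
    date(2025, 10, 15),  # s/n 5
]

# Interval (in days) between each pair of consecutive access opportunities.
intervals = [(later - earlier).days for earlier, later in zip(access_dates, access_dates[1:])]
mean_interval = sum(intervals) / len(intervals)

print("Access intervals (days):", intervals)
print("Mean access interval (days): %.2f" % mean_interval)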
Table 3. Compute resource access latency simulation parameters.
S/N | Parameter | Value
1 | Number of satellites in space segment | 10
2 | Number of epochs | 15
3 | Maximum size of data, satellites [1, 2, 3, 4, 5] | [1.48, 1.47, 1.25, 1.33, 1.45] Mbytes
4 | Maximum size of data, satellites [6, 7, 8, 9, 10] | [1.42, 1.47, 1.41, 1.46, 1.33] Mbytes
5 | Minimum size of data, satellites [1, 2, 3, 4, 5] | [1.04, 148.9, 353.6, 317.9, 852.81] Kbytes
6 | Minimum size of data, satellites [6, 7, 8, 9, 10] | [12.37, 82.31, 92.30, 113.00, 6.71] Kbytes
7 | Mean size of data, satellites [1, 2, 3, 4, 5] | [866.37, 777.78, 567.04, 917.37, 811.12] Kbytes
8 | Mean size of data, satellites [6, 7, 8, 9, 10] | [718, 665.93, 920.20, 844.61, 763.45] Kbytes
9 | Maximum link speed, satellites [1, 2, 3, 4, 5] | [2.99, 2.98, 2.92, 2.93, 2.68] Gbps
10 | Maximum link speed, satellites [6, 7, 8, 9, 10] | [2.82, 2.73, 2.69, 2.76, 2.83] Gbps
11 | Minimum link speed, satellites [1, 2, 3, 4, 5] | [380.17, 380.57, 930, 280.8, 124.44] Mbps
12 | Minimum link speed, satellites [6, 7, 8, 9, 10] | [32.67, 237.97, 776.48, 107.31, 156.90] Mbps
13 | Mean link speed, satellites [1, 2, 3, 4, 5] | [1.47, 1.87, 2.01, 1.69, 1.28] Gbps
14 | Mean link speed, satellites [6, 7, 8, 9, 10] | [1.395, 1.49, 1.26, 1.25, 1.35] Gbps
15 | Altitude, satellites [1, 2, 3, 4, 5] | [780.5, 791.0, 711.6, 631.04, 766.63] km
16 | Altitude, satellites [6, 7, 8, 9, 10] | [798.7, 760.4, 614.1, 574.4, 759.9] km
21 | Number of space habitat data centres | 3
22 | Space habitat data centre altitude [1, 2, 3] | [921.1, 252.5, 383.7] km
23 | Number of high-altitude platforms | 4
24 | High-altitude platform altitude (mesosphere) [1, 2, 3, 4] | [25.4, 74.1, 67.5, 71.9] km
25 | Size of data transmitted by high-altitude platforms [1, 2, 3, 4] | [1.4, 0.997, 0.45, 1.96] Mbps
26 | Inter-high-altitude-platform link speed | [1.92, 0.69, 0.08] Mbps
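To make the role of the Table 3 parameters concrete, the Python sketch below estimates the access latency of a single satellite-to-platform link as transmission delay (data size divided by link speed) plus free-space propagation delay over the satellite-to-platform separation. This delay model, and the nadir-aligned 140.6 km separation used in the example, are assumptions made for illustration and are not necessarily the exact formulation used in the paper's simulation.

# Illustrative sketch: compute resource access latency for one link, assuming
# latency = transmission delay + one-way propagation delay at the speed of light.
C_KM_PER_S = 3.0e5  # free-space propagation speed in km/s

def access_latency_s(data_bytes, link_bps, path_length_km):
    transmission_s = (8.0 * data_bytes) / link_bps  # serialisation of the payload
    propagation_s = path_length_km / C_KM_PER_S     # one-way signal travel time
    return transmission_s + propagation_s

# Example values in the style of Table 3: satellite 1 mean data size (866.37 Kbytes),
# satellite 1 mean link speed (1.47 Gbps), and the separation between the 780.5 km
# satellite altitude and the 921.1 km SHDC altitude (assumed nadir-aligned).
latency = access_latency_s(866.37e3, 1.47e9, 921.1 - 780.5)
print("Estimated SHDC access latency: %.2f ms" % (latency * 1e3))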
Table 4. Simulation parameters for investigating accessible computing resources.
S/N | Parameter | Value
1 | Number of satellites | 10
2 | Number of epochs | 15
3 | Number of space vehicles and space habitat data centres | 5, 3
4 | Maximum satellite computational resources [1, 2, 3, 4, 5] | [85.7, 90.2, 97.7, 97.2, 96.9] Gbytes
5 | Maximum satellite computational resources [6, 7, 8, 9, 10] | [96.2, 94.0, 84.8, 90.7, 88.7] Gbytes
6 | Minimum satellite computational resources [1, 2, 3, 4, 5] | [1, 1.76, 13.9, 0.32, 4.94] Gbytes
7 | Minimum satellite computational resources [6, 7, 8, 9, 10] | [7, 1, 1.02, 1.27, 0.45] Gbytes
8 | Mean computational resources on satellites [1, 2, 3, 4, 5] | [52.6, 52.4, 58.4, 39.2, 61.3] Gbytes
9 | Mean computational resources on satellites [6, 7, 8, 9, 10] | [50.3, 38.7, 43.2, 39.3, 57.0] Gbytes
10 | Number of servers on space habitat data centres 1, 2, 3 | 3 servers per SHDC
11 | Computing capability of servers on SHDC 1 [1, 2, 3] | [65.5, 22.3, 50.1] Gbytes
12 | Computing capability of servers on SHDC 2 [1, 2, 3] | [43.9, 24.3, 40.8] Gbytes
13 | Computing capability of servers on SHDC 3 [1, 2, 3] | [11.0, 5.7, 42.0] Gbytes
14 | Compute utilization of servers on SHDC 1 [1, 2, 3] | [36.3%, 21.3%, 7%]
15 | Compute utilization of servers on SHDC 2 [1, 2, 3] | [47.7%, 72.5%, 24.8%]
16 | Compute utilization of servers on SHDC 3 [1, 2, 3] | [18.8%, 5.2%, 43.6%]
17 | Compute resources on space vehicles [1, 2, 3, 4, 5] | [49.8, 32.6, 72.2, 7.5, 51.1] Gbytes
18 | Space vehicle 1 fully utilised (no computing resources) | Yes
19 | Space vehicle 2 fully utilised (no computing resources) | Yes
20 | Space vehicle 3 fully utilised (no computing resources) | No
21 | Space vehicle 4 fully utilised (no computing resources) | No
22 | Space vehicle 5 fully utilised (no computing resources) | No
23 | Compute resource utilisation on space vehicles [3, 4, 5] | [63.8%, 37.2%, 28.8%]
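As a worked illustration of the Table 4 parameters, the Python sketch below aggregates the computing resources that would be accessible across the SHDC servers and space vehicles, assuming accessible capacity equals installed capability multiplied by the unused fraction (one minus utilisation). This aggregation rule is an assumption made for illustration rather than the paper's exact model; the numerical values are taken directly from Table 4.

# Illustrative sketch: accessible capacity assumed to be capability * (1 - utilisation),
# summed over SHDC servers and over space vehicles, using the values in Table 4.
shdc_capability_gb = {      # Gbytes per server for SHDCs 1-3
    1: [65.5, 22.3, 50.1],
    2: [43.9, 24.3, 40.8],
    3: [11.0, 5.7, 42.0],
}
shdc_utilisation = {        # fractional utilisation per server for SHDCs 1-3
    1: [0.363, 0.213, 0.07],
    2: [0.477, 0.725, 0.248],
    3: [0.188, 0.052, 0.436],
}
vehicle_capability_gb = [49.8, 32.6, 72.2, 7.5, 51.1]
vehicle_utilisation = [1.0, 1.0, 0.638, 0.372, 0.288]  # vehicles 1 and 2 fully utilised

def accessible(capability, utilisation):
    return sum(c * (1.0 - u) for c, u in zip(capability, utilisation))

shdc_total = sum(accessible(shdc_capability_gb[k], shdc_utilisation[k]) for k in shdc_capability_gb)
vehicle_total = accessible(vehicle_capability_gb, vehicle_utilisation)

print("Accessible SHDC computing resources: %.1f Gbytes" % shdc_total)
print("Accessible space vehicle computing resources: %.1f Gbytes" % vehicle_total)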
