EVALUATION

In this section, we conduct experiments to evaluate multiple aspects of STAR FRONT, including its performance under various configurations and the corresponding cost. As most emerging mega-constellations are still in their early stages, it is difficult to conduct experiments on live satellite networks. While there have been many prior efforts on network simulation or emulation, existing works either fail to simulate/emulate the high dynamics of LEO satellites (e.g., NS3, Mininet) or cannot support the evaluation of content distribution as in real deployments (e.g., [41], [45]). To address these limitations of previous evaluation methodologies, we build a testbed that simulates geo-distributed cloud data centers and constellations based on public orbital data, and implement a prototype of STAR FRONT. We further conduct trace-driven simulations to verify the effectiveness of STAR FRONT.

A. Simulated Satellite-Cloud Integrated Network

We first build a testbed that simulates the satellite-cloud network and supports repeatable experiments driven by realistic cloud and satellite traces. At a high level, as shown in Figure 8, our testbed incorporates a topology generator which loads the information of cloud distributions as well as time-varying satellite trajectories to generate the satellite-cloud network topology. Further, the testbed runs a number of containers on top of physical machines to simulate the software behaviors of content distribution (e.g., receiving user requests, querying the cache for the required data and sending responses back to users).

Topology generator. The topology generator guides the simulation of the satellite-cloud integrated architecture as follows. First, it uses the distribution of Amazon AWS cloud sites as the available cloud data centers [21]. We configure the cloud distribution based on Amazon AWS because it has deployed a large number of world-wide cloud sites and has recently been deploying ground station services to interconnect clouds and satellites. Second, the topology generator calculates the time-varying satellite trajectory, which includes the LLA positions (i.e., latitude, longitude and altitude) of each satellite in every time slot. Specifically, the trajectory information is calculated by a third-party orbit computation tool using the two-line element (TLE) data generated by [14], and is used to estimate the distance and visibility of each satellite from the view of other nodes (e.g., neighbor satellites, ground stations, or terrestrial users).
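
The per-slot distance and visibility estimation can be sketched as follows. This is a minimal illustration assuming a spherical Earth and a hypothetical 25° minimum elevation angle; the paper does not specify these details.

```python
import math

EARTH_RADIUS_KM = 6371.0

def lla_to_ecef(lat_deg, lon_deg, alt_km):
    """Convert an LLA position (latitude, longitude, altitude) to
    Earth-centered Cartesian coordinates, assuming a spherical Earth."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    r = EARTH_RADIUS_KM + alt_km
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))

def slant_range_km(a_lla, b_lla):
    """Straight-line distance between two LLA positions."""
    return math.dist(lla_to_ecef(*a_lla), lla_to_ecef(*b_lla))

def is_visible(ground_lla, sat_lla, min_elevation_deg=25.0):
    """A satellite is considered connectable from a ground node if its
    elevation angle exceeds the antenna's minimum elevation threshold
    (the 25 degree default is an assumption, not a value from the paper)."""
    g = lla_to_ecef(*ground_lla)
    s = lla_to_ecef(*sat_lla)
    d = [s[i] - g[i] for i in range(3)]
    rng = math.dist(s, g)
    zenith_norm = math.dist(g, (0.0, 0.0, 0.0))
    # sin(elevation) equals the cosine of the angle between the local
    # zenith direction and the ground-to-satellite vector.
    sin_elev = sum(d[i] * g[i] for i in range(3)) / (rng * zenith_norm)
    elevation_deg = math.degrees(math.asin(max(-1.0, min(1.0, sin_elev))))
    return elevation_deg >= min_elevation_deg
```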

In our experiment, we evaluate STAR FRONT under three state-of-the-art constellations: SpaceX’s Starlink [17], OneWeb [10], and Amazon Kuiper [2]. All of these constellations plan to deploy hundreds to thousands of LEO satellites to provide wide-area coverage and Internet service. In particular, we evaluate STAR FRONT under the configuration of: (i) the first shell of Starlink Phase-I, which has deployed 1584 LEO satellites in 72 orbital planes at an altitude of about 550km; (ii) OneWeb, a planned initial 648-satellite constellation at approximately 1200km altitude; (iii) the first shell of Project Kuiper, which plans to deploy a large broadband satellite Internet constellation to provide broadband Internet connectivity. Table II summarizes the primary parameters of each constellation in detail. The synodic periods of Starlink, OneWeb and Kuiper are configured as 5731s, 6557s and 5831s respectively, based on their public constellation information. Finally, we set the connectivity of each node in the topology based on their relative visibility, i.e., a satellite is connectable to a ground station if the satellite moves into its transmission range.
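
A shell such as Starlink's first (1584 satellites, 72 planes, 53° inclination) can be enumerated as a Walker-delta pattern. The sketch below is illustrative; the inter-plane phase factor is an assumption, since the paper does not state one.

```python
def walker_delta(total_sats, num_planes, inclination_deg, phase_factor=1):
    """Enumerate the orbital slots of a Walker-delta constellation as
    (RAAN, mean anomaly) pairs, one entry per satellite."""
    per_plane = total_sats // num_planes
    sats = []
    for p in range(num_planes):
        raan = 360.0 * p / num_planes  # ascending node of plane p
        for s in range(per_plane):
            # even in-plane spacing plus the inter-plane phase offset
            anomaly = (360.0 * s / per_plane
                       + 360.0 * phase_factor * p / total_sats) % 360.0
            sats.append({"plane": p, "slot": s, "raan": raan,
                         "inclination_deg": inclination_deg,
                         "mean_anomaly": anomaly})
    return sats

# First Starlink shell: 1584 satellites, 72 planes, 53 degree inclination.
starlink_shell = walker_delta(1584, 72, 53)
```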

Clouds and satellites simulation. As shown in Figure 8, we use Docker containers [7] running on physical machines to support the simulation of cloud/satellite-based cache servers. Specifically, we run a number of Docker containers on each physical machine, and use each container, which has a realistic network stack and CDN software installed, to simulate an available cloud or satellite cache server. Containers are connected to the physical NIC using macvlan [9], which virtualizes a physical NIC into multiple virtual NICs. We use tc to control the time-varying RTT, inter-satellite/ground-satellite connectivity and bandwidth of each link. Inter-cloud network conditions are configured based on values measured from realistic AWS cloud sites. The inter-satellite and ground-satellite connectivity and network performance are configured based on the results characterized by a recent constellation simulator [45]. In particular, each satellite connects to two adjacent satellites in the same orbit, and to two others in adjacent orbits. A satellite can be connected to a ground station only if it moves into the transmission range of the ground station.
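
The per-slot link shaping can be driven by commands like the ones generated below. The netem/tbf parameters (burst size, queue latency) are illustrative defaults, not values from the paper.

```python
def tc_commands(dev, rtt_ms, bw_mbit, first_slot=False):
    """Build the tc invocations that shape one emulated link for a time
    slot: netem imposes the one-way delay (half the RTT) and tbf caps
    the bandwidth. In the testbed these would be executed (e.g. via
    subprocess) against each container's virtual NIC."""
    verb = "add" if first_slot else "change"
    delay_ms = rtt_ms / 2.0
    return [
        f"tc qdisc {verb} dev {dev} root handle 1: netem delay {delay_ms}ms",
        f"tc qdisc {verb} dev {dev} parent 1: handle 2: "
        f"tbf rate {bw_mbit}mbit burst 32kbit latency 400ms",
    ]
```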

Dataset and request generator. We use a real-world CDN trace collected from a commercial cloud CDN operator on February 22-28, 2015, containing 3.9 million flow records in total, to drive the evaluation. Table III describes the details of the selected trace. We have published the CDN trace at https://github.com/SpaceNetLab/StarFront, and hope it can stimulate more studies focusing on efficient content distribution in futuristic space-terrestrial integrated networks. We implement a request generator to simulate the behavior of user clients. It extracts information from the CDN trace and generates HTTP requests to fetch object data. Each HTTP request issued by an end user is processed as follows. First, the user issues a DNS query to the local DNS server. Second, the DNS server returns the IP address of the assigned cache server (one of the Docker containers) to the user. Finally, the client sends a request to the assigned cache server to fetch the concrete content data.
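
The three-step request flow above can be sketched as a trace replay loop. `resolve` and `fetch` are hypothetical stand-ins for the DNS lookup and the HTTP GET, injected so the sketch stays transport-agnostic.

```python
def replay_trace(records, resolve, fetch):
    """Replay CDN trace records in timestamp order. For each record:
    steps (1)+(2), the DNS query returns the assigned cache server's
    IP via `resolve`; step (3), the client fetches the object via
    `fetch` from that server."""
    responses = []
    for rec in sorted(records, key=lambda r: r["ts"]):
        server_ip = resolve(rec["client"])
        responses.append(fetch(server_ip, rec["object"]))
    return responses
```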

B. STAR FRONT Prototype

STAR FRONT controller. The controller of STAR FRONT is implemented in around 1100 lines of Python code. Periodically, the controller reads the satellite location information and the historical network performance information, and calculates the decisions for content placement and request assignment. The time-varying satellite location information is predicted by a third-party orbit computation tool (e.g., STK [13]). Content replicas are then pushed to the cache servers via HTTP connections, following the calculated decision. Further, we configure the pricing model in our experiment following the two representative models, i.e., the linear and concave models, as described in §IV-B. Specifically, the pricing policy of cloud caches is configured based on existing cloud providers (e.g., CloudFront [1]). Because satellite cache service is still under development and currently there are no practical commercial pricing models, we define the per-unit storage cost on a satellite cache as γ× that of a cloud cache, where γ ≥ 1 indicates that space resources are more precious than those in terrestrial clouds, as illustrated in Eq. (7).
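
The γ relationship between satellite and cloud storage prices can be expressed as below. The square-root curve used for the concave model is an illustrative assumption; the paper only names the two model shapes.

```python
import math

def cloud_storage_cost(gb, alpha, model="linear"):
    """Storage cost under the two representative pricing models:
    linear (proportional to volume) or concave (bulk discount,
    sketched here as a square-root curve)."""
    if model == "linear":
        return alpha * gb
    return alpha * math.sqrt(gb)

def satellite_storage_cost(gb, alpha, gamma, model="linear"):
    """Satellite storage is priced at gamma (>= 1) times the cloud
    rate, reflecting that space resources are more precious (Eq. (7))."""
    assert gamma >= 1
    return gamma * cloud_storage_cost(gb, alpha, model)
```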

STAR FRONT cache servers. We have implemented the STAR FRONT cache servers based on Apache Traffic Server (ATS) [4]. ATS is a multi-threaded, event-based, modular, high-performance cache and forward proxy server written in C++. ATS is distributed as an open-source product and has been used in many production-level systems. We extend ATS so that each ATS-based cache server connects to the STAR FRONT controller and executes its placement decisions.

Next, the evaluations in this section aim at answering the following two questions: (i) can STAR FRONT satisfy the various latency requirements of geo-distributed users as compared to other state-of-the-art content distribution approaches under representative CDN traces and constellation patterns? and (ii) what is the corresponding cost of using STAR FRONT?

C. Verifying STAR FRONT’s Ability to Satisfy Various Latency Requirements

First, we verify STAR FRONT’s ability to satisfy various latency requirements. We start our evaluation by examining the effectiveness of the different algorithms incorporated in STAR FRONT: OffPA (Algorithm 1), OnPA (Algorithm 2) and PAOA (Algorithm 3). For comparison, we also build an offline algorithm based on the greedy maximization of submodular functions [44], denoted as GMSF. In particular, GMSF exploits a utility function which calculates the ratio of the amount of requests a cache server can serve to the corresponding assignment cost generated by Eq. (13). GMSF greedily selects cache servers to place contents and makes request assignments until all requests have been assigned. Figure 9 plots the latency statistics of the different algorithms under various RTT requirements in the Starlink constellation. Since OffPA and GMSF know the arrival time of all user requests and distribute contents to proper cloud or satellite cache servers in advance, they satisfy the various RTT requirements of geo-distributed user requests. OffPA accomplishes slightly lower latency than GMSF as the latency requirement increases. Although accurately obtaining the request arrival pattern is difficult in practice, our OffPA demonstrates the effectiveness of STAR FRONT in the (theoretically) ideal case. Because OnPA makes content placement and request assignment decisions in real time, it may suffer from cache misses, i.e., user requests are assigned to a nearby server that has not cached the desired contents. A cache miss occurs if all currently selected cache servers cannot satisfy the RTT requirement, and OnPA then has to open a new server close to the user and push contents to it. Therefore, OnPA results in a very long tail as shown in Figure 9. In particular, about 28.41%/11.09%/7.67%/6.20%/3.32% of the total user requests cannot meet the RTT requirements of 10ms/30ms/50ms/70ms/100ms respectively, as pushing contents to a new cache server involves additional delay. PAOA exploits the historical information of user requests, and pre-allocates contents close to the regions that used to issue requests to fetch contents. By pre-allocating contents on cache servers, it significantly reduces cache misses, and about 3.18%/3.87%/3.74%/3.00%/0.76% of the total user requests cannot meet the RTT requirements of 10ms/30ms/50ms/70ms/100ms respectively.
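
The GMSF baseline can be sketched as follows. Here `servable` (which requests a server can serve within the RTT bound) and `assignment_cost` (Eq. (13)) are injected stand-ins for the paper's model.

```python
def gmsf(candidate_servers, requests, servable, assignment_cost):
    """Greedy maximization of a submodular utility: repeatedly pick
    the cache server with the highest ratio of newly servable
    requests to its assignment cost, until all requests are assigned
    or no server can cover the remainder."""
    remaining = set(requests)
    placement = {}
    while remaining:
        best, best_ratio = None, -1.0
        for server in candidate_servers:
            covered = servable(server, remaining)
            if not covered:
                continue
            ratio = len(covered) / assignment_cost(server, covered)
            if ratio > best_ratio:
                best, best_ratio = server, ratio
        if best is None:  # remaining requests are unservable
            break
        placement[best] = servable(best, remaining)
        remaining -= placement[best]
    return placement, remaining
```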

Next we compare the latency reduction in global content distribution achieved by different strategies: (i) the state-of-the-art low-latency content placement and assignment scheme in existing cloud-based CDNs (denoted as Cloud-based SoA) (e.g., TailCutter [28], GRP [27], CosTLO [53]); (ii) STAR FRONT, our proposed framework that judiciously exploits cloud and satellite servers to place contents and assign user requests to proper cache servers to satisfy the latency requirements of various applications, while minimizing the operational cost. For comparison, here we evaluate STAR FRONT with PAOA because the other comparison approaches are practical online strategies. In addition, to comprehensively understand the incremental effectiveness of integrating LEO satellites into existing cloud-based CDNs, we evaluate STAR FRONT under two specific configurations: (iii) replicas are cached on cloud servers, and users can only access clouds via satellite paths (i.e., C = ∅, S = ∅ but CS ≠ ∅ in Section IV); and (iv) replicas are cached on cloud servers, and can be fetched by users through terrestrial or satellite paths (i.e., S = ∅, but C ≠ ∅ and CS ≠ ∅ in Section IV). Strategies (iii) and (iv) refer to the methodology that only exploits satellite networks to extend the connectivity of terrestrial clouds, without using the storage capability of satellites to cache contents in space. We denote (iii) as cloud cache accessed by satellite paths (i.e., CCS) and (iv) as cloud cache accessed by terrestrial and satellite paths (i.e., CCTS).

Figure 10 plots the CDF of user-perceived RTTs of different content distribution strategies under various latency requirements. The results of STAR FRONT are obtained under the configuration of the Starlink constellation. Results with OneWeb and Kuiper are similar and omitted due to the page limit. Since STAR FRONT integrates clouds and satellites to store and distribute content globally, it outperforms the state-of-the-art cloud-based strategy by 90.51%/66.63%/52.82%/35.62%/15.45% on average, under the RTT requirements of 10ms/30ms/50ms/70ms/100ms respectively. More specifically, we make several observations. First, for stringent RTT requirements (e.g., ≤ 10ms), exploiting the satellite network to accelerate cloud access and even directly provide cache in space can significantly improve the ability to satisfy the latency requirements of wide-area user requests. This is because incorporating LEO satellites complements terrestrial CDNs and enables low-latency access to cloud and satellite servers from a global perspective. Second, as the required RTT increases (e.g., 10ms → 100ms), the latency performance of STAR FRONT gets closer to that of the cloud-based-only approach. This result indicates that under a loose latency constraint, STAR FRONT preferably uses more cloud-based resources to save the operational costs involved by satellites. Third, caching on satellites can further help reduce latency, but inevitably involves much higher operational costs. Upon further analysis, we find that LEO satellites are more suitable for caching contents that will be requested by international users. This is because LEO satellites are inherently highly dynamic, and a satellite cache holding regional contents may suffer from low cache utilization as the satellite orbits the earth at high velocity.

Further, we turn our focus to the RTTs in a set of geo-distributed regions. Figure 11 shows the latency comparison under different content distribution strategies calculated from a collection of vantage points, which are populated cities or areas around the world. We observe that the latency gain achieved by STAR FRONT differs across regions. For users in remote or under-developed areas, STAR FRONT achieves a much larger latency improvement, since the cloud deployment and terrestrial network infrastructure in these regions might be underserved, and LEO satellites extend the availability and performance of terrestrial cloud platforms. Specifically, STAR FRONT reduces RTT by more than 90% for users in remote regions such as Papeete and Majuro, as compared to the cloud-only strategy. For users in populated areas like Kansas City and Chengdu, all strategies achieve comparable latency results due to the sufficient deployment of nearby cloud infrastructure.


D. Latency Reduction Under Various Replica Sizes

Replicas from content providers may have different object sizes (i.e., Wk defined in §IV-B) in practice, according to the concrete application type (e.g., static texts, files or video clips). We assess the content access latency, which indicates how fast the requested object can be delivered to users under various replica sizes. Figure 12 depicts the user-perceived latency achieved by different content distribution strategies, with various configurations of the average replica size. As shown in Figure 12, we observe that requests for smaller objects tend to achieve more latency reduction, as compared with larger requests. The major benefit of STAR FRONT is that it exploits emerging LEO satellites to push contents closer to users and realize lower client-to-content RTT. On deeper analysis, we find that for small requests the total content access latency is dominated by the RTT. Since the content access latency is jointly affected by the achievable throughput and the RTT, small requests are responded to faster in low-RTT situations.
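
The RTT-dominance of small requests follows from a first-order latency model: one RTT to issue the request plus the object's transfer time. This sketch deliberately ignores handshakes and TCP slow start.

```python
def access_latency_ms(rtt_ms, object_bytes, throughput_mbps):
    """First-order content access latency: one RTT for the request
    plus the object's transfer time at the achievable throughput."""
    transfer_ms = object_bytes * 8 / (throughput_mbps * 1e6) * 1e3
    return rtt_ms + transfer_ms

# A 10 KB object at 100 Mbps transfers in 0.8 ms, so a 50 ms RTT
# dominates the total; a 100 MB object is throughput-bound instead.
```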

Further, to understand how different distribution strategies affect high-level QoE metrics, we evaluate the page load time of representative web traffic under the four strategies. In particular, Figure 13 shows the page load time of visiting different websites when leveraging clouds or satellites to distribute their web contents. In each workload, the user’s browser issues a series of requests to fetch web contents, including images, texts, CSS files and other related objects constructing the web page. Here we calculate the page load time as the time to completely load all required elements in the web page. STAR FRONT reduces the page load time as compared with the cloud-based CDN by up to 88.93%, and by 78.99% on average. Moreover, we observe that STAR FRONT achieves a higher latency gain for web pages that contain smaller elements, since their page load time is dominated by client-to-server RTTs.


E. Latency Reduction Under Different Constellations

Next we examine the latency reduction under different constellation patterns. Specifically, in our experiment we compare the latency under the three state-of-the-art constellations and their combination, as illustrated in Table II. Figure 14 shows the latency results under different constellation patterns. We find that STAR FRONT associated with Starlink can achieve lower latency as compared with OneWeb and Kuiper. The reason is threefold. First, the OneWeb and Kuiper satellite constellations operate at a higher altitude than Starlink, and thus they suffer from higher propagation delay when working as cache servers or providing network connectivity to a terrestrial cloud. Second, OneWeb satellites do not have inter-satellite data links, which limits the latency improvement when using satellites to construct space routes that extend the accessibility of cloud servers. Third, the Starlink constellation consists of more LEO satellites than OneWeb and Kuiper. If the satellites in mega-constellations have adequate storage capability to cache contents, STAR FRONT associated with Starlink can obtain a higher latency gain, since a denser constellation enables more cache servers and more diverse low-latency space routes for fetching contents from a cloud. In addition, we denote the situation in which STAR FRONT can combine and leverage all satellite caches in all three constellations as “STAR FRONT-Combination”. The results in Figure 14 demonstrate that if all constellations collaborate to construct a large cache network in space, more latency reduction can be achieved as compared with a single constellation, since more available LEO satellites in different constellations extend the availability of low-latency caches.
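
The first reason can be checked with a back-of-the-envelope propagation delay: even in the best case (satellite directly overhead, signal at the speed of light in vacuum), a 1200 km orbit costs more than twice the one-way delay of a 550 km orbit.

```python
SPEED_OF_LIGHT_KM_PER_MS = 299_792.458 / 1000.0

def min_one_way_delay_ms(altitude_km):
    """Lower bound on ground-satellite propagation delay: the
    satellite is directly overhead and the signal travels at c."""
    return altitude_km / SPEED_OF_LIGHT_KM_PER_MS

# Starlink (~550 km) -> about 1.83 ms; OneWeb (~1200 km) -> about 4.00 ms.
```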


F. Resilience Analysis

LEO satellites operate in an intermittent, error-prone outer space environment. Network links, including both ground-satellite and inter-satellite links, are likely to be interrupted due to a series of complex factors, such as interference caused by space radiation, debris hazards, poor weather conditions, etc. We conduct an experiment to evaluate the resilience of STAR FRONT under various failures. In particular, we randomly create link failures in the mega-constellation, and define the failure rate as the ratio of the number of disrupted links to the number of total links. Figure 15 plots the latency results under various failure ratios. On one hand, when an unexpected failure occurs, our online algorithm immediately performs a local re-calculation and reassigns user requests to another available cache node with the minimal access latency. On the other hand, the integration of the mega-constellation and geo-distributed terrestrial clouds provides a number of backup cache servers to cope with disruptions. Specifically, when the failure rate increases to 15%/30%/50%, the average content access latency increases by 0.24%/0.06%/81.91% (note the log scale on the x-axis and the long tail). The latency does not increase much when the failure rate is low, because there are many redundant satellite paths and available cache servers in the large-scale mega-constellation.
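
The failure-injection procedure and the local re-assignment step can be sketched as follows. The `latency_ms` field and the `reachable` predicate are hypothetical stand-ins for the paper's full path computation.

```python
import random

def inject_failures(links, failure_rate, seed=0):
    """Randomly disrupt a fraction of links: the failure rate is the
    ratio of disrupted links to total links."""
    rng = random.Random(seed)
    return set(rng.sample(sorted(links), int(failure_rate * len(links))))

def reassign(caches, reachable, down_links):
    """Local re-calculation on failure: among the caches that remain
    reachable, re-pick the one with minimal access latency."""
    alive = [c for c in caches if reachable(c, down_links)]
    return min(alive, key=lambda c: c["latency_ms"], default=None)
```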


G. Cost Analysis

Finally, we evaluate the operational cost under different content distribution strategies and with the different pricing models defined in §IV-B. Table IV summarizes the cost breakdown, i.e., the storage and bandwidth cost consumed by satellite and cloud cache servers. We make several observations. First, the linear and concave storage pricing models present similar cost results. We find that this is because during the content distribution process, bandwidth consumption accounts for a large fraction of the entire operational cost. Second, the cost increases as we enlarge the cost coefficients α and β. As compared to cloud-only strategies, STAR FRONT achieves lower latency at the cost of higher content distribution fees. While incorporating satellites to distribute contents globally could be more expensive, the cost might not be unacceptable, and should be worthwhile especially for high-priority regions or tasks with very stringent latency requirements.
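
The first observation can be illustrated with a small worked example: when the bandwidth term dominates, swapping the storage pricing model barely moves the total. The coefficients and the square-root concave curve below are illustrative assumptions, not values from Table IV.

```python
import math

def total_cost(storage_gb, bandwidth_gb, alpha, beta, model="linear"):
    """Operational cost = storage cost + bandwidth cost (coefficients
    alpha and beta). Only the storage term depends on the pricing model."""
    if model == "linear":
        storage = alpha * storage_gb
    else:
        storage = alpha * math.sqrt(storage_gb)  # concave bulk discount
    return storage + beta * bandwidth_gb

# With a bandwidth-heavy workload the two models nearly coincide:
linear_total = total_cost(100, 10_000, 0.02, 0.08)             # 2.0 + 800.0
concave_total = total_cost(100, 10_000, 0.02, 0.08, "concave")  # 0.2 + 800.0
```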
