
Related Work

The problem of downlink constraints on new constellations of LEO satellites has been tackled in both academia [17–19, 54] and industry [1, 20, 34, 40, 46]. A large part of this work focuses on eking out more performance from individual satellite-ground station links. Such a design suffers from the limitations of the centralized architecture described above. Recently, [17] proposed offloading some computation to the satellites to reduce the downlink load. For instance, in a workload that needs images of buildings, the satellites could pre-filter building images before downlinking to the ground stations. However, this design runs contrary to the business model of Earth observation satellites, which sell the observed data to customers who then run the end application. In the absence of a priori knowledge of the end application, pre-filtering on the satellite might reject important information relevant to the user. In contrast, L2D2 downlinks all the data to the ground using a hybrid ground station design.

In industry, multiple efforts [1, 36, 40, 46] have emerged recently to rent out time on individual ground stations to satellite operators by the minute. This is a welcome trend in enabling access for new satellite operators, but it suffers from regulatory and equipment challenges similar to those of centralized architectures [27]. However, this investment opens up the possibility of new abstractions like distributed ground station architectures in the future. In L2D2, we investigate the tools that such a distributed design will require. VERGE [43] is perhaps the closest design to L2D2. In [43], Lockheed Martin plans to deploy low-cost S-band parabolic antennas in a geographically distributed manner. Each antenna will stream raw RF measurements to the cloud, where a software-defined receiver will decode the data. In contrast, L2D2 co-locates compute alongside the antenna, and only the decoded and processed data is sent to the cloud. This reduces the backhaul capacity and cost required to support a ground station by orders of magnitude. Furthermore, it enables edge compute workloads that can prioritize data upload to the cloud efficiently. One direct impact of this design choice is that [43] is limited to lower-bandwidth S-band downloads, as opposed to the X-band downloads that are common for Earth observation.

The scheduling problem for satellite-ground station links has been tackled in [8, 9, 25, 55, 58]. These systems do not account for link quality that varies over time and/or limit themselves to a single satellite and multiple ground stations. In contrast, L2D2 presents a scheduler for multi-satellite, multi-ground-station configurations while accounting for varying link qualities and switching delays.
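To make the flavor of this scheduling problem concrete, the sketch below greedily assigns satellite passes to ground stations by predicted data volume, while keeping each station idle for a re-pointing (switching) delay between consecutive passes. This is a minimal illustration, not the paper's actual algorithm; the `Pass` structure, the 30-second switching delay, and the greedy ordering are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Pass:
    satellite: str
    station: str
    start: float          # contact start time (seconds)
    end: float            # contact end time (seconds)
    rate_mbps: float      # predicted link quality during the pass

SWITCH_DELAY = 30.0       # assumed time a station needs to re-point (seconds)

def _overlaps(intervals, start, end):
    """True if [start, end) overlaps any interval in the list."""
    return any(start < e and s < end for s, e in intervals)

def greedy_schedule(passes):
    """Greedily pick passes in decreasing order of predicted data volume,
    skipping any pass that conflicts with an already-scheduled pass on the
    same station (padded by the switching delay) or the same satellite."""
    ranked = sorted(passes,
                    key=lambda p: (p.end - p.start) * p.rate_mbps,
                    reverse=True)
    busy_station = {}     # station -> list of (start, end), padded
    busy_sat = {}         # satellite -> list of (start, end)
    schedule = []
    for p in ranked:
        padded = (p.start - SWITCH_DELAY, p.end + SWITCH_DELAY)
        if _overlaps(busy_station.get(p.station, []), *padded):
            continue
        if _overlaps(busy_sat.get(p.satellite, []), p.start, p.end):
            continue
        busy_station.setdefault(p.station, []).append(padded)
        busy_sat.setdefault(p.satellite, []).append((p.start, p.end))
        schedule.append(p)
    return schedule
```

A greedy heuristic like this ignores global optimality, but it shows why switching delays matter: without the padding, two back-to-back passes at one station would be accepted even though the antenna cannot re-point in time.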

Prior work on satellite-ground link quality estimation has mainly been carried out on simulated data that does not capture the complexities of real-world signals, such as reflections close to the transceiver [26]. Some research efforts incorporate real-world link quality measurements in their designs, but exclusively for low-frequency links (UHF, S-band) [37, 49]. However, in the context of Earth observation satellite networking, L2D2's link estimation model based on X-band data is more applicable, since Earth observation satellites more commonly operate in this high-frequency range and such links are more prone to weather effects. L2D2 also outperforms prior statistical models for link quality prediction [31–33, 37]. L2D2 achieves this by using a data-driven approach that accounts for multipath effects and occlusions.
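As a rough illustration of why a data-driven model can capture station-specific multipath and occlusions, the sketch below buckets historical SNR samples by the antenna's look direction (azimuth/elevation bins), so that a direction blocked by a nearby obstacle is predicted from past measurements rather than from a free-space model. The bin sizes, the flat rain penalty, and the simple averaging are hypothetical choices for illustration, not the paper's model.

```python
from collections import defaultdict

class LinkQualityModel:
    """Predict per-direction SNR from historical measurements at one
    ground station. Fixed multipath and occlusions show up as persistently
    low SNR in particular (azimuth, elevation) bins, which a physics-only
    model would miss. Parameters here are illustrative assumptions."""

    def __init__(self, az_bin_deg=10, el_bin_deg=5, rain_penalty_db=3.0):
        self.az_bin_deg = az_bin_deg
        self.el_bin_deg = el_bin_deg
        self.rain_penalty_db = rain_penalty_db
        self.history = defaultdict(list)   # (az_bin, el_bin) -> [snr_db]

    def _key(self, az_deg, el_deg):
        return (int(az_deg // self.az_bin_deg),
                int(el_deg // self.el_bin_deg))

    def observe(self, az_deg, el_deg, snr_db):
        """Record a measured SNR sample for this look direction."""
        self.history[self._key(az_deg, el_deg)].append(snr_db)

    def predict(self, az_deg, el_deg, raining=False):
        """Mean historical SNR for the bin, minus a flat rain penalty;
        returns None when the direction has never been measured."""
        samples = self.history.get(self._key(az_deg, el_deg))
        if not samples:
            return None
        snr = sum(samples) / len(samples)
        return snr - (self.rain_penalty_db if raining else 0.0)
```

Feeding such per-direction predictions into a pass scheduler is what lets it prefer contacts whose geometry has historically been clean.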

Finally, L2D2 is inspired by past work on open source ground station designs [14, 41] and deployments of these stations [15, 41, 48, 53]. These deployments have fostered research in scheduling, mission control, and other aspects of ground station design [3, 55, 56, 63]. Most of these designs are limited to low-frequency, low-data-rate regimes for experimental satellites that transmit small amounts of data. In L2D2, we differ along three axes: a distributed design framework; high-frequency, high-bandwidth data downloads; and a mix of transmit-capable and receive-only ground stations.

We note that L2D2 builds on a previous workshop paper [60] and differs along three axes: (a) a new scheduling framework that accounts for switching delays, (b) a new data-driven link estimation algorithm, and (c) an extensive evaluation on a real-world dataset.
