Distributed Hybrid Ground Station Architecture¶
We propose a new distributed hybrid ground station architecture, L2D2, for Earth observation satellites. An overview of L2D2 is in Fig. 2. L2D2 consists of multiple ground stations spread across the globe. Each of these ground stations is connected to the Internet and communicates with a centralized scheduler. L2D2 ground stations have three distinctive characteristics:
• Geographically Distributed: L2D2 ground stations are spread across the globe, maintained by independent individuals, volunteers, or corporations. This geographic distribution has two advantages. First, it enables satellites to follow a dynamic downlink schedule. If the link from satellite 𝛼 to ground station 𝑖 is expected to encounter clouds, the satellite can downlink data at a different ground station 𝑗 that falls along its path. Second, the geographic distribution reduces latency in the data downlink process. This allows the download plan to be cognizant of the latency-sensitivity of the data. For instance, in latency-sensitive applications like monitoring forest fires and floods, the sensitive data can be downlinked in tens of minutes in a geographically distributed network, but would take hours to days in a centralized architecture.
• Hybrid: As noted in Sec. 2, the data communication for Earth observation satellites is primarily downlink. Moreover, enabling uplink on a ground station requires following complex licensing requirements [34] that are both expensive and time-consuming. In L2D2, we allow for a majority of the nodes to be receive-only, i.e. they do not transmit any data. This is an important design choice for making the system scalable. At the same time, this design choice opens up a lot of interesting systems problems that we discuss below.
• Low-complexity: For the system to be deployed at scale, we require the cost and complexity of individual ground stations to be low. As such, the individual ground stations in L2D2 do not have high gain, specialized equipment, but rely on commodity hardware that is easily deployable on rooftops and backyards. This decreases the capacity of individual links by an order of magnitude or more, but the reduced capacity is compensated through geographic diversity.
Overview: In L2D2, a scheduler estimates the trajectory of a satellite for a fixed future time-interval (e.g. one day). Then, it estimates the link quality between all satellite-ground station pairs using the link quality estimation method in Sec. 3.2. It then identifies an optimal match between satellites and ground stations at each time instant (Sec. 3.1). This schedule is distributed to all the ground stations over the Internet. The downlink schedule for each satellite is also uploaded to individual satellites when they come in contact with a transmit-capable ground station (4-5 in our network). Then, along their paths, the satellites follow the planned schedule and downlink data to receive-only ground stations, which follow the shared schedule as well and point to the corresponding satellite. This data is then collated at the back-end and any missing pieces can be communicated to the satellite during the next contact with a transmit-capable station (Sec. 3.3).
Downlink Scheduling¶
In this section, we formalize the problem of scheduling the satellite-ground station downlink and provide a mechanism to identify the right downlink schedule. Before we delve deeper into L2D2’s downlink scheduler, we note that scheduling downlink for a distributed hybrid architecture like L2D2 is fundamentally different from scheduling for a centralized architecture with a small number of ground stations and satellites. In centralized architectures, it is rare for multiple satellites to compete for a ground station’s time. This is because the number of satellites is small (typically only from a single provider) and each satellite-ground station contact lasts just ten minutes. The goal of L2D2 is to serve as a ground station fabric spread across the planet that can downlink data from satellites from multiple providers. At such scale, conflicts become the norm. In our dataset (Sec. 4), we see up to 100 satellites (median 14) competing for a ground station. More importantly, innovative scheduling benefits the robustness of our architecture. When a satellite-ground station link is going to be weakened by hardware constraints, multipath effects, or bad weather, we can predict it in advance and schedule the satellite to downlink at a different ground station.
Intuitively, we aim to find a scheduling algorithm that can optimize a pre-defined value function across time and across all satellite-ground station pairs. As we formalize this problem below, we face two high-level challenges. First, for a satellite, switching between ground stations is not instantaneous. Satellites may have to steer their antennas either mechanically or electronically and incur costs in terms of delays, lost throughput, etc. Second, the problem of identifying the best schedule across space-time with switching delays is known to be NP-hard. We present an algorithm that tackles these challenges below.
(The section above is long-winded, so it was handed to an AI to produce the summary below.)
Summary of the Section
This section addresses the satellite downlink scheduling problem in a large-scale distributed ground station network such as L2D2. The core idea is to find, at every time step, an optimal matching between satellites and ground stations that maximizes the total "value" the system accrues over a time horizon. To this end, the authors propose a greedy scheduling algorithm that accounts for switching costs and reduce it to a graph problem that standard algorithms solve efficiently.
A detailed step-by-step breakdown follows:
1. Problem Formulation and Value Definition
First, the section formalizes the scheduling problem:
- Basic elements: the system consists of a set of satellites \(S = \{s_1, ..., s_M\}\) and a set of ground stations \(G = \{g_1, ..., g_N\}\). Each satellite's state is defined by its orbital data (TLE).
- Value function: a value function \(\phi(x, t)\) gives the value generated by transmitting data \(x\) at time \(t\). The function is deliberately flexible and can encode different objectives, for example:
  - maximizing throughput;
  - minimizing latency (assigning higher value to latency-sensitive data);
  - meeting service-level agreements (SLAs) or responding to customer bids.
- Value matrix \(\Phi^t\): at every time instant \(t\), the system builds a value matrix \(\Phi^t\):
  - Orbit computation: predict satellite trajectories from the latest TLE data, yielding the visibility, elevation, and azimuth of every satellite-ground station pair.
  - Rate prediction: estimate the expected data rate \(D_{ij}^t\) between satellite \(s_i\) and ground station \(g_j\) from those parameters.
  - Value computation: each entry \(\Phi_{ij}^t = \phi(\min(X_i^t, D_{ij}^t), t)\), where \(X_i^t\) is the amount of data satellite \(s_i\) currently holds; this ensures the value reflects the data that can actually be transmitted.
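A minimal sketch of this value-matrix construction in pure Python (the value function `phi`, the rate estimates `D`, and all numbers are illustrative placeholders, not values from the paper):

```python
# Sketch of the per-timestep value matrix. phi and D are placeholders:
# the paper derives D from orbit geometry plus the learned link-quality
# model, and phi from the operator's objective.

def phi(x, t):
    """Toy value function: value of downlinking x bits at time t.
    Here: pure throughput (latency-insensitive)."""
    return x

def value_matrix(X, D, t):
    """Phi[i][j] = phi(min(X[i], D[i][j]), t): the value of matching
    satellite i (holding X[i] bits) to ground station j, whose link is
    predicted to carry D[i][j] bits in this slot."""
    return [[phi(min(X[i], D[i][j]), t) for j in range(len(D[i]))]
            for i in range(len(X))]

# Two satellites, three ground stations; entries are bits per time slot.
X = [500, 80]                      # data backlog on each satellite
D = [[300, 0, 120],                # satellite 0's predicted rates
     [150, 200, 0]]                # satellite 1's predicted rates
Phi = value_matrix(X, D, t=0)
# Satellite 1 holds only 80 bits, so its value is capped at 80 even
# where the link could carry 150 or 200.
```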
2. Scheduling Algorithm and Switching Cost
With the value matrices defined, the scheduling goal is to find a sequence of matchings over the horizon \(T\) that maximizes total value.
- Optimization objective: the problem is cast as finding a sequence of permutation matrices \(P^t\) (for \(t = 1, ..., T\)) that maximizes the total value \(\sum_t Tr(P^t \Phi^t)\). Here \(P^t\) is a binary matrix encoding the matching at time \(t\), and \(Tr(\cdot)\) denotes the matrix trace (the sum of the diagonal entries); the trace of the product equals the total value of that matching.
- Switching penalty: a key practical issue is that switching a satellite between ground stations (re-pointing its antenna) incurs delay and throughput loss, i.e., a switching cost. If this cost is ignored, the algorithm may make satellites switch constantly, which is unrealistic.
- Bonus mechanism: to address this, the algorithm introduces a bonus matrix \(B^t\) that rewards a satellite for keeping its current connection rather than switching arbitrarily.
  - \(B^t\) is derived from the previous matching \(P^{t-1}\) and a bonus factor \(b\) (\(B^t \propto b \times P^{t-1}\)).
  - The more valuable a connection is, the larger the bonus for keeping it, which makes good connections "sticky".
- Greedy algorithm: the per-step objective thus changes from maximizing \(Tr(P^t \Phi^t)\) to maximizing \(Tr(P^t (\Phi^t + B^t))\). This is a greedy strategy: at each time step, make the decision that jointly accounts for the current value and the bonus for keeping the previous connection.
3. Solution Method
Finally, the section explains how to efficiently solve the per-step matching problem \(P^t = \arg\max_{P} Tr(P (\Phi^t + B^t))\).
- Problem mapping: this problem maps exactly to a classic graph problem, maximum weight bipartite matching:
  - treat the satellite set and the ground station set as the two sides of a bipartite graph;
  - the weight of the edge between satellite \(s_i\) and ground station \(g_j\) is the corresponding entry of \((\Phi^t + B^t)\);
  - finding the optimal matching matrix \(P^t\) is equivalent to finding a maximum-weight matching in this graph.
- Hungarian algorithm: L2D2 uses the well-established, efficient Hungarian algorithm to solve this maximum weight bipartite matching problem; running it at every time step quickly yields the optimal matching \(P^t\) for that instant.
By iterating these steps in time order, L2D2 produces a complete, efficient downlink schedule that accounts for switching costs.
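One greedy step of this scheme can be sketched as follows. For the tiny instance below, a brute-force search over assignments stands in for the Hungarian algorithm (a real implementation would use it, e.g. via `scipy.optimize.linear_sum_assignment`); all numbers are made up:

```python
# One greedy scheduling step: maximize Tr(P (Phi + B)) over matchings P.
from itertools import permutations

def best_matching(W):
    """Max-weight one-to-one assignment of satellites (rows) to ground
    stations (columns), assuming at least as many stations as satellites.
    Returns (assignment, weight), where assignment[i] is the station
    matched to satellite i. Brute force: fine only for tiny instances."""
    n_sat, n_gs = len(W), len(W[0])
    best, best_w = None, float("-inf")
    for perm in permutations(range(n_gs), n_sat):
        w = sum(W[i][perm[i]] for i in range(n_sat))
        if w > best_w:
            best, best_w = list(perm), w
    return best, best_w

def schedule_step(Phi, prev, bonus):
    """Add the stickiness bonus B (proportional to the previous matching
    P^{t-1}) to Phi, then solve the matching on Phi + B."""
    W = [[Phi[i][j] + (bonus if prev is not None and prev[i] == j else 0)
          for j in range(len(Phi[i]))] for i in range(len(Phi))]
    return best_matching(W)[0]

# With no bonus, the matching for Phi = [[8, 10], [7, 8]] flips to
# [1, 0]; with prev = [0, 1] and bonus 3, the previous matching [0, 1]
# stays preferred, i.e. the good links are "sticky".
assignment = schedule_step([[8, 10], [7, 8]], prev=[0, 1], bonus=3)
```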
Rate Adaptation¶
For a typical wireless link, the ideal data rate for data transmission is estimated either using direct feedback from the receiver or through indirect feedback e.g. acknowledgments. For example, in the state-of-the-art radio design for cubesats [19], the radios use DVB-S2 protocol for communication and use adaptive coding and modulation to adapt to different satellite-ground station link conditions. The satellite radio selects the best modulation and coding configuration based on feedback received from the ground station.
In L2D2, a majority of our ground stations are receive-only. As such, we cannot rely on feedback from the ground station to identify the right data rate for the downlink. This is crucial because if the satellite chooses too low a data rate, it wastes opportunities to downlink more data; conversely, picking a data rate higher than what the link can support leads to heavy packet loss. Furthermore, we need to estimate the amount of data that will be transmitted during a contact before the communication begins, in order to select the optimal satellite-ground station links using Algorithm 1.
To solve this challenge, we propose a new rate adaptation algorithm that does not rely on observed link quality but uses predicted link quality. We build a new prediction framework that can estimate the signal strength and signal-to-noise ratio for a signal received at the ground station, and use it to identify the ideal rate for data transmission. Intuitively, the loss in signal strength as the signal travels from a satellite to the Earth happens because of three reasons: (a) Propagation loss: This increases with distance and signal frequency (logarithmic). This loss increases as the elevation of the satellite decreases. Lower elevation implies that the satellite is closer to the horizon and as a result, the signal has to travel a longer distance. (b) Weather-related Loss: Clouds, rain, and snow induce additional attenuation for the signal. This can vary from zero to 10 dB depending on the weather and signal frequency [6]. Higher frequencies experience this effect more severely. (c) Hardware effects: The design of the antennas and circuits in the transmitter and receiver introduces a link-specific loss.
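To make the elevation dependence of item (a) concrete, here is a back-of-the-envelope free-space path loss sketch using spherical-Earth slant-range geometry (the 500 km altitude and 8.1 GHz carrier are illustrative assumptions, not parameters from the paper):

```python
# Free-space path loss grows logarithmically with distance and
# frequency; the slant range grows as elevation drops, so links near
# the horizon lose several extra dB.
import math

C = 299_792_458.0        # speed of light, m/s
R_EARTH = 6_371_000.0    # mean Earth radius, m

def slant_range(elevation_deg, altitude_m):
    """Ground-station-to-satellite distance at a given elevation angle
    (spherical-Earth geometry)."""
    el = math.radians(elevation_deg)
    r = R_EARTH + altitude_m
    return (math.sqrt(r * r - (R_EARTH * math.cos(el)) ** 2)
            - R_EARTH * math.sin(el))

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20 log10(4 * pi * d * f / c)."""
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / C)

# 500 km orbit, 8.1 GHz X-band downlink (illustrative numbers): the
# loss at 10 degrees elevation exceeds the loss at zenith by ~10 dB
# because the signal travels a much longer path near the horizon.
loss_low = fspl_db(slant_range(10, 500_000), 8.1e9)
loss_high = fspl_db(slant_range(90, 500_000), 8.1e9)
```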
At a high level, the propagation loss can be estimated using orbit calculations and the hardware effects can be calibrated for. There are also standard models for understanding weather-related loss [31–33]. Therefore, in principle, a combination of these will help predict the link quality accurately. However, as we demonstrate in Sec. 6, this combination proves to be empirically inaccurate. Our analysis of real measurements reveals that the surroundings of the satellite and ground stations play an important role in link quality. Specifically, the presence of reflectors (like solar panels) on satellites, their relative positions, and their motion stability impacts the signal strength at the receiver. Similarly, the presence of buildings, trees, etc. in the proximity of the ground station causes obstacles or reflections on the receiver’s end.
Since this behavior varies with the satellite and ground station, we take a data-driven approach to estimating link quality. Specifically, we develop a machine learning model for each satellite-ground station link that can learn the quirks of each link and its temporal variation. Our model uses (a) satellite orbital position relative to the ground station and (b) weather conditions at the ground station to predict the signal strength of the link. L2D2’s choice to use machine learning is motivated by two factors: (a) the reflectors and obstacles near satellites vary per satellite and are hard to model without observed data, and (b) signal strength data is easily available at ground stations. Therefore, these models can be easily trained using data measured at the ground station during the first few days after setup.
Model Architecture: L2D2 uses an ensemble model to predict signal strength. Ensemble models are known to be highly performant because they combine multiple model architectures that extract varied insights. Specifically, L2D2 uses two model architectures: (a) gradient boosted regression trees and (b) deep learning-based regression. The tree ensemble is fit on five folds of the input feature matrix. We use 1000 trees as weak estimators with a maximum tree depth of 8, using the XGBoost implementation [12]. The deep learning regression model has 4 layers, each using rectified linear unit (ReLU) activations to incorporate non-linearity. We stack the outputs of these two models and feed them into a final linear regression model that outputs the signal strength estimate. Fig. 4 illustrates the design.
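The stacking step can be illustrated in miniature: the two base models' predictions become input features for a final linear regressor fit by ordinary least squares. The base predictions and dBm values below are fabricated stand-ins; the paper's base models are a 1000-tree XGBoost ensemble and a 4-layer ReLU network:

```python
# Minimal stacking sketch: fit y ~ w0 + w1*p1 + w2*p2 on the two base
# models' predictions via the normal equations (pure Python).

def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination."""
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_stacker(p1, p2, y):
    """Least-squares weights (w0, w1, w2) via X^T X w = X^T y."""
    X = [[1.0, a, b] for a, b in zip(p1, p2)]
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)]
           for i in range(3)]
    Xty = [sum(row[i] * t for row, t in zip(X, y)) for i in range(3)]
    return solve3(XtX, Xty)

# Fabricated training window: true signal strengths (dBm) plus two
# imperfect base-model predictions (one biased high, one biased low).
y  = [-95.0, -100.0, -92.0, -98.0, -105.0, -90.0]
p1 = [-93.5,  -99.8, -90.2, -97.1, -103.0, -89.5]
p2 = [-96.0, -102.5, -93.1, -99.0, -107.2, -91.8]
w = fit_stacker(p1, p2, y)
stacked = [w[0] + w[1] * a + w[2] * b for a, b in zip(p1, p2)]
# In-sample, the stacked predictor is at least as accurate as either
# base model, since OLS can always reproduce either one exactly.
```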
Input Features: L2D2 uses features that describe (a) satellite orbital positions: elevation and azimuth angle of the satellite with respect to the ground station, and (b) weather: precipitation intensity. As expected, due to their high frequencies, Ka-band satellites are more sensitive to weather. Therefore, we augment the dataset with cloud cover and precipitation probability in addition to precipitation intensity. X-band satellites are not as susceptible to atmospheric variation so we do not include cloud cover and precipitation probability for them. The models can be trained to either output signal strength or the signal to noise ratio. The output is measured on the log scale, in dB or dBm, as is the norm in wireless systems.
Loss Function: We train the models using the mean squared error loss. However, if we use the output of the model directly to estimate the data rate, it is expected to make roughly equal errors in terms of over-estimating and under-estimating the signal strength. In reality, over-estimation errors for data rate lead to packet losses (100% loss) while under-estimation errors just lead to a partial data rate loss. Therefore, we learn another buffer parameter, 𝜖, from the data. We subtract 𝜖 from the model output and choose 𝜖 (during training) such that the data rate prediction error is minimized. This value (typically 1-2 dB) helps us reduce the data rate loss even further. Finally, note that we do not train the regression model to directly predict the data rate because it is a discrete parameter and leads to a non-differentiable loss function.
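A toy sketch of learning the buffer 𝜖: sweep candidate values over (predicted, actual) SNR pairs from a training window and keep the one that maximizes delivered throughput, reflecting that over-estimating the rate loses the whole transmission while under-estimating only loses headroom. The SNR-to-rate table and samples below are illustrative, not the paper's modulation/coding set:

```python
# Asymmetric-loss rate selection: transmit at the rate implied by
# (predicted SNR - eps); if the true link cannot sustain it, nothing
# is delivered.

RATE_TABLE = [(12.0, 4.0), (9.0, 3.0), (6.0, 2.0), (3.0, 1.0)]  # (min SNR dB, Mbps)

def rate_for(snr_db):
    """Highest rate whose SNR threshold the link meets (0 if none)."""
    for thresh, rate in RATE_TABLE:
        if snr_db >= thresh:
            return rate
    return 0.0

def delivered(pred_snr, true_snr, eps):
    """Throughput delivered when transmitting at the rate chosen from
    (pred - eps): zero if the true link cannot sustain that rate."""
    chosen = rate_for(pred_snr - eps)
    return chosen if chosen <= rate_for(true_snr) else 0.0

def learn_eps(preds, truths, candidates=(0.0, 0.5, 1.0, 1.5, 2.0, 3.0)):
    """Pick the buffer that maximizes total delivered throughput."""
    return max(candidates,
               key=lambda e: sum(delivered(p, t, e)
                                 for p, t in zip(preds, truths)))

# These fabricated predictions occasionally over-estimate the true SNR
# by up to ~1 dB, so a ~1 dB buffer recovers otherwise-lost contacts.
eps = learn_eps([10, 7, 12, 5, 6.9], [9, 6.5, 11, 5.5, 5.8])
```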
Induction of New Ground Stations: In L2D2, we expect new ground stations to join the network frequently. It would be inefficient to wait for a new model to be trained for each satellite before a new ground station can be used effectively. Therefore, we devise a transfer learning strategy that uses the models learnt at other ground stations for a given satellite to infer a model for a new ground station, thereby reducing its startup time.
We describe several modifications to Fig. 4 for transfer learning in order to estimate the link between a newly added ground station 𝑔𝑖 and an existing satellite 𝑠𝑘. Let us assume that we already have models trained for satellite 𝑠𝑘 and ground station 𝑔𝑗. We freeze the gradient boosted tree ensemble and the top three layers of the deep learning regression framework, based on the pre-trained model (𝑠𝑘-𝑔𝑗). Then, we use a small number of measurements on a new ground station, 𝑔𝑖, to re-train the last layer of the deep learning regressor and the linear regression model shown in Fig. 4. This method allows us to encode the generic features of the satellite 𝑠𝑘 in the frozen layers that are transferred from the other ground station, and helps us quickly get to the ground station specific features. In doing so, we transfer our model from one ground station to another and reduce the induction time for a new ground station.
Satellite Feedback¶
Since a majority of L2D2 ground stations are receive-only, they cannot transmit acknowledgments (transport-layer or application-layer) back to the satellite. This has two implications: (a) the satellite cannot re-transmit data when packets get lost, e.g., due to errors in link estimation, and (b) the satellite cannot safely remove data from its storage, since it does not know which packets were received. This is untenable because cubesats have limited storage capacity. Therefore, we must identify a mechanism to communicate this information to the satellite.
To achieve this objective, we deploy a delayed-relayed acknowledgment mechanism. Specifically, every L2D2 ground station checks its received imagery and creates a bitmap for the received data, in which each bit denotes whether a packet was received successfully. This bitmap is then broadcast to all transmit-capable stations (typically three to five stations) over the Internet. Note that these bitmaps are small and can be transferred at low latency over the Internet. The bitmap is then conveyed to the satellite when it comes in contact with a transmit-capable station, which can be a few minutes to a few hours after the actual packet transmission. On receiving the acknowledgment, a satellite deletes the data that was successfully received at the ground station and places the unacknowledged data back in its transmit queue. Note that with centralized architectures, satellites have to store data for a few orbits anyway, so our approach does not increase the storage requirement on a satellite (see Sec. 6). In addition, L2D2 ground stations receive the data from satellites at a lower latency than traditional architectures.
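The delayed-relayed acknowledgment flow can be sketched as follows (packet names, burst sizes, and the queue API are illustrative, not from the paper):

```python
# Sketch: a receive-only station builds a bitmap of received packets;
# a transmit-capable station later relays it to the satellite, which
# deletes acked data and requeues the rest for retransmission.

def build_bitmap(n_packets, received_ids):
    """One bit per packet in the transmitted burst: 1 = received."""
    return [1 if i in received_ids else 0 for i in range(n_packets)]

class SatelliteQueue:
    def __init__(self, packets):
        self.transmit_queue = list(packets)   # payloads awaiting downlink
        self.unacked = []                     # sent but not yet acknowledged

    def downlink(self, n):
        """Send the next n packets; keep them until an ack arrives."""
        sent = self.transmit_queue[:n]
        self.transmit_queue = self.transmit_queue[n:]
        self.unacked.extend(sent)
        return sent

    def apply_ack(self, bitmap):
        """Relayed bitmap for the oldest len(bitmap) unacked packets:
        delete received ones, requeue lost ones for retransmission."""
        covered = self.unacked[:len(bitmap)]
        self.unacked = self.unacked[len(bitmap):]
        lost = [p for p, bit in zip(covered, bitmap) if not bit]
        self.transmit_queue = lost + self.transmit_queue

sat = SatelliteQueue(["img0", "img1", "img2", "img3"])
burst = sat.downlink(3)                        # contact with a receive-only station
bitmap = build_bitmap(3, received_ids={0, 2})  # img1 was lost
sat.apply_ack(bitmap)                          # relayed via a transmit-capable station
# sat.transmit_queue is now ["img1", "img3"]; img0 and img2 are deleted.
```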
Discussion and Open Questions¶
Satellite Transmit Power: Typical satellites transmit data at a higher power than required for successful reception. This is done to ride out small link quality fluctuations due to orbits, obstructions, weather, etc. L2D2’s link estimation algorithm can identify such fluctuations in advance, and therefore reduce the need for extra transmit power, resulting in power savings at the satellite.
Satellite Power Management: Transmissions from satellite radios consume significant power. A design like L2D2 requires satellites to transmit data more frequently. Therefore, future iterations of L2D2 should incorporate power budgets of the satellite radio in the scheduler design optimization.
Edge compute on the ground station: Past proposals [17] have explored edge compute on the satellite to pre-filter downlinked data. Edge compute on the satellite requires hardware upgrades and is not agnostic to the underlying application. We believe L2D2 provides a new avenue for this line of work by enabling edge compute on the ground station. Ground stations can leverage edge compute techniques to deliver latency-sensitive data to the cloud faster and upload the other data at a lower priority.
Beamforming: We assume that every ground station can connect to only one satellite at any point in time. Some modern ground station designs have explored beamforming at the ground station. This would be an interesting addition to L2D2, enabling each ground station to split power between multiple satellites and thereby increase data downlink efficiency. A similar question arises when ground stations can leverage multiple frequencies to communicate with different satellite constellations. We leave the exploration of these new optimizations to future work.
Backward Compatibility: L2D2’s design is compatible with the DVB-S2 protocol used for data downlink. At this time, we cannot comment on compatibility with the software deployed on satellites due to lack of public documentation.
Economic and Security Implications: L2D2’s adoption hinges on appropriate economic incentives for operators to deploy ground stations and a security framework to prevent data misuse. This is an exciting direction for future research.