4G/5G HANDOVER PREDICTION¶
In this section, we first introduce the HO prediction problem for cellular networks (LTE/5G). We then discuss the design of our system (Prognos) along with an overview of its components. As part of the evaluation, we compare the performance of Prognos against two existing approaches. Finally, we show the advantage of utilizing HO predictions for two mobile applications: 16K panoramic video-on-demand (VoD) and real-time volumetric video streaming.
Challenges and Goals¶
The design of Prognos is inspired by practical mobility management concerns. The “black-box”, policy-based HO logic employed by cellular carriers (e.g., to choose a target cell for a HO) depends on the carrier’s deployment strategy for a geographical area. Moreover, HO policies can change from one geographical region to another depending on the carrier’s goals. On the other hand, we observe low temporal variation in HO policies in our collected data. These insights are consistent with previous LTE studies [37]. Finally, the policy-based HO logic is unique to each HO type and can be formulated as a sequence of measurement reports (MRs) preceding a HO. For example, [A2, A5, LTEH_inter] translates to an A2 MR followed by an A5 MR that eventually triggered an LTE inter-frequency HO. Whether an MR is triggered in the first place depends on the mobility configurations and the signal strength of the serving and neighboring cells.
We seek to overcome these challenges and build a system that can learn such carrier-specific HO policies. More specifically, our goal is to build a lightweight, scalable, context-aware, and explainable system for HO prediction. An explainable system can help understand the “black-box” nature of HO policies and apply sanity checks during the prediction process. A transferable scheme helps the system scale by allowing models to be reused across areas with similar geographic properties and/or carrier deployment strategies. Any solution involving offline training relies on the collected dataset to learn HO policies and may not generalize to unseen mobility scenarios. A lightweight system avoids the unnecessary overhead of real-time prediction on energy-constrained mobile devices. As the UE moves, the system must also react to the changing radio environment. Finally, in addition to predicting HOs, a context-aware approach can consider factors such as the radio access technology (LTE, 5G) and bands to inform applications about possible improvements or deterioration of network conditions in the future.
We realize our goal and its design principles by adopting an incremental learning scheme that extends the system’s knowledge as more data arrives. Compared to offline training, our approach is more adaptive: Prognos adapts to all mobility scenarios, geographic locations, and cellular carriers. The HO logic learned by Prognos sheds light on carrier-specific HO decisions. It also facilitates sanity checks during prediction and reduces the action space; for example, an SCGM HO prediction cannot be made when a device is using LTE. Finally, Prognos outputs a meaningful value, ho_score, for applications, which specifies the expected change in network capacity due to a HO. We leverage the domain knowledge of cellular networks to design a system that predicts all HO types.
Design¶
Prognos is a holistic system for HO prediction and provides meaningful information about network fluctuations caused by HOs. The system consists of three key components (see Fig. 17 in Appendix A.4). The report predictor module considers mobility configurations and signal strength qualities to predict MRs. The decision learner module learns the carrier-specific HO decision logic by leveraging ideas from sequential pattern mining. Finally, the handover predictor module uses the sequence of predicted MRs and learned HO logic to forecast the HO type.
Measurement Report Prediction. Using MRs only after they have been triggered leaves just a short window – 70 ms in the median case – for the application to act proactively. The report predictor therefore predicts HOs earlier, leaving applications enough time to minimize QoE degradation during a HO. To decide whether a measurement event will be triggered and reported to the serving cell, we consider three factors: (i) the configurations (threshold, time-to-trigger (TTT), etc.) received from the serving cell for a measurement event, (ii) the predicted RRS of the serving cell, and (iii) the predicted RRS of the neighbor cell. To predict the RRS of the serving and neighbor cells in the next prediction window, the RRS values in the last history window are fed into a lightweight linear regression model. A triangular kernel-based method [46] is used for signal smoothing to eliminate variations caused by small-scale fading and measurement noise. Based on the configurations received from the serving cell and the predicted RRS, we forecast whether the triggering condition of an event will be satisfied in the next prediction window. If a triggering condition is met for the TTT duration, the report predictor module sends this prediction to the handover predictor module.
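To make this concrete, the sketch below (a minimal Python illustration, not Prognos’s implementation) forecasts the serving- and neighbor-cell RRS with a linear fit over the history window, applies a triangular-kernel smoother, and checks whether an A3-style entering condition would hold for TTT consecutive samples; the event form, configuration keys, and default values are assumptions rather than the exact RRC parameters.

```python
import numpy as np

def smooth_triangular(rss, width=5):
    """Triangular-kernel smoothing to suppress small-scale fading and
    measurement noise (the kernel width is an assumption)."""
    k = np.bartlett(width)
    return np.convolve(rss, k / k.sum(), mode="same")

def forecast_rss(history, horizon_samples, hz=20):
    """Fit a linear trend to the smoothed history window and extrapolate it
    over the next prediction window."""
    t = np.arange(len(history)) / hz
    slope, intercept = np.polyfit(t, smooth_triangular(np.asarray(history)), 1)
    future_t = t[-1] + np.arange(1, horizon_samples + 1) / hz
    return slope * future_t + intercept

def predict_report(serving_hist, neighbor_hist, cfg, hz=20):
    """Predict whether an A3-style event (neighbor becomes offset-better than
    serving) will be reported in the next prediction window. The cfg keys are
    illustrative stand-ins for the mobility configuration from the serving cell."""
    horizon = int(cfg["prediction_window_s"] * hz)
    serving = forecast_rss(serving_hist, horizon, hz)
    neighbor = forecast_rss(neighbor_hist, horizon, hz)
    entering = neighbor > serving + cfg["offset_db"]
    ttt_samples = max(1, int(cfg["ttt_s"] * hz))
    run = 0
    for ok in entering:                    # report only if the condition holds for TTT
        run = run + 1 if ok else 0
        if run >= ttt_samples:
            return True
    return False
```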
Policy-based Handover Logic Learning. The decision learner learns the up-to-date HO logic employed by the carrier. Its input is a continuous stream of MRs and HO commands delivered on the RRC layer. We split the input stream into phases, where each phase consists of MR(s) followed by a HO command. In Prognos, we call the learned decision logic a pattern: a unique sequence of MRs that repeatedly triggers a specific type of HO. The goal of the HO decision learning algorithm is to learn up-to-date patterns for each HO type. This sequence-based formulation of HO decision logic is motivated by sequential pattern mining [33]. We modify the PrefixSpan algorithm [58] so that it learns patterns in an online fashion. At the end of each phase, the online learning algorithm takes one of two actions: (i) increment the support count of a pattern if an old sequence is observed, or (ii) add a pattern if a new sequence is encountered. The algorithm also evicts old patterns according to a freshness threshold, where freshness simply captures how recently a pattern was observed. The eviction process further ensures that the number of learned HO patterns does not grow excessively. Finally, the phase count is incremented, and we wait for a new HO to process the next phase.
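The following minimal sketch illustrates the online learning step, assuming each phase arrives as an (MR sequence, HO type) pair; the class name, the freshness threshold value, and the exact-sequence bookkeeping are simplifying assumptions, whereas the actual decision learner mines frequent subsequences in the PrefixSpan style rather than counting exact sequences.

```python
class DecisionLearner:
    """Minimal sketch of online HO-pattern learning; a pattern is keyed by
    the MR sequence of a phase plus the HO type it triggered."""

    def __init__(self, freshness_threshold=500):
        self.patterns = {}               # (mr_sequence, ho_type) -> {"support", "last_seen"}
        self.phase_count = 0
        self.freshness_threshold = freshness_threshold  # phases before eviction (assumed value)

    def on_phase(self, mr_sequence, ho_type):
        """Process one phase: the MR(s) observed before a HO command."""
        key = (tuple(mr_sequence), ho_type)
        if key not in self.patterns:                     # (ii) new sequence -> add a pattern
            self.patterns[key] = {"support": 0, "last_seen": self.phase_count}
        self.patterns[key]["support"] += 1               # (i) old sequence -> bump its support
        self.patterns[key]["last_seen"] = self.phase_count
        self._evict_stale()
        self.phase_count += 1

    def _evict_stale(self):
        """Drop patterns not seen within the freshness threshold so the set
        of learned patterns does not grow excessively."""
        stale = [k for k, v in self.patterns.items()
                 if self.phase_count - v["last_seen"] > self.freshness_threshold]
        for k in stale:
            del self.patterns[k]
```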
Handover Prediction. To predict a HO, we consider the sequence of predicted MRs received so far in the current phase. This predicted sequence is matched against all the learned HO patterns sent by the decision learner. If no matching pattern is found among the candidates, the handover predictor makes a “no HO” prediction. Otherwise, the HO type is predicted based on the pattern with the highest similarity, where the similarity of a pattern is a function of its support count, length, and freshness. Finally, based on the predicted HO type and the current radio technology, Prognos generates a ho_score ∈ (0, ∞). This value represents the expected improvement or degradation in throughput (e.g., ho_score = 0.4 indicates a 60% degradation in throughput, while a score of 1 indicates no HO or no degradation). It is calculated empirically from the results reported in Fig. 16: we take the median change in network capacity, computed as the ratio of throughput after a HO to the throughput before it. Most of the time, ho_score is 1, representing “no HO” and thus no expected change in throughput due to a HO.
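The sketch below illustrates the matching step, reusing the DecisionLearner structure from the previous sketch; the exact-sequence matching, the similarity weighting, and the ho_score lookup table are illustrative assumptions, whereas in Prognos ho_score is derived empirically from the pre-/post-HO throughput ratios of Fig. 16.

```python
def predict_handover(predicted_mrs, learner, ho_score_table):
    """Match the predicted MR sequence of the current phase against the learned
    patterns and return (ho_type, ho_score). Exact-sequence matching and the
    similarity weighting below are simplifications for illustration."""
    best_type, best_sim = None, 0.0
    for (seq, ho_type), meta in learner.patterns.items():
        if list(seq) != list(predicted_mrs):
            continue                                   # not a candidate pattern
        freshness = 1.0 / (1 + learner.phase_count - meta["last_seen"])
        similarity = meta["support"] * len(seq) * freshness
        if similarity > best_sim:
            best_type, best_sim = ho_type, similarity
    if best_type is None:
        return "no_ho", 1.0                            # no candidate -> "no HO", no change expected
    return best_type, ho_score_table.get(best_type, 1.0)

# ho_score_table would be filled from measurements like Fig. 16, e.g. (hypothetical values):
# {"5G_mmWave->LTE": 0.4, "LTE_intra": 1.0, "LTE->5G_mmWave": 3.0}
```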
Performance Evaluation¶
We evaluate Prognos using trace-driven emulation. We collect logs from operational cellular networks using the methodology outlined previously (§3) and replay the traces.
Dataset. We collect two datasets. D1 consists of 7 traces, each a 35-minute walking loop of a tourist area. D2 is collected by walking a 25-minute loop 10 times in a city’s downtown area. Both datasets are collected for OpX and logged at 20 Hz. The major difference between the two is that D1 only has 5G mmWave and LTE Mid-Band coverage, while D2 has 5G Low-Band coverage as well. They also represent two different U.S. cities. We observe a total of over 320 and 840 HOs in D1 and D2, respectively. The data has imbalanced classes (i.e., HOs only cover 0.4% of the total data points). We therefore evaluate performance using metrics that are robust to class imbalance, such as F1-Score, precision, and recall.
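For illustration only, the snippet below shows why per-class metrics matter at this level of imbalance: with synthetic labels in which HOs make up roughly 0.4% of the windows, accuracy stays near 1 even when half of the HOs are missed, while precision, recall, and F1 for the HO class expose the errors.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Synthetic example: 1,000 prediction windows, only 4 true HOs (~0.4%).
y_true = [1 if i in (100, 300, 600, 900) else 0 for i in range(1000)]
y_pred = [1 if i in (100, 300, 650) else 0 for i in range(1000)]   # misses 2 HOs, 1 false alarm

prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=[1], average=None, zero_division=0)
print(f"accuracy = {accuracy_score(y_true, y_pred):.3f}")          # 0.997, dominated by "no HO"
print(f"precision = {prec[0]:.2f}, recall = {rec[0]:.2f}, F1 = {f1[0]:.2f}")  # 0.67, 0.50, 0.57
```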
Comparative Approaches. We compare Prognos with two recent 5G HO prediction techniques: 1) a Gradient Boosting Classifier (GBC) method used by Mei et al. [49], which uses lower-layer information such as the signal strength of serving and neighboring cells for HO prediction, and 2) a stacked long short-term memory (LSTM) model [57] that predicts HOs by utilizing the location of the mobile device. Unlike these approaches, Prognos does not involve any offline training. Unless otherwise noted, we use 60% of our corpus as the training set for both of these approaches and the remaining 40% as the test set for all prediction methods. In total, our test set comprises more than 3.5 hours of cellular traces. To report the results, we choose a prediction and history window of 1 s for all approaches.
Results. As mentioned in §7.2, the report predictor module enables us to predict a HO before the corresponding MR has been raised. On average, it allows us to predict HOs 931 ms earlier with a slight 1.2% loss in accuracy (see Fig. 18 in Appendix A.4). Table 3 compares the performance of Prognos with the other approaches on D1 and D2. Although the GBC and stacked LSTM models can sometimes achieve high accuracy, their F1-Score is low, highlighting the inefficacy of “blind” machine learning techniques at producing reliable HO predictions. In contrast, our system performs well on all metrics without any offline training. It achieves this by decoupling the HO prediction task into two stages: (i) MR inference and (ii) carrier-specific HO decision logic. We find that this decoupling not only increases our confidence in the model but, more importantly, also improves accuracy by reducing model complexity. Additionally, our system scales well because it not only learns new HO patterns but also removes old (not recently observed) ones. For our datasets D1 and D2, new HO patterns are learned at a rate of 9.1 ± 2.3 per hour, while old HO patterns are evicted at 8.3 ± 3.1 per hour. The eviction process ensures that the number of learned patterns does not grow excessively and that prediction accuracy remains stable.
Prognos Use Cases¶
We demonstrate the usability of Prognos by considering two resource-demanding applications: 16K panoramic VoD and real-time volumetric video streaming. We make minor tweaks to their rate adaptation algorithms to use HO predictions.
Trace Collection. We collect bandwidth traces by saturating the downlink channel of a mobile device while driving. We feed these traces into the Mahimahi network emulation tool [55]. Concurrently, we use XCAL to collect cellular logs (RRS values, MRs, HO commands, etc.). We post-process the collected logs to generate 40+ traces (each spanning 240 seconds) using a sliding window across the data. All traces are collected for OpX and include 5G (Low-Band and mmWave) and LTE (Mid-Band) coverage. To avoid situations where quality level selection is trivial, we only consider traces with an average bandwidth below 400 Mbps (and a minimum bandwidth above 2 Mbps), following the approach used by Mao et al. [48].
Experimental Setup. For 16K panoramic VoD, our evaluation uses a custom 16K panoramic video encoded with H.264/MPEG-4 at 6 quality levels (720p, 1080p, 2K, 4K, 8K, 16K). The video is divided into 60 chunks and has a total length of 120 seconds. We extend the setup outlined by Pensieve [48] to leverage the HO prediction information delivered by Prognos. Real-time volumetric video streaming, on the other hand, uses the ViVo system described earlier in §3. We disable ViVo’s visibility-aware optimizations for a fair comparison and modify its codebase to make it operable with our trace-driven emulation. A 3-minute volumetric video compressed with Draco [19] is encoded at 5 point-cloud density levels (corresponding to bitrates in {43, 77, 110, 140, 170} Mbps).
Modified Rate Adaptation Algorithm. For both applications, we correct the throughput prediction generated by their rate adaptation algorithms. Specifically, we scale the predicted throughput up or down by multiplying it with the ho_score received from Prognos. Our system only intervenes when a HO is expected; we do not change anything in “no HO” situations. For evaluation, we modify 2-3 rate adaptation algorithms for each application; the same approach can be applied to any rate adaptation scheme.
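A minimal sketch of this correction is shown below; the rate-based selection is only a stand-in for the actual fastMPC/robustMPC decision logic, and the bitrate ladder reuses the volumetric-video levels above purely for illustration.

```python
def ho_aware_bitrate(predicted_tput_mbps, ho_score, bitrates_mbps):
    """Correct the ABR module's throughput estimate with Prognos's ho_score and
    pick the highest sustainable bitrate (a simple rate-based stand-in for MPC)."""
    corrected = predicted_tput_mbps * ho_score      # ho_score == 1.0 -> "no HO", no change
    feasible = [b for b in bitrates_mbps if b <= corrected]
    return max(feasible) if feasible else min(bitrates_mbps)

# Example: a predicted degrading HO with ho_score = 0.4 drops the pick from 140 to 43 Mbps.
print(ho_aware_bitrate(150, 1.0, [43, 77, 110, 140, 170]))   # 140
print(ho_aware_bitrate(150, 0.4, [43, 77, 110, 140, 170]))   # 43 (corrected estimate: 60 Mbps)
```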
Next, we demonstrate how HO-aware rate adaptation can improve the QoE of both applications. We evaluate three types of algorithms: (i) the original rate adaptation algorithms, such as fastMPC, (ii) algorithms that use ground-truth HO predictions, such as fastMPC-GT, and (iii) algorithms that use HO predictions generated by Prognos (e.g., fastMPC-PR). The main purpose here is to show the effectiveness of our system; we do not compare the performance of the rate adaptation schemes themselves.
• 16K Panoramic VoD. Fig. 14a and 14b compare the performance of the original ABR algorithms (rate-based (RB), fastMPC, and robustMPC [48, 67]) with their HO prediction-enhanced versions. There are three key takeaways from these results. First, the throughput prediction accuracy of the original ABR schemes degrades by 37.14%-43.22% on average during HOs; Fig. 14b shows the average throughput prediction error for fastMPC. Second, Prognos improves throughput prediction during HOs by 52.42%-61.29% depending on the ABR scheme (Fig. 14b). Finally, we find that our system can boost the QoE for all ABR schemes and mobility traces. As shown in Fig. 14a, Prognos reduces stalls by 34.6%-58.6% and increases video quality by 1.72% on average. In absolute terms, the QoE is within 0.05%-0.10% of the ground truth for stalls and 0.60%-0.99% for video quality.
• Real-time Volumetric Video Streaming. We evaluate the performance of ViVo [40] and FESTIVE [41] against the modified algorithms that use HO prediction. In Fig. 14c, we only plot the improvement brought by HO-aware (ground-truth and Prognos) rate adaptation algorithms when compared to the original rate adaptation algorithms. The improvement is shown for two metrics: video bitrate quality and stall time. Fig. 14c indicates that Prognos improves video quality by 15.1%-36.2% while also reducing stall time by 0.24%-3.67%. The QoE improvement, in absolute terms, is within 0.01%-0.25% of the ground-truth for stall time and 0.39%-2.49% for video quality.
In summary, the evaluation shows the effectiveness of our system in improving the QoE for two applications with different workloads. Additionally, we employ the same technique to improve throughput prediction for both applications.