Microbenchmarks¶
Number of Final Images ≪ Number of Collected Images¶
Table 3 quantifies the proportion of latency-sensitive images across different days. For the California forest fire application, only 0.36% of the images pass the geographic filter (California) and only 0.26% pass the forest filter; the number of actual fire images is just 243. Therefore, in Serval, the satellite needs to analyze at most 26k images and select 243 latency-sensitive images to download. For the vessels-in-ports application, we find 1,769 images containing vessels at ports. These small final image counts indicate the potential latency benefit of transferring only a small (but just right!) set of images from the satellite down to the cloud.
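As a sanity check on these figures, a short sketch reproduces the filter-cascade arithmetic. The 10M total-collection size is an assumption inferred from the stated percentages (0.26% of 10M is 26k, matching the "at most 26k images" figure), not a number taken from Table 3.

```python
# Filter-cascade reduction for the ten-day California forest-fire workload.
# TOTAL_IMAGES is a hypothetical figure chosen so the stated percentages
# reproduce the reported counts; percentages and the 243 final images are
# from the text.
TOTAL_IMAGES = 10_000_000                  # assumed collection size

geo_pass = int(TOTAL_IMAGES * 0.0036)      # 0.36% pass the geographic filter
forest_pass = int(TOTAL_IMAGES * 0.0026)   # 0.26% pass the forest filter
fire_images = 243                          # latency-sensitive images to downlink

print(f"geographic filter: {geo_pass:,} images")
print(f"forest filter:     {forest_pass:,} images")
print(f"fire images:       {fire_images} ({fire_images / TOTAL_IMAGES:.6%} of total)")
```

Under these assumptions the forest filter leaves roughly 26k images for on-board analysis, consistent with the count above.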
Preemptive Compute at Ground Station¶
We estimate the burden on the ground station of running the required preemptive computation by profiling it on a large cluster of compute nodes. The results show that on one data-center-grade GPU, such as an NVIDIA Tesla K80, the forest detector takes an average of 4.39s per satellite image. Running the forest detector on all California images across ten days therefore takes no more than 24 GPU-hours. This cost is distributed across multiple ground stations (at least 12 in our setup), and given the stations' far greater power budget, this is preferable to running the same compute on the satellites. Further, because such data is glacial (it changes slowly), this O(hours) station runtime suffices to update the satellite's glacial filters and auxiliary information once a day.
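A back-of-the-envelope check of this estimate can be expressed directly: given the measured per-image latency, how many GPU-hours does a batch take, and what is each station's share? The image count below is a hypothetical round figure consistent with "no more than 24 GPU-hours"; only the 4.39s per-image latency and the 12-station count come from the text.

```python
# Estimate total GPU-hours and per-station share for the ground-station
# preemptive computation. NUM_IMAGES is an assumed ten-day California
# image count; the per-image latency (Tesla K80) and station count are
# from the measurements reported above.
PER_IMAGE_S = 4.39      # forest-detector latency per image on a K80 (s)
NUM_IMAGES = 19_000     # hypothetical ten-day California image count
NUM_STATIONS = 12       # ground stations sharing the work

gpu_hours = NUM_IMAGES * PER_IMAGE_S / 3600
per_station_hours = gpu_hours / NUM_STATIONS
print(f"total: {gpu_hours:.1f} GPU-hours, per station: {per_station_hours:.1f} h")
```

With these numbers the total stays under 24 GPU-hours, and each station's daily share is on the order of a couple of hours, well within an O(hours) budget.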
Hardware Emulation¶
We tested the system load of a typical satellite under Serval via hardware emulation. We emulate the satellite with a Raspberry Pi serving as the onboard control system, connected to a Jetson Orin serving as the computation system. The Raspberry Pi "captures" images and sends them to the Jetson; after receiving the images, the Jetson runs Serval's execution engine (Section 3) on them. To emulate the satellite-ground station link, the Raspberry Pi sends images over a TCP connection to a dedicated server. We emulated the period from 17:30 to 18:30 on July 19, 2021, when satellite "103b" flies over California and performs on-board computation on the interesting images.
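A minimal sketch of the emulated image link follows: the sender side streams image bytes over a TCP connection to the machine standing in for the ground station. The length-prefixed framing and helper names are assumptions for illustration, not Serval's actual wire protocol.

```python
# Length-prefixed image framing over a TCP socket, as one simple way to
# emulate the satellite-to-ground downlink. The 4-byte big-endian length
# header is an assumed framing choice.
import socket
import struct

def send_image(sock: socket.socket, data: bytes) -> None:
    # Prefix each image with its length so the receiver knows frame bounds.
    sock.sendall(struct.pack("!I", len(data)) + data)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    # recv() may return fewer bytes than requested; loop until we have n.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("link closed mid-frame")
        buf += chunk
    return buf

def recv_image(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)
```

In the emulation described above, the Raspberry Pi would play the sender role and the dedicated server the receiver role.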
Figure 6 shows the variation of system load over time. We observe that before 18:05, the Jetson periodically wakes up to perform heartbeat communication with the controller (the Raspberry Pi) and consumes little energy. When the satellite starts to pass over California, we see a significant increase in every type of resource usage on the Jetson, especially GPU usage, because the applications run neural networks on the GPU. After the satellite passes California, CPU utilization, GPU utilization, and power consumption drop back to their baseline levels. We use the numbers obtained from these emulations to benchmark the different filters in our simulated at-scale evaluation below.
We trained different deep learning models (as described above) to perform cloud detection, forest detection, vessel detection, and forest fire detection. Table 4 presents the average runtime of each filter on the Jetson device; specifically, we report the time taken to classify one image with each filter.
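Per-filter runtimes of this kind are typically measured by timing a single-image forward pass, averaged over repeated runs after a warm-up phase. The sketch below shows one such measurement harness; the stand-in classifier is hypothetical, not one of Serval's filters, and the warm-up/run counts are assumed defaults.

```python
# Average single-image inference time for a classifier, with warm-up runs
# to exclude one-time costs (model load, caches, GPU initialization).
import time

def avg_inference_time(classify, image, runs: int = 50, warmup: int = 5) -> float:
    for _ in range(warmup):
        classify(image)              # warm-up iterations, not timed
    start = time.perf_counter()
    for _ in range(runs):
        classify(image)
    return (time.perf_counter() - start) / runs

# Example with a trivial stand-in "filter":
dummy_filter = lambda img: sum(img) % 2
t = avg_inference_time(dummy_filter, [1, 2, 3])
print(f"avg: {t * 1e6:.2f} microseconds per image")
```

Running the same harness with each trained filter in place of `dummy_filter` would reproduce the per-filter, per-image numbers reported in Table 4.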