Distrinet: A Mininet Implementation for the Cloud¶
(1) Introduction & Motivation
- Background:
  - Network emulation is a key way to evaluate SDN (Software-Defined Networking) solutions, and Mininet is the most popular emulator
  - Mininet emulates a network on a single machine using lightweight virtualization (processes + namespaces)
- Problem:
  - Mininet was designed to run on a single machine. When an experiment needs more CPU or memory than one physical machine can provide, the results may be inaccurate, or the experiment may not run at all
- Goal:
  - Extend Mininet to multiple physical hosts (e.g., a Linux cluster or the Amazon EC2 cloud) while staying compatible with the Mininet API
(2) Core contributions of Distrinet
- Distributed architecture: virtual nodes can be spread across multiple hosts, with LXD/LXC containers providing stronger isolation than Mininet
- Compatibility: fully compatible with the Mininet API, so existing Mininet Python scripts run on Distrinet with minimal (or no) changes
- Cloud support: can automatically provision the experimental environment on cloud platforms such as Amazon EC2
- Functionality: compared with other distributed solutions (e.g., Maxinet or Mininet Cluster Edition), Distrinet handles link bandwidth limits (traffic control) more completely and supports running complex VNFs (Virtual Network Functions)
(3) Architecture
The Distrinet design has the following key parts:
- Physical architecture:
  - Client: runs the experiment script and decides where virtual nodes are placed
  - Master: relay node that connects the Client to all Workers over SSH
  - Worker(s): physical hosts that actually run the virtual nodes (vHosts and vSwitches)
- Virtualization:
  - Node isolation: instead of Mininet's lightweight namespaces, Distrinet emulates nodes with LXC containers, which provides a consistent software environment across heterogeneous physical machines
  - Control plane: SSH replaces local system calls (e.g., Popen) to control remote nodes, while local pseudo-terminals (PTYs) preserve compatibility with Mininet's interaction model
  - Network connectivity:
    - Cross-host virtual links are implemented with VXLAN tunnels carrying L2 traffic
    - Bandwidth limits use Linux Traffic Control (tc)
- Deployment tooling: Ansible automatically configures the infrastructure (installing LXD, OVS, etc.)
(4) Evaluation
The paper compares Distrinet with Mininet, Maxinet, and Mininet CE and draws the following conclusions:
- Startup overhead:
  - Because of the LXC containers, node creation and topology setup are slower than in native Mininet (e.g., building a Fat Tree k=4 topology takes 2.6 s with Mininet vs. 73.7 s with Distrinet)
  - For long-running experiments, this one-off initialization cost is negligible
- Network performance: throughput in Distrinet is very close to native Mininet; the distributed architecture introduces no noticeable bottleneck
- Resource-heavy workloads:
  - In a Hadoop benchmark, single-machine Mininet shows unstable execution times when resources are insufficient
  - Distrinet scales compute performance roughly linearly by adding physical hosts, successfully supporting resource-intensive experiments
Introduction¶
Modern networks have become so complex and implementation-dependent that it is now impossible to rely solely on models or simulations to study them. On the one hand, models are particularly useful to determine the limits of a system, potentially at very large scale, or to reason in an abstract way in order to conceive efficient networks. On the other hand, simulations are handy to study the general behavior of a network or to gain high confidence in the applicability of new concepts. However, these methods do not faithfully account for implementation details. To this end, emulation is more and more used to evaluate new networking ideas. The advantage of emulation is that the exact same code as in production can be used and tested in rather realistic cases, helping to understand fine-grained interactions between software and hardware. However, emulation is not reality, and it often has to deal with scalability issues for large and resource-intensive experiments.
When it comes to Software Defined Networking (SDN), Mininet [13] is by far the most popular emulator. The success of Mininet comes from its ability to emulate potentially large networks on one machine, thanks to lightweight virtualization techniques and a simple yet powerful API. Mininet was designed to run on one single machine, which can be a limiting factor for experiments with heavy memory or processing capacity needs. A solution to tackle this issue is to distribute the emulation over multiple machines. However, as Mininet was designed to run on a single machine, it assumes that all resources are shared and directly accessible from each component of an experiment. Unfortunately, when multiple machines are used to run an experiment, this assumption does not hold anymore and the way Mininet is implemented has to be revised. In this paper, we present Distrinet [11], an extension of Mininet implemented to allow distributed Mininet experiments to leverage resources of multiple machines when needed.
Challenge. Mininet is used by a large community ranging from students to researchers and network professionals. This success of Mininet comes from the simplicity of the tool: it can work directly on a laptop and its installation is trivial. The challenge is to extend Mininet in such a way that these conditions still hold, while being distributed over multiple machines. Distrinet allows applications to run in isolated environments by using LXC to emulate virtual nodes and switches, avoiding the burden of virtual machine hypervisors. Distrinet also creates virtual links with bandwidth limits without any effort from the user.
Contributions. Mininet programs can be reused with minimal or even no changes in Distrinet, but with a higher degree of confidence in the results in the case of resource-intensive experiments. Our main contributions to reach this objective can be summarized as follows.
• Compatibility with Mininet. Mininet experiments are compatible with Distrinet, either through the Mininet API or through the Mininet Command Line Interface (i.e., mn).
• Architecture. Distrinet is compatible with a large variety of infrastructures: it can be installed on a single computer, a Linux cluster, or the Amazon EC2 cloud. Distrinet relies on prominent open source projects (e.g., Ansible and LXD) to set up the physical environment and guarantee isolation.
• Comparison with other tools and link bandwidth. Comparisons with Mininet Cluster Edition [17] and Maxinet [19] show that our tool handles link bandwidth, a fundamental building block of network emulation, more efficiently.
• Flexibility. Thanks to the usage of LXC, Distrinet can run VNFs or generic containers on the emulated topology composed of virtual switches and hosts. Each virtual node is properly isolated and the virtual links can be capped natively.
In this paper, we first discuss the advantages and limitations of existing emulation tools in Sec. 2. We present the architecture of Distrinet in Sec. 3 and how it is integrated in an environment in Sec. 3.2. We then evaluate Distrinet by comparing the emulation results using Mininet, Maxinet, Mininet Cluster Edition and Distrinet in Sec. 4. Last, we discuss the current and future work on Distrinet in Sec. 5 and conclude in Sec. 6.
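To make the compatibility claim concrete, the following is an ordinary Mininet script (two hosts behind one switch with 10 Mbit/s TCLink caps). According to the paper, such a script should run on Distrinet with minimal or no changes; how the Distrinet backend is selected (e.g., options passed to mn or a different entry point) is not shown in this excerpt, so treat this only as a plain Mininet sketch.

```python
#!/usr/bin/env python3
"""A plain Mininet experiment: two hosts behind one switch, 10 Mbit/s links."""
from mininet.net import Mininet
from mininet.topo import Topo
from mininet.link import TCLink
from mininet.log import setLogLevel


class PairTopo(Topo):
    def build(self):
        s1 = self.addSwitch('s1')
        h1 = self.addHost('h1')
        h2 = self.addHost('h2')
        # bw is in Mbit/s and is enforced through Linux tc (TCLink)
        self.addLink(h1, s1, bw=10)
        self.addLink(h2, s1, bw=10)


if __name__ == '__main__':
    setLogLevel('info')
    net = Mininet(topo=PairTopo(), link=TCLink)
    net.start()
    net.pingAll()          # connectivity check between h1 and h2
    net.stop()
```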
Related Work¶
Emulation makes it possible to test the performance of real applications over a virtual network. A first, frequently used tool to emulate networks is the Open vSwitch (OVS) software switch [5]. To build a virtual network, virtual switches (vSwitches) can be connected with virtual interfaces, through GRE or VXLAN tunnels. To emulate virtual hosts (vHosts), one can use containerization tools (e.g., LXC [14] or Docker [15]) or full virtualization tools (e.g., Virtual Box [18]). Graphical Network Simulator-3 (GNS3) [12] is a software tool emulating routers and switches in order to create virtual networks with a GUI. It can be used to emulate Cisco routers and supports a variety of virtualization tools such as QEMU, KVM, and Virtual Box to emulate the vHosts. Mininet [13] is the most common Software Defined Networking (SDN) emulator. It can emulate an SDN network composed of hundreds of vHosts and vSwitches on a single host. Mininet is easy to use and its installation is trivial. As we show in Sec. 4, it is possible to create a network with dozens of vSwitches and vHosts in just a few seconds. Mininet is very efficient at emulating network topologies as long as the resources required for the experiments do not exceed those that a single machine can offer. If the physical resources are exceeded, the results might not be aligned with those of a real scenario.
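As a small illustration of that building block, the snippet below creates an OVS bridge and attaches a VXLAN port pointing at a peer host. The bridge name and peer address are placeholders, and Open vSwitch must already be installed on both machines; this is a generic OVS example, not code from any of the tools discussed here.

```python
import subprocess


def sh(cmd):
    """Run a host-level command, failing loudly on error."""
    subprocess.run(cmd, shell=True, check=True)


PEER = '192.168.56.102'   # hypothetical address of the other physical host

sh('ovs-vsctl add-br br0')
sh(f'ovs-vsctl add-port br0 vx0 -- set interface vx0 type=vxlan options:remote_ip={PEER}')
# A vHost (container or VM) attached to br0 on each machine now shares an L2
# segment with its peer across the tunnel.
```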
The tools closest to ours are Maxinet [19] and Mininet Cluster Edition (Mininet CE [17]). They make it possible to distribute Mininet over a cluster of nodes. Maxinet creates different Mininet instances on each physical node of the cluster, and connects the vSwitches of different physical hosts with GRE tunnels. Mininet CE directly extends Mininet in order to distribute the vNodes and the vLinks over a set of machines via GRE or SSH tunnels. Containernet [16] extends Mininet to support Docker containers. By default, it is not able to distribute the emulation over different nodes, but it can be combined with Maxinet or Mininet CE to support such an option and provide better vNode isolation. While the Maxinet approach makes it possible to increase the scalability of Mininet and offers a speed-up in terms of virtual network creation time for certain topologies, its main drawback is that it is not directly compatible with Mininet. Moreover, even though it is straightforward to set up networks with unlimited vLinks (i.e., vLinks without explicit bandwidth limit or delay), Maxinet does not fully support limited vLinks (i.e., vLinks with explicit bandwidth limits or delay). The Mininet CE approach offers full compatibility with Mininet, but like Maxinet, it has limitations when it comes to emulating vLinks with limited bandwidth or delay: it is not possible to add limitations on a vLink connecting two vNodes located on different physical machines [4]. We believe that the automatic cloud provisioning offered by Distrinet, its flexibility, and its compatibility with Mininet give our tool an important added value, as Mininet is by far the most used tool to emulate SDN networks. Table 1 summarises the main differences between the tools.
This section reviews existing tools for network emulation, from general virtualization building blocks to single-machine SDN emulation and distributed solutions, and highlights the limitations of the existing distributed tools and the advantages of Distrinet.
(1) General network emulation and virtualization building blocks
Network emulation allows real applications to be tested over a virtual network. Building a virtual network typically relies on the following components and tools:
- Virtual switches (vSwitches):
  - Open vSwitch (OVS) [5]: the most widely used software switch; vSwitches are connected through virtual interfaces and GRE or VXLAN tunnels
- Virtual host (vHost) emulation:
  - Containerization tools such as LXC [14] or Docker [15]
  - Full virtualization tools such as Virtual Box [18]
- Graphical network emulators:
  - GNS3 [12]: provides a GUI to emulate routers (e.g., Cisco) and switches
  - Supports several backends (QEMU, KVM, Virtual Box) to emulate the vHosts
(2) The de facto standard for SDN emulation: Mininet
Mininet [13] is the most widely used SDN emulator:
- Capability: emulates an SDN network of hundreds of vHosts and vSwitches on a single host
- Strengths:
  - Easy to use, trivial to install
  - Very fast topology creation (dozens of nodes in a few seconds)
- Limitation:
  - Bound to the resources of a single machine: once an experiment needs more memory/CPU than that machine offers, the results may no longer reflect a real scenario and cannot be trusted
(3) Distributed SDN emulation (extensions of Mininet)
To overcome Mininet's single-machine resource limit, tools that distribute the emulation over a cluster have appeared, mainly Maxinet and Mininet Cluster Edition (Mininet CE), plus Containernet:
- Maxinet [19]:
  - Mechanism: creates a separate Mininet instance on each physical node of the cluster and connects vSwitches on different hosts with GRE tunnels
  - Pros: better scalability; faster topology creation in some cases
  - Cons:
    - Not directly compatible with the Mininet API
    - Poor support for limited links: unlimited links work, but vLinks with explicit bandwidth or delay limits are not fully supported
- Mininet Cluster Edition (Mininet CE) [17]:
  - Mechanism: directly extends Mininet, distributing vNodes and vLinks over GRE or SSH tunnels
  - Pros: fully compatible with Mininet
  - Cons:
    - Link-limit restriction: bandwidth or delay limits cannot be added on a vLink connecting two vNodes on different physical machines [4]
- Containernet [16]:
  - Mechanism: extends Mininet to support Docker containers
  - Distribution: not distributed by default, but can be combined with Maxinet or Mininet CE for multi-node support and better isolation
(4) Distrinet's added value
Compared with the tools above, Distrinet offers a more complete solution:
- Automation: automatic cloud provisioning
- Flexibility: a flexible architecture
- Compatibility: remains highly compatible with Mininet
- Key advantage: solves the pain point of Maxinet and Mininet CE with limited vLinks (bandwidth/delay-constrained links)
| Tool | Scope | Mechanism | Pros | Cons / limitations |
|---|---|---|---|---|
| Mininet | Single host | Lightweight virtualization, single-machine processes | Simple installation, easy to use, very fast topology creation | Bound to one machine's resources; large-scale results unreliable |
| Maxinet | Distributed cluster | Multiple Mininet instances + GRE tunnels | Good scalability, fast in some scenarios | Not Mininet-API compatible; no support for limited vLinks (bandwidth/delay) |
| Mininet CE | Distributed cluster | Extends Mininet + GRE/SSH tunnels | Fully compatible with Mininet | Cannot add link limits on vLinks crossing physical machines |
| Containernet | Single host (can be distributed) | Mininet + Docker containers | Docker ecosystem support | Needs other tools to become distributed |
| Distrinet | Distributed / cloud | LXC + VXLAN + Ansible | Automatic cloud deployment, Mininet compatible, native support for limited links | (the paper mainly stresses how it addresses the drawbacks above) |

Architecture¶
Four key elements have to be considered in order to distribute Mininet experiments over multiple hosts. First, emulated nodes must be isolated to ensure the correctness of the experiments even when the hosts supporting the experiments are heterogeneous. To obtain these guarantees, virtualization techniques (full or container-based) have to be employed. Similarly, traffic encapsulation is needed so that the network of the experiment can run on any type of infrastructure. To start and manage experiments, an experimentation control plane is necessary; this control plane manages all the emulated nodes and links of the experiment, regardless of where they are physically hosted. Finally, while the deployment of an experiment in Mininet is sequential and generally does not severely affect the overall experiment time, a distributed experiment requires parallelization, as node deployment can be slow because data may have to be moved over the network.
3.1 Multi-host Mininet implementation¶
In Mininet, network nodes are emulated as user-level processes isolated from each other by means of lightweight virtualization. More precisely, a network node in Mininet is a shell subprocess spawned in a pseudo-tty and isolated from the rest by means of Linux cgroups and network namespaces. Interactions between Mininet and the emulated nodes are then performed by writing bash commands to the standard input of the subprocess and reading the content of the standard output and error of that process. As Mininet runs on a single machine, every emulated node benefits from the same software and hardware environment (i.e., the one of the experimental host).

This approach has proven to be adequate for single-machine experiments but cannot be directly applied when experiments are distributed, as it would push too much burden onto preparing the different hosts involved in the experiments. As a consequence, we kept the principle of running a shell process, but instead of isolating it with cgroups and network namespaces, we isolate it within an LXC container [14]. Ultimately, LXC realizes isolation in the same way as kernel cgroups and namespaces, but it provides an effective tool suite to set up any desired software environment within the container simply by providing the desired image when launching the container. In this way, even when the machines used to run an experiment are set up differently, as long as they have LXC installed, it is possible to create identical software environments for all the network nodes. In Distrinet, to start a network node, we first launch an LXC container and create a shell subprocess in that container.

As Mininet runs on a single machine, the experiment orchestrator and the actual emulated nodes run on the same machine, which makes it possible to directly read and write the file descriptors of the bash process of the network nodes to control them. In Distrinet, the node where the experiment orchestration is performed can be separate from the hosts where the network nodes are hosted, meaning that directly creating a process and interacting with its standard I/Os is not as straightforward as in Mininet. Indeed, Mininet uses the standard Popen Python class to create the bash process at the basis of network nodes. Unfortunately, Popen is a low-level call in Python that is limited to launching processes on the local machine. In our case, we therefore have to rely on another mechanism. As we are dealing with remote machines and want to minimize the required software on the hosts involved in experiments, we use SSH as the means of interaction between the orchestrator and the different hosts and network nodes. SSH is used to launch containers and, once a container has been launched, we connect directly through SSH to the container and create shell processes via SSH calls. In parallel, we open pseudo-terminals (PTYs) locally on the experiment orchestrator, one per network node, and attach the standard input and outputs of the created remote processes to the local PTYs. As a result, the orchestrator can interact with the virtual nodes in the very same way as Mininet does, by reading and writing the file descriptors of the network nodes' PTYs. This solution may look cumbersome and suboptimal, but it maximizes Mininet code reuse and ultimately guarantees compatibility with Mininet.
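A minimal sketch of this local-PTY-to-remote-shell wiring, under simplifying assumptions: the worker address is hypothetical, a plain ssh subprocess stands in for Distrinet's SSH handling, and the LXC container setup is omitted. The point is only that the orchestrator keeps using select()/read()/write() on a local file descriptor, as Mininet does.

```python
import os
import pty
import select
import subprocess

# Local pseudo-terminal: Mininet-style code keeps using select()/read()/write()
# on master_fd, exactly as it would for a local shell.
master_fd, slave_fd = pty.openpty()

# Hypothetical worker/container address; in Distrinet the shell would be
# started inside an LXC container reachable over the admin network.
proc = subprocess.Popen(
    ['ssh', '-tt', 'root@10.100.0.11', 'bash', '--norc', '-i'],
    stdin=slave_fd, stdout=slave_fd, stderr=slave_fd, close_fds=True)

os.write(master_fd, b'hostname\n')               # send a command to the remote shell
while select.select([master_fd], [], [], 3)[0]:  # read until the output goes quiet
    print(os.read(master_fd, 1024).decode(errors='replace'), end='')

proc.terminate()
```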
Indeed, Mininet heavily relies on the possibility to read and write, via file descriptors, the standard input and outputs of the shell processes emulating the virtual nodes, and it massively uses select and poll, which are low-level Linux calls for local files and processes. Therefore, providing local file descriptors for the standard input and outputs of remote processes allowed us to reuse Mininet code directly: the only change needed was in the creation of the shell process (i.e., using an SSH process creation instead of Popen), with no impact on the rest of the Mininet implementation. Solutions that do not offer compatibility with these low-level Linux calls to interact with the remote shell would require re-implementing most of the Node classes of Mininet.

In Mininet, network nodes and links are created sequentially. The sequential approach is not an issue in Mininet, where interactions are virtually instantaneous. However, a sequential approach is not appropriate in Distrinet, since nodes are deployed from LXC images and every interaction with a node is subject to network delays. For this reason, in Distrinet the node deployment and setup calls are made concurrent with the Asynchronous I/O library of Python 3. However, as compatibility with Mininet is a fundamental design choice, by default all calls are kept sequential, and we added an optional flag parameter to specify that the execution should run in concurrent mode. When the flag is set, the method launches the commands it is supposed to run and returns without waiting for them to terminate. The programmer then has to check whether the command has actually finished when needed. To help with this, we have added a companion method to each method that has been adapted to be potentially non-blocking. The role of the companion method is to block until the command calls made by the former are finished. This allows one to start a batch of long-lasting commands (e.g., startShell) at once, then wait for all of them to finish. We have chosen this approach instead of relying on callback functions or multi-threaded operations in order to keep the structure of the Mininet core implementation.

To implement network links, Mininet uses virtual Ethernet interfaces, and the traffic is contained within the virtual links thanks to network namespaces. When experiments are distributed, links may have to connect nodes located on different hosts, hence an additional mechanism is required. In Distrinet, we implement virtual links using VXLAN tunnels (a prototype version with GRE tunnels also exists). The choice of VXLAN is guided by the need to transport L2 traffic over the virtual links. In particular, we cannot rely on the default connection option provided directly with LXD. Indeed, the latter uses either multicast VXLAN tunnels or Fan networking [8] to interconnect containers hosted on different machines. However, cloud platforms such as Amazon EC2 do not allow the usage of multicast addresses, and in some scenarios a single physical machine may have to host hundreds of containers. In Distrinet, each link is implemented with a unicast VXLAN tunnel having its own virtual identifier. Also, since we are compatible with Mininet, to limit the capacity of the links we simply use the Mininet implementation, which relies on Linux Traffic Control (tc).

SSH is used to send commands and retrieve data from the network nodes in the experiments, and each virtual node is reachable with an IP address. To do so, a bridge, called the admin bridge, is set up on every machine that hosts emulated nodes. An interface, called the admin interface, is also created on each node, bridged to the admin bridge, and assigned a unique IP address picked from the same subnet. All these admin bridges are connected to the admin bridge of the master node. The machine running the script is then connected through an SSH tunnel to the master host and can then directly access any machine connected to the admin bridge. The general architecture of Distrinet is presented in Fig. 1.
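To make the link mechanism concrete, here is a minimal sketch of what one unicast VXLAN tunnel with a tc bandwidth cap looks like at the Linux level. The endpoint addresses, the VNI, and the 10 Mbit/s rate are assumptions chosen for illustration; Distrinet generates the equivalent configuration automatically.

```python
import subprocess


def sh(cmd):
    """Run a host-level command, failing loudly on error."""
    subprocess.run(cmd, shell=True, check=True)


# Hypothetical tunnel endpoints and VNI; one unicast VXLAN tunnel per virtual
# link, each with its own identifier, as described above.
LOCAL, REMOTE, VNI = '192.168.56.101', '192.168.56.102', 42

sh(f'ip link add vx{VNI} type vxlan id {VNI} local {LOCAL} remote {REMOTE} dstport 4789')
sh(f'ip link set vx{VNI} up')

# Bandwidth cap in the spirit of Mininet's TCLink: an HTB qdisc limiting the
# link to 10 Mbit/s (the rate is an arbitrary example).
sh(f'tc qdisc add dev vx{VNI} root handle 1: htb default 1')
sh(f'tc class add dev vx{VNI} parent 1: classid 1:1 htb rate 10mbit')
```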
Key implementation mechanisms:

- Virtualization (LXC instead of bare namespaces):
  - Mininet's lightweight namespaces are replaced with LXC containers
  - Goal: an identical software environment for every virtual node, even on differently configured physical machines
- Control and interaction (SSH + local PTYs):
  - Challenge: Mininet controls nodes through local file descriptors (Popen), which does not work in a distributed setting
  - Approach: connect to the remote containers over SSH, while creating local pseudo-terminals (PTYs) on the orchestrator and attaching the remote streams to them
  - Benefit: by exposing local file descriptors, Mininet's core code (e.g., its select/poll calls) is reused without rewriting the low-level logic
- Network links (VXLAN):
  - Cross-host virtual links are unicast VXLAN tunnels carrying L2 traffic
  - Unicast is used for public-cloud compatibility (e.g., AWS), where multicast is typically unavailable
  - Bandwidth limits reuse Mininet's tc (Traffic Control) mechanism
- Concurrent deployment (async I/O):
  - Python 3's asynchronous I/O is used to deploy nodes in parallel, mitigating slow LXC image deployment
  - "Companion methods" block until a batch of non-blocking calls has finished, preserving Mininet's sequential API by default (see the sketch after this list)
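The sketch below mimics this batch-then-wait pattern with Python 3's asyncio directly; the node names and the start_shell coroutine are placeholders, and Distrinet's actual non-blocking methods and their companion methods live in its Node classes rather than in free functions like these.

```python
import asyncio


async def start_shell(node):
    """Placeholder for a long-lasting per-node call (e.g., launching an LXC
    container and opening its shell over SSH)."""
    await asyncio.sleep(1)            # stands in for network and image-deployment delays
    return f'{node}: shell ready'


async def deploy(nodes):
    # Non-blocking phase: fire every long-lasting call at once.
    tasks = [asyncio.create_task(start_shell(n)) for n in nodes]
    # "Companion" phase: block until all of them have finished.
    return await asyncio.gather(*tasks)


if __name__ == '__main__':
    print(asyncio.run(deploy(['h1', 'h2', 's1'])))
```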
3.2 Infrastructure provisioning¶
Distrinet provides an infrastructure provisioning mechanism that uses Ansible to automatically install and configure LXD and SSH on each machine to be used during the experiment. If the experimental infrastructure is Amazon EC2, Distrinet first instantiates a Virtual Private Cloud (VPC) configured as depicted in Fig. 2 in which the virtual instances running the experiment will be deployed. A NAT gateway is automatically created to provide Internet access to the Worker host. Access to the Worker nodes from the experimenter machine is ensured by a Master node acting as an SSH relay. The deployment on Amazon EC2 only requires an active Amazon AWS account. The Distrinet environment (cloud or physical) includes the three following entities as shown in Fig. 1:
• Client: host in which the Distrinet script is running and which decides where to place the vNodes across the physical infrastructure (round-robin by default). The Client must be able to connect via SSH to the Master host.
• Master: host that acts as a relay to interconnect the Client with all the Worker hosts. It communicates with the Client and the different Workers via SSH. Note that the Master can also be configured as a Worker.
• Worker(s): host(s) where all the vNodes (vSwitches and vHosts) are running. vNodes are managed by the Master and the Client, via the admin network.
Distrinet can then automatically install the remaining requirements. In particular, it installs and configures LXD/LXC and OpenVSwitch in the Master and Worker hosts. After that, Distrinet downloads two images: an Ubuntu:18.04 image to emulate the vHosts, and a modified version of that image with OVS installed in order to save time during the configuration process. A default configuration setup is provided, but the user – by following the tutorial we provide [2] – can easily create a personalized image and distribute it in the environment using Ansible from the Master Node. After the configuration step, the user can start the emulation from the Distrinet Client.
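For intuition about what "instantiating a VPC" on EC2 involves, here is a minimal boto3 sketch of a VPC with a subnet, an Internet gateway, and a NAT gateway. It is not Distrinet's provisioning code (Distrinet drives this automatically), the region and CIDRs are assumptions, and route tables, security groups, and the Master/Worker instances themselves are omitted.

```python
import boto3

# Hypothetical region and CIDRs; not Distrinet's provisioning code.
ec2 = boto3.client('ec2', region_name='us-east-1')

vpc_id = ec2.create_vpc(CidrBlock='10.0.0.0/16')['Vpc']['VpcId']
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock='10.0.0.0/24')['Subnet']['SubnetId']

# Internet gateway so the NAT gateway (and hence the Workers) can reach the Internet.
igw_id = ec2.create_internet_gateway()['InternetGateway']['InternetGatewayId']
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# NAT gateway backed by an Elastic IP.
eip = ec2.allocate_address(Domain='vpc')
nat = ec2.create_nat_gateway(SubnetId=subnet_id, AllocationId=eip['AllocationId'])
print('NAT gateway:', nat['NatGateway']['NatGatewayId'])
```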

- Automated provisioning:
  - Ansible automatically configures LXD, SSH, and OVS on each host
  - On AWS, a VPC and a NAT gateway are created automatically
- Three entities:
  - Client: runs the experiment script and decides vNode placement (round-robin by default)
  - Master: SSH relay/gateway connecting the Client to all Workers and carrying the admin network
  - Worker(s): the physical hosts that actually run the vNodes (vHosts/vSwitches)
- Admin network: admin bridges plus an SSH tunnel give the control plane direct access to every host (a minimal sketch of the idea follows)
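A rough sketch of the admin-network idea from Sec. 3.1/Fig. 1, under stated assumptions: a network namespace stands in for an LXC container, and the interface names, bridge name, and 10.100.0.0/16 subnet are made up for illustration.

```python
import subprocess


def sh(cmd):
    """Run a host-level command, failing loudly on error."""
    subprocess.run(cmd, shell=True, check=True)


# Hypothetical names and addresses. Each node gets an extra "admin" interface,
# bridged to a per-host admin bridge and numbered from a shared subnet.
sh('ip link add admin-br type bridge && ip link set admin-br up')
sh('ip netns add vhost1')
sh('ip link add admin-v1 type veth peer name eth-admin netns vhost1')
sh('ip link set admin-v1 master admin-br && ip link set admin-v1 up')
sh('ip -n vhost1 addr add 10.100.0.11/16 dev eth-admin')
sh('ip -n vhost1 link set eth-admin up')
# The per-host admin bridges are then interconnected with the Master's admin
# bridge, and the Client reaches them through an SSH tunnel to the Master.
```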