Date: December 18, 2013
Author: Luigi Rizzo
Recent prior work by this team developed several techniques to achieve very high packet rates, both on bare metal and on virtual machines. Their netmap framework enables line-rate send/receive on a 10 Gbit/s interface with a single core running at about 1 GHz. Using this framework they built a high speed Virtual Ethernet Switch (VALE) that moves up to 20 Mpps, or 70 Gbit/s, through its ports; this work allowed them to identify and remove performance bottlenecks in the network I/O path of QEMU/KVM. The problems identified in that prior work, and the solutions developed, are common to other hypervisors. The current work focuses on the BHyVe hypervisor, applying techniques similar to those previously used. The work covers four main activities:

- extend BHyVe with high performance emulation of ordinary NICs (e.g. e1000);
- improve the network I/O path within BHyVe, adding support for other backends (such as our VALE switch, or NICs in netmap mode; see the sketch after this list) and addressing I/O bottlenecks such as inefficient data mappings, copies, and handoffs between functions or threads;
- investigate the use of multiple I/O threads to speed up and parallelize data transfers between the guest and the host;
- investigate implementing network I/O directly in the kernel of the host OS, similarly to what QEMU does through the vhost mechanism, to reduce latency.
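
To give an idea of what a netmap/VALE backend involves, the following is a minimal sketch in C of how a user-space process could attach to a VALE switch port through the 2013-era netmap API (open /dev/netmap, register with NIOCREGIF, mmap the shared rings). The port name "vale0:vm1" and the helper attach_vale_port() are illustrative assumptions, not bhyve code, and error handling is kept to the essentials.

    /*
     * Sketch only: attaching to a VALE switch port via the netmap API,
     * as a hypervisor network backend could do.
     */
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>
    #include <net/netmap.h>
    #include <net/netmap_user.h>

    static struct netmap_if *
    attach_vale_port(const char *portname, int *fdp)
    {
        struct nmreq req;
        void *mem;
        int fd = open("/dev/netmap", O_RDWR);

        if (fd < 0)
            return NULL;
        memset(&req, 0, sizeof(req));
        req.nr_version = NETMAP_API;
        /* a name of the form "valeX:Y" creates/attaches a switch port */
        strncpy(req.nr_name, portname, sizeof(req.nr_name) - 1);
        if (ioctl(fd, NIOCREGIF, &req) < 0) {
            close(fd);
            return NULL;
        }
        /* map the shared memory holding rings and packet buffers */
        mem = mmap(NULL, req.nr_memsize, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED) {
            close(fd);
            return NULL;
        }
        *fdp = fd;
        /* rings are then reached via NETMAP_TXRING()/NETMAP_RXRING()
         * and synchronized with poll() or NIOCTXSYNC/NIOCRXSYNC */
        return NETMAP_IF(mem, req.nr_offset);
    }

In BHyVe the equivalent logic would live in the device-model backend rather than in a standalone helper like this one; the point of the sketch is only to show that a VALE port is reached through the same registration and memory-mapping path as a NIC in netmap mode.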