In Part 1, we established the foundations: the theory behind NFV and DPDK, and the throughput observed across a series of experiments. In this second part, I will walk through the recommended way to deploy DPDK applications on GCP, using FD.io VPP and TestPMD as the primary examples.
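As a point of reference for that walkthrough, here is a minimal sketch of how such an instance might be created. The instance name, zone, and image are placeholders, not values from this series; depending on the machine family, you may also need Tier_1 networking enabled and an image whose kernel supports gVNIC.

```bash
# Hypothetical example: create an H3 instance with a gVNIC interface,
# which DPDK's gve driver targets. Name, zone, and image are
# placeholders; adjust them to your project.
gcloud compute instances create vpp-h3-test \
  --zone=us-central1-a \
  --machine-type=h3-standard-88 \
  --image-family=ubuntu-2204-lts \
  --image-project=ubuntu-os-cloud \
  --network-interface=network=default,nic-type=GVNIC
```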
Furthermore, I’d like to address a point the previous article left open: what the largest available machine, the `h3-standard-88`, can achieve when FD.io VPP runs with just a single PMD thread (a sketch of such a configuration follows the list below). Part 1 included a similar but distinct experiment, running VPP on a `c3-highcpu-4` instance capped at 10Gbps of traffic. VPP reached that 10Gbps ceiling, but how much headroom remained was unclear. This experiment serves a dual purpose:
- First, 200Gbps / 100Mpps of throughput is currently achievable only on the largest machines. Google may revisit this limit in the future, so knowing the maximum the underlying infrastructure can deliver with minimal resources is essential for making informed decisions.
- Second, it probes how far DPDK and FD.io VPP have been optimized. The first post showed that these technologies are mature and scale well, but their real efficiency only becomes visible when they operate under minimal resource constraints.
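For concreteness, the single-PMD-thread setup mentioned above could be expressed as a VPP startup.conf fragment like the following sketch. The core numbers and PCI address are assumptions and will vary by instance; with one worker core, VPP's DPDK plugin runs exactly one PMD thread.

```
# Hypothetical startup.conf fragment: one main thread plus a single
# worker core, which yields a single DPDK PMD thread.
cpu {
  main-core 0          # main thread pinned to core 0 (assumed layout)
  corelist-workers 1   # one worker core -> one PMD thread
}

dpdk {
  dev 0000:00:04.0 {   # PCI address of the gVNIC device; varies per instance
    num-rx-queues 1    # a single RX queue for the single polling thread
  }
}
```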