VPP NIC multi-queue configuration: the rx no bufs problem
In the VPP configuration file /etc/vpp/startup.conf, enable the NIC's multi-queue feature by specifying the number of receive and transmit queues:
dpdk {
  dev default {
    num-rx-queues 4
    num-tx-queues 4
  }
}
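Each RX queue is polled by one worker thread, so the queue count is usually matched to the number of workers defined in the cpu section of startup.conf. A minimal sketch of such a pairing, assuming four workers pinned to cores 2-5 (the core numbers are illustrative, not taken from this setup):

cpu {
  main-core 1
  corelist-workers 2-5
}

With this layout VPP distributes the four RX queues across vpp_wk_0 through vpp_wk_3, which matches the queue-to-thread mapping shown in the output below.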
During testing, the following was observed: the NIC's rx no bufs error counter was extremely high.
vpp# show hardware-interfaces
GigabitEthernet2/0/0               1     up   GigabitEthernet2/0/0
  Link speed: 10 Gbps
  RX Queues:
    queue thread         mode
    0     vpp_wk_0 (1)   polling
    1     vpp_wk_1 (2)   polling
    2     vpp_wk_2 (3)   polling
    3     vpp_wk_3 (4)   polling
  Intel 82599
    carrier up full duplex mtu 1500
    flags: admin-up pmd maybe-multiseg tx-offload intel-phdr-cksum rx-ip4-cksum int-supported
    Devargs:
    rx: queues 4 (max 128), desc 1024 (min 32 max 4096 align 8)
    tx: queues 4 (max 64), desc 1024 (min 32 max 4096 align 8)
    tx frames ok                                      1228
    tx bytes ok                                      73998
    rx frames ok                                      2639
    rx bytes ok                                     158486
    rx missed                                   1106737634
    rx no bufs                                379872039168
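rx no bufs counts RX descriptors that could not be refilled because the buffer pool was empty, so the counter climbs rapidly once the pool is exhausted. A back-of-the-envelope estimate (a sketch, with the interface count and the per-worker in-flight allowance as assumptions rather than measured values) shows how little headroom the 16384-buffer default leaves once several interfaces each keep 4 x 1024 RX descriptors populated:

#include <stdio.h>

/* Back-of-the-envelope estimate of per-NUMA buffer demand.
 * Queue and descriptor counts come from the "show hardware-interfaces"
 * output above; the interface count and the in-flight allowance are
 * illustrative assumptions, not values measured on this system. */
int main(void)
{
    int rx_queues = 4, rx_desc = 1024;   /* rx: queues 4, desc 1024 */
    int tx_queues = 4, tx_desc = 1024;   /* tx: queues 4, desc 1024 */
    int interfaces = 4;                  /* assumed NICs on this NUMA node */
    int workers = 4;                     /* vpp_wk_0 .. vpp_wk_3 */
    int in_flight = 2048;                /* assumed buffers held per worker inside the graph */

    /* RX rings are kept fully populated; TX rings can hold this many
     * buffers between enqueue and transmit cleanup under load. */
    int rings  = interfaces * (rx_queues * rx_desc + tx_queues * tx_desc);
    int demand = rings + workers * in_flight;

    printf("buffers held by descriptor rings: %d\n", rings);
    printf("estimated peak demand: %d (default pool: 16384)\n", demand);
    return 0;
}

Even a single interface can tie up as many as 8192 buffers in its RX and TX rings alone, leaving only about half of the default pool for packets traveling through the graph.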
Searching online suggests increasing socket-mem or num-mbufs to solve this problem, but current VPP releases no longer support either of these parameters. Looking through startup.conf shows that the problem can instead be addressed with buffers-per-numa: its default value is 16384, and after raising it to 128000 the problem was resolved.
## Increase number of buffers allocated, needed only in scenarios with
## large number of interfaces and worker threads. Value is per numa node.
## Default is 16384 (8192 if running unprivileged)
buffers {
  buffers-per-numa 128000
  default data-size 2048
}
After VPP starts, show buffers reports 128016 buffers in total, of which 97560 are still available and 29845 are in use; the default of 16384 is clearly not enough.
vpp# show buffers
Pool Name        Index  NUMA  Size  Data Size  Total   Avail  Cached  Used
default-numa-0       0     0  2752       2048  128016  97560     611  29845
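The larger pool is not free: show buffers reports a per-buffer Size of 2752 bytes (2048 bytes of packet data plus metadata and headroom), so 128000 buffers cost roughly 336 MiB of buffer memory per NUMA node. A quick check of that arithmetic:

#include <stdio.h>

/* Memory footprint of the enlarged buffer pool, per NUMA node.
 * 2752 is the per-buffer "Size" column from "show buffers" above. */
int main(void)
{
    long buffers = 128000;   /* buffers-per-numa */
    long size    = 2752;     /* bytes per buffer */
    long bytes   = buffers * size;

    printf("%ld bytes (~%.0f MiB) per NUMA node\n",
           bytes, bytes / (1024.0 * 1024.0));
    return 0;
}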
The default buffer counts are defined in the VPP source file src/vlib/buffer.c:
#define VLIB_BUFFER_DEFAULT_BUFFERS_PER_NUMA 16384
#define VLIB_BUFFER_DEFAULT_BUFFERS_PER_NUMA_UNPRIV 8192