=== Configuration ===
UDPTX is BrandMeister's own UDP communication library, used to transmit and receive UDP traffic quickly. It is very important for BrandMeister to spend as little time as possible sending and receiving packets, since that makes transmission (and ultimately the audio) smoother.
At the moment BrandMeister provides several backends (options) for sending outgoing UDP:
This is the standard default backend; it uses Berkeley sockets to send traffic. It tries to send data in non-blocking mode and has a dedicated transmission thread to re-send failed packets.
Passed performance tests transmitting to 5K connections
transmitter = "socket";
This is a fast forwarding backend that uses a RAW (PACKET_MMAP) socket on the Ethernet interface to send traffic. It can save up to 50% of CPU time and has great compatibility.
Passed performance tests transmitting to 20K connections
transmitter = "raw:<interface name>";
transmitter = "raw:eth0";
This is a faster forwarding backend that uses an AF_XDP socket on the Ethernet interface to send traffic and in most cases communicates directly with the Linux network interface driver.
Passed performance tests transmitting to 30K connections
transmitter = "xdp:<interface name>";
transmitter = "xdp:eth0";
This is the fastest forwarding backend; it uses a kernel-bypass NIC driver (DPDK) to send traffic. It saves even more CPU time thanks to direct polled communication with the NIC and the CRC offload features of some NIC models. In some tests we got up to 75% acceleration. A list of supported NIC models can be found here.
Passed performance tests transmitting to 40K connections
transmitter = "Modules/DPDK-edge.so:<reference interface> <EAL parameters> [--] <module parameters>";
transmitter = "Modules/DPDK-edge.so:eth0 -w 0000:af:00.0 --file-prefix bm --lcores '(0-64)@0' -- -c 1 -q 2048 -b 1 -l 4096";
This backend needs elevated privileges; the following systemd override runs the BrandMeister service as root:
# /etc/systemd/system/brandmeister@.service.d/override.conf
[Service]
User=root
All of these parameters are optional and override the default settings of the PMD or the DPDK module:
(-c) --core-ratio <n>    - ratio between NIC queues and DPDK cores
(-q) --queue-size <n>    - set PMD queue size to <n> slots (instead of automatically generated)
(-b) --batch-size <n>    - set maximum batch size to <n> slots (instead of automatically generated)
(-l) --buffer-length <n> - set workers buffer length to <n> slots (instead of default value of 2048)
(-p) --pthresh <n>     |
(-h) --hthresh <n>     | PMD-specific threshold values:
(-w) --wthresh <n>     | https://doc.dpdk.org/guides/prog_guide/poll_mode_drv.html#configuration-of-transmit-queues
(-r) --rs-thresh <n>   |
(-f) --free-thresh <n> |
(-s) --software-crc      - force software CRC calculation
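Applying this to the module-parameter part of the example above (a straightforward reading of the sample values, shown as comments for readability):
# -c 1    -> --core-ratio 1      : one DPDK core per NIC queue
# -q 2048 -> --queue-size 2048   : PMD queue of 2048 slots
# -b 1    -> --batch-size 1      : maximum batch of 1 slot
# -l 4096 -> --buffer-length 4096 : workers buffer of 4096 slots (instead of the default 2048)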
Ixy is a very experimental and lightweight user-space network driver. At the moment it supports the Intel 82599ES family (aka Intel X520) and virtio. Please read the Ixy documentation.
Passed performance tests transmitting to 10K connections
transmitter = "Modules/Dixie.so:[reference interface] <PCI address> <MAC address> [<queue-count> [<buffer-length> <batch-length>]]";
transmitter = "Modules/Dixie.so:eth0 0000:af:00.0 00:1b:21:a5:a0:2c 1 128 32";'
Ixy doesn't provide a method to resolve the MAC address directly from the NIC, so the address has to be defined in the configuration. We found that Ixy has a bug with a queue count greater than 1 on ixgbe, so we do not recommend using multiple queues.
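For clarity, here is the same sample line again with each positional field annotated (the values are simply those from the example above, not tuned recommendations):
# eth0              - reference interface
# 0000:af:00.0      - PCI address of the NIC that Ixy drives
# 00:1b:21:a5:a0:2c - MAC address of that NIC (must be given explicitly, see the note above)
# 1                 - queue count (keep at 1 because of the ixgbe multi-queue bug)
# 128 32            - buffer length and batch length
transmitter = "Modules/Dixie.so:eth0 0000:af:00.0 00:1b:21:a5:a0:2c 1 128 32";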
On the reception side, UDPTX's driver works in parallel with the socket receiver. All it does is accelerate reception of UDP packets on a particular interface.
This is a modern method (eBPF + AF_XDP) to accelerate UDP reception in BrandMeister Core. It can save up to 30% of CPU time.
receiver = "Modules/ExpressFilter.o:<interface name>";
receiver = "Modules/ExpressFilter.o:eth0";
This method is exactly the same as eBPF + AF_XDP but uses a small additional daemon, XDPHelper, to load and share the eBPF program between several BrandMeister Core instances. XDPHelper is supplied with BrandMeister Core and starts automatically only when required (thanks to systemd and D-Bus activation). By default XDPHelper uses the eBPF program ExpressFilter.o (see xdphelper.service).
receiver = "xdp:<interface name>";
receiver = "xdp:eth0";