An encounter with hardware offload performance for the GENEVE protocol

Objective:


As OpenStack NFV has gradually adopted ml2-OVN support for the FDP network, it is necessary to evaluate network performance with the Geneve protocol from the perspective of telco applications.

Hence we came up with a solution in which TRex (a software traffic generator) emulates Geneve-encapsulated overlay packets using an outer VLAN header and acts as a tunnel endpoint on the same broadcast domain where the OpenStack compute tenant-VLAN traffic is accessible. Geneve-encapsulated packets reach the TEP interface of the compute node (DUT) directly, without being redirected through an intermediate TEP gateway host. This removes the dependency on managing a software gateway host, avoids additional packet-processing time, and delivers packets to the DUT faster, in line with the NFV benchmark.


The solution can also apply to OpenShift/FDP without layered products if the underlying configuration matches what is explained below (so far it has not been tested in an OpenShift environment).


Contributors:

Name              | Organisation | Department        | Email
------------------|--------------|-------------------|----------------------------
Pradipta Sahoo    | Red Hat      | Performance&Scale | pradiptapks@gmail.com
Haresh Khandelwal | Red Hat      | Telco NFV         | hareshkhandelwal@gmail.com


Network Topology:

Software Details:

  • OpenStack Cluster:

    • OSP: Red Hat OpenStack Platform release 16.2.0 (Train)

    • OS: Red Hat Enterprise Linux release 8.4

      • Kernel: 4.18.0-305.el8.x86_64

        • # cat /proc/cmdline

BOOT_IMAGE=(hd0,msdos2)/boot/vmlinuz-4.18.0-305.el8.x86_64 root=UUID=3092f72c-9609-48a6-9452-91212b9f3d44 ro console=ttyS0 console=ttyS0,115200n81 no_timer_check crashkernel=auto rhgb quiet default_hugepagesz=1GB hugepagesz=1G hugepages=80 iommu=pt intel_iommu=on isolcpus=6-35 mitigations=off skew_tick=1 nohz=on nohz_full=6-35 rcu_nocbs=6-35 tuned.non_isolcpus=0000003f intel_pstate=disable nosoftlockup

  • Tuned:

    • Current active profile: cpu-partitioning

  • Openvswitch:

    • openvswitch2.15-xxx.el8fdp.x86_64

  • OVN:

    • ovn-2021-host-xxx.el8fdp.x86_64

    • ovn-2021-xxx.el8fdp.x86_64

  • Mellanox Cx5 Firmware Version:

    • 16.29.1016 (MT_0000000013)

  • Trex Server

    • Trex: v2.89

Trex Server Configuration:

  • The Trex server ports act as tunnel endpoints that transmit and receive encapsulated packets over the L2 network.

  • Configuration Steps:

    1. Create a Trex server config (trex_cfg.yaml) on the same VLAN network (e.g., 172.17.2.0/24). Ensure the Trex tunnel IPs are not part of the OpenStack network allocation.

./dpdk_setup_ports.py -c 06:00.0 08:00.0 83:00.0 85:00.0 --cores-include 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55  --ips 172.17.2.160 172.17.2.161 172.17.2.162 172.17.2.163 --def-gws 172.17.2.60 172.17.2.60 172.17.2.60 172.17.2.60 -o /tmp/trex_cfg.yaml

# cat /tmp/trex_cfg.yaml

### Config file generated by dpdk_setup_ports.py ###

- version: 2

  c: 21

  interfaces: ['06:00.0', '08:00.0', '83:00.0', '85:00.0']

  port_bandwidth_gb: 25

  port_info:

   - ip: 172.17.2.160

     default_gw: 172.17.2.60

   - ip: 172.17.2.161

     default_gw: 172.17.2.60


   - ip: 172.17.2.162

     default_gw: 172.17.2.60

   - ip: 172.17.2.163

     default_gw: 172.17.2.60

  platform:

   master_thread_id: 55

   latency_thread_id: 54

   dual_if:

     - socket: 0

       threads: [6,8,10,12,14,16,18,20,22,24,26,34,36,38,40,42,44,46,48,50,52]

     - socket: 1

       threads: [7,9,11,13,15,17,19,21,23,25,27,35,37,39,41,43,45,47,49,51,53]


  2. Start the Trex server:

./t-rex-64 -i --checksum-offload --cfg /tmp/trex_cfg.yaml -v 7 


  3. Using the Trex service console, ping the DUT tenant network IP to verify reachability on the Layer 2 broadcast domain.


trex(service)>ping -d 172.17.2.60 -p 0

Pinging 172.17.2.60 from port 0 with 64 bytes of data:   

Reply from 172.17.2.60: bytes=64, time=0.73ms, TTL=64

Reply from 172.17.2.60: bytes=64, time=0.83ms, TTL=64


trex(service)>ping -d 172.17.2.60 -p 1

Pinging 172.17.2.60 from port 1 with 64 bytes of data:   

Reply from 172.17.2.60: bytes=64, time=0.18ms, TTL=64

Reply from 172.17.2.60: bytes=64, time=1.02ms, TTL=64


trex(service)>ping -d 172.17.2.60 -p 2

Pinging 172.17.2.60 from port 2 with 64 bytes of data:   

Reply from 172.17.2.60: bytes=64, time=0.11ms, TTL=64

Reply from 172.17.2.60: bytes=64, time=0.81ms, TTL=64


trex(service)>ping -d 172.17.2.60 -p 3

Pinging 172.17.2.60 from port 3 with 64 bytes of data:   

Reply from 172.17.2.60: bytes=64, time=0.26ms, TTL=64

Reply from 172.17.2.60: bytes=64, time=0.36ms, TTL=64



Note: If the Trex server needs to establish tunnel-endpoint communication with multiple compute nodes, the respective tenant network IPs of each DUT must be reachable via the Trex server port gateway.

Operator level changes in OVN-Controller:


  • Trex server port registration in the Southbound database:
  • Although traffic generators are usually deployed outside the tenant cluster, per the SDN controller design the controller must be aware of each tunnel endpoint in order to recognise it and compose the logical flow pipeline.


  • Hence we registered each Trex server IP individually in the OVN Southbound DB as a chassis with the GENEVE protocol. The ovn-controller then has the intelligence to recognize encapsulated packets from the Trex tunnel endpoints.

            
  • OVN Commands to register the Trex port in Southbound DB:

# ovn-sbctl chassis-add trex-0000:06:00.0 geneve 172.17.2.160

# ovn-sbctl chassis-add trex-0000:08:00.0 geneve 172.17.2.161

# ovn-sbctl chassis-add trex-0000:83:00.0 geneve 172.17.2.162

# ovn-sbctl chassis-add trex-0000:85:00.0 geneve 172.17.2.163


  • Southbound DB Details:

# ovn-sbctl show

Chassis "trex-0000:83:00.0"

Encap geneve

     ip: "172.17.2.162"

     options: {csum="true"}

Chassis "trex-0000:06:00.0"

Encap geneve

     ip: "172.17.2.160"

     options: {csum="true"}

Chassis "trex-0000:08:00.0"

Encap geneve

     ip: "172.17.2.161"

     options: {csum="true"}

Chassis "trex-0000:85:00.0"

Encap geneve

     ip: "172.17.2.163"

     options: {csum="true"}


  • Tenant network interface mapping with the OVN Trex chassis ports
  • To create an OVN logical flow pipeline, each chassis in the OVN Southbound DB must be mapped to a logical port of the tenant network, so that ovn-controller can compose flows with the appropriate metadata, including VNI IDs and data fields.

  • In the diagram below there are four OpenStack tenant networks (GENEVE), where the associated lines represent the neutron ports mapped to each Trex chassis port with the unique "tunnel_key" of the respective tenant network.

  • The neutron port mapping must be defined manually as per the use-case requirements. Apart from carrying the tunnel ID of the VM's neutron port, the associated Trex chassis port acts as a patch port in the OVS layer to further process packets from the geneve_sys_xx interface.


  • In the above example, the idea is to distribute packets from Trex to multiple VFs of the TestPMD VMs through different tunnel endpoints, using separate tenant networks.

  • In this way we can predict how packets are distributed in scaled TestPMD VM scenarios, although some automation is needed to avoid managing complex flows by hand.

  • Logical ports of VMs are configured by ml2-ovn during VM provisioning; a manual step is required to reserve logical ports from the tenant network for each individual Trex endpoint.

  • Create a neutron port on each GENEVE tenant network that should be part of the Trex traffic generator test scenario.

openstack port create --network internal1 --no-security-group --disable-port-security --mac-address fa:16:3e:e4:0a:cf trex-0000:06:00.0-internal1-p1

openstack port create --network internal2 --no-security-group --disable-port-security --mac-address fa:16:3e:20:ab:56 trex-0000:08:00.0-internal2-p1

openstack port create --network internal3 --no-security-group --disable-port-security --mac-address fa:16:3e:a0:1f:28 trex-0000:83:00.0-internal3-p1

openstack port create --network internal4 --no-security-group --disable-port-security --mac-address fa:16:3e:7f:a1:49 trex-0000:85:00.0-internal4-p1

  • List the UUIDs of the created ports:

$ openstack port list|grep trex

| 42ac8456-ea83-4b24-8dca-20a4b463c4b8 | trex-0000:83:00.0-internal3-p1 | fa:16:3e:a0:1f:28 | ip_address='192.168.3.173', subnet_id='74a1fffe-7b13-4318-98d2-a4db33ca70ef' | DOWN   |                                                     

| 5e39bcdc-42bc-462b-be74-546efba8220f | trex-0000:06:00.0-internal1-p1 | fa:16:3e:e4:0a:cf | ip_address='192.168.1.61', subnet_id='56530fab-cd43-4562-96fc-b4107c020a44'  | DOWN   |                                                     

| a19c2f7e-829c-46ea-b56e-097fb33b3fec | trex-0000:08:00.0-internal2-p1 | fa:16:3e:20:ab:56 | ip_address='192.168.2.144', subnet_id='56c294c7-3c6d-4dba-b5fb-9f1da5434c77' | DOWN   |                                                     

| d4e1eab2-c943-430c-8526-8025a1e1218b | trex-0000:85:00.0-internal4-p1 | fa:16:3e:7f:a1:49 | ip_address='192.168.4.31', subnet_id='c2fe1610-f279-4784-9bb3-db50f6e39d1c'  | DOWN   |                                                      

  • Bind the UUID of each logical port to its individual Trex chassis to establish the traffic path:

# ovn-sbctl lsp-bind 5e39bcdc-42bc-462b-be74-546efba8220f trex-0000:06:00.0

# ovn-sbctl lsp-bind a19c2f7e-829c-46ea-b56e-097fb33b3fec trex-0000:08:00.0

# ovn-sbctl lsp-bind 42ac8456-ea83-4b24-8dca-20a4b463c4b8 trex-0000:83:00.0

# ovn-sbctl lsp-bind d4e1eab2-c943-430c-8526-8025a1e1218b trex-0000:85:00.0
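The registration and binding steps above lend themselves to scripting. Below is a minimal sketch (the `ovn_register_cmds` helper is hypothetical, not part of any tooling used here) that generates the same `ovn-sbctl` commands from a list of Trex ports:

```python
def ovn_register_cmds(ports):
    """Generate ovn-sbctl commands that register Trex ports as chassis
    and bind their neutron logical ports, mirroring the manual steps.

    ports: list of (pci_addr, tunnel_ip, logical_port_uuid) tuples.
    """
    cmds = []
    for pci, ip, uuid in ports:
        chassis = f"trex-0000:{pci}"
        # One chassis per Trex port, using the GENEVE protocol
        cmds.append(f"ovn-sbctl chassis-add {chassis} geneve {ip}")
        # Bind the reserved tenant-network logical port to that chassis
        cmds.append(f"ovn-sbctl lsp-bind {uuid} {chassis}")
    return cmds
```

Feeding the generated commands to a shell (or `subprocess.run`) on the controller would reproduce the configuration shown above.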

  • OVN-SBDB view of Trex chassis with tenant network port bind:

# ovn-sbctl show

Chassis "trex-0000:06:00.0"

    Encap geneve

        ip: "172.17.2.160"

        options: {csum="true"}

    Port_Binding "5e39bcdc-42bc-462b-be74-546efba8220f"

Chassis "trex-0000:08:00.0"

    Encap geneve

        ip: "172.17.2.161"

        options: {csum="true"}

    Port_Binding "a19c2f7e-829c-46ea-b56e-097fb33b3fec"

Chassis "trex-0000:83:00.0"

    Encap geneve

        ip: "172.17.2.162"

        options: {csum="true"}

    Port_Binding "42ac8456-ea83-4b24-8dca-20a4b463c4b8"

Chassis "trex-0000:85:00.0"

    Encap geneve

        ip: "172.17.2.163"

        options: {csum="true"}

    Port_Binding "d4e1eab2-c943-430c-8526-8025a1e1218b"


  • After the binding, the ports are mapped to the OVS integration bridge (br-int) as OVN patch ports.

# ovs-ofctl show br-int |grep -A3 ovn-perf12 

 62(ovn-perf12-0): addr:06:ae:7b:e1:1a:8f

     config:     0

     state:      0

     speed: 0 Mbps now, 0 Mbps max

 63(ovn-perf12-1): addr:8e:b8:9b:46:48:09

     config:     0

     state:      0

     speed: 0 Mbps now, 0 Mbps max

 64(ovn-perf12-2): addr:7a:3b:0d:10:e9:79

     config:     0

     state:      0

     speed: 0 Mbps now, 0 Mbps max

 65(ovn-perf12-3): addr:26:db:3d:3c:a9:29

     config:     0

     state:      0

     speed: 0 Mbps now, 0 Mbps max

  • Tunnel ID and key information of datapath and port in the OVN Southbound DB
  • As previously mentioned, the OVN metadata carries the VNI ID of the tenant network, and the data fields carry the source and destination port information of each encapsulated packet.

  • So we need to collect the tunnel ID and key information of the tenant networks and of the ports that have been bound to the Southbound chassis.

  • Tunnel key of each tenant network that is part of the performance test:

# ovn-sbctl --bare --columns tunnel_key list Datapath_binding internal1

3

# ovn-sbctl --bare --columns tunnel_key list Datapath_binding internal2

4

# ovn-sbctl --bare --columns tunnel_key list Datapath_binding internal3

5

# ovn-sbctl --bare --columns tunnel_key list Datapath_binding internal4

6

  • Map the tunnel_key of each logical port to its respective Trex chassis to establish the traffic path between the Trex endpoint and the VF of the VMs:

# ovn-sbctl --bare --columns tunnel_key list port_binding 5e39bcdc-42bc-462b-be74-546efba8220f

3

# ovn-sbctl --bare --columns tunnel_key list port_binding a19c2f7e-829c-46ea-b56e-097fb33b3fec

3

# ovn-sbctl --bare --columns tunnel_key list port_binding 42ac8456-ea83-4b24-8dca-20a4b463c4b8

3

# ovn-sbctl --bare --columns tunnel_key list port_binding d4e1eab2-c943-430c-8526-8025a1e1218b

3
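These tunnel keys feed directly into the `options_data` field of the GENEVE header used in the traffic profiles later. A small sketch of how the 4-byte value can be packed, assuming the OVN option layout documented in ovn-architecture(7) (high 16 bits: logical ingress port key; low 16 bits: logical egress port key):

```python
import struct

def ovn_option_data(ingress_key: int, egress_key: int) -> bytes:
    """Pack the 32-bit OVN Geneve option value (class 0x0102, type 0x80):
    logical ingress port key in the high 16 bits, egress key in the low 16."""
    return struct.pack("!HH", ingress_key, egress_key)

# Trex port (tunnel_key 3) sending to the VM port (tunnel_key 2)
ovn_option_data(3, 2)  # b"\x00\x03\x00\x02"
```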

  • Tunnel key of the datapath ports that belong to the compute chassis, including VM1 and VM2:

$ openstack server list

+--------------------------------------+----------+--------+-----------------------------------------------------------------------+---------+--------+

| ID                                   | Name     | Status | Networks                                                              | Image   | Flavor |

+--------------------------------------+----------+--------+-----------------------------------------------------------------------+---------+--------+

| 206921b5-bc26-434a-b069-23fa405402cb | TestPMD2 | ACTIVE | external=10.16.31.213; internal3=192.168.3.16; internal4=192.168.4.16 | rhel8.4 |        |

| 33c6652f-3c99-44a5-ba43-8af95be56a86 | TestPMD1 | ACTIVE | external=10.16.31.232; internal1=192.168.1.16; internal2=192.168.2.16 | rhel8.4 |        |

+--------------------------------------+----------+--------+-----------------------------------------------------------------------+---------+--------+

$ nova interface-list TestPMD1

+------------+--------------------------------------+--------------------------------------+--------------+-------------------+-----+

| Port State | Port ID                              | Net ID                               | IP addresses | MAC Addr          | Tag |

+------------+--------------------------------------+--------------------------------------+--------------+-------------------+-----+

| ACTIVE     | 16b47f74-04cb-4a9c-8be2-2dbff5734fb2 | d537902b-c4c7-47f0-935b-e5f1a57fcf52 | 10.16.31.232 | fa:16:3e:e8:33:ed | -   |

| ACTIVE     | 4bdd2ce1-0823-4ca9-b0e5-298dd039f853 | a6fe8427-362b-4e79-bb00-dd05efa0f839 | 192.168.1.16 | fa:16:3e:c6:b0:38 | -   |

| ACTIVE     | 69a7c53c-9107-4a7f-80f5-934481e7bd17 | 0e5551c1-5f27-43fe-82a0-b24b0f70bae8 | 192.168.2.16 | fa:16:3e:f2:9c:b9 | -   |

+------------+--------------------------------------+--------------------------------------+--------------+-------------------+-----+

$ nova interface-list TestPMD2

+------------+--------------------------------------+--------------------------------------+--------------+-------------------+-----+

| Port State | Port ID                              | Net ID                               | IP addresses | MAC Addr          | Tag |

+------------+--------------------------------------+--------------------------------------+--------------+-------------------+-----+

| ACTIVE     | 10c1588b-fb9b-49a5-8391-8414eb0a6c2f | d537902b-c4c7-47f0-935b-e5f1a57fcf52 | 10.16.31.213 | fa:16:3e:63:a6:cd | -   |

| ACTIVE     | b98b8810-7670-4464-9490-ce6affdfdd00 | 6e7b12f9-6328-4b25-aba1-92300f6479cc | 192.168.4.16 | fa:16:3e:53:9d:9f | -   |

| ACTIVE     | d976fba4-1a17-4bf6-83a4-f360f24b4f00 | 9c0b51e2-004c-4bf4-a2c6-636d7518ec21 | 192.168.3.16 | fa:16:3e:d5:9e:c1 | -   |

+------------+--------------------------------------+--------------------------------------+--------------+-------------------+-----+


  • OVN port-binding information of the OpenStack VMs:

# ovn-sbctl show

Chassis "d8f310a7-3a30-483d-96d8-84c2ba87f0e1"

    hostname: nfv-controller-0.localdomain

    Encap geneve

        ip: "172.17.2.54"

        options: {csum="true"}

Chassis "effc6016-b815-45f9-bbf9-b5bc526ffe1d"

    hostname: nfv-compute-0.localdomain

    Encap geneve

        ip: "172.17.2.60"

        options: {csum="true"}

    Port_Binding "10c1588b-fb9b-49a5-8391-8414eb0a6c2f"

    Port_Binding "4bdd2ce1-0823-4ca9-b0e5-298dd039f853"

    Port_Binding "16b47f74-04cb-4a9c-8be2-2dbff5734fb2"

    Port_Binding "d976fba4-1a17-4bf6-83a4-f360f24b4f00"

    Port_Binding "b98b8810-7670-4464-9490-ce6affdfdd00"

    Port_Binding "69a7c53c-9107-4a7f-80f5-934481e7bd17"


  1. VM1 Ports:

# ovn-sbctl --bare --columns tunnel_key list port_binding 4bdd2ce1-0823-4ca9-b0e5-298dd039f853

2

# ovn-sbctl --bare --columns tunnel_key list port_binding 69a7c53c-9107-4a7f-80f5-934481e7bd17

2

  2. VM2 Ports:

# ovn-sbctl --bare --columns tunnel_key list port_binding d976fba4-1a17-4bf6-83a4-f360f24b4f00

2

# ovn-sbctl --bare --columns tunnel_key list port_binding b98b8810-7670-4464-9490-ce6affdfdd00

2

  • The above information is important: the Trex traffic profile requires the tunnel_key when building the packet with the scapy packet builder.

  • To view the encapsulated GENEVE packets, refer to the tcpdump examples below of ingress and egress packets, which carry the tunnel key of the tenant network (VNI IDs) and the datapath fields with source and destination ports.

    1. Ingress Geneve Encap Packet on DUT: 

14:52:42.056228 40:a6:b7:0b:e9:b0 > 0c:42:a1:d1:da:98, ethertype 802.1Q (0x8100), length 96: vlan 304, p 0, ethertype IPv4, (tos 0x0, ttl 64, id 1, offset 0, flags [none], proto UDP (17), length 78)

172.17.2.160.57025 > 172.17.2.61.geneve: [udp sum ok] Geneve, Flags [none], vni 0x3, proto TEB (0x6558), options [class Open Virtual Networking (OVN) (0x102) type 0x80(C) len 8 data 00030002]

     fa:16:3e:eb:75:7f > fa:16:3e:36:6c:13, ethertype IPv4 (0x0800), length 34: IP7


  2. Egress Geneve Encap Packet on DUT: 

14:52:42.056252 0c:42:a1:d1:da:98 > 40:a6:b7:0b:e9:b1, ethertype 802.1Q (0x8100), length 122: vlan 304, p 0, ethertype IPv4, (tos 0x0, ttl 64, id 34993, offset 0, flags [DF], proto UDP (17), length 104)

172.17.2.60.54989 > 172.17.2.161.geneve: [udp sum ok] Geneve, Flags [C], vni 0x4, proto TEB (0x6558), options [class Open Virtual Networking (OVN) (0x102) type 0x80(C) len 8 data 00020003]

     fa:16:3e:e4:1d:b1 > fa:16:3e:0c:af:0e, ethertype IPv4 (0x0800), length 60: IP7
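The `data` field in the captures above can be decoded back into tunnel keys. A sketch, again assuming the ovn-architecture(7) option layout (1 reserved bit, 15-bit logical ingress port key, 16-bit logical egress port key):

```python
def decode_ovn_option(data_hex: str):
    """Split the 32-bit OVN Geneve option value into
    (logical ingress port key, logical egress port key)."""
    value = int(data_hex, 16)
    return (value >> 16) & 0x7FFF, value & 0xFFFF

decode_ovn_option("00030002")  # ingress: Trex port (key 3), egress: VM port (key 2)
decode_ovn_option("00020003")  # the reverse direction, seen in the egress capture
```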


  • Trex Stateless Traffic Profile Configuration:
  • The STL framework supports building arbitrary packets with the scapy packet builder. We noticed that the latest version of Trex ships an external Geneve library for scapy, but the existing GENEVE module did not format the GENEVE header as OVN requires. We customized the GENEVE class to support GENEVE metadata compatible with OVN, and later submitted the patch to upstream scapy.

from scapy.contrib.geneve import GENEVE
# (external_libs/scapy-2.4.3/scapy/contrib/geneve.py)

packet = STLPktBuilder(
    pkt=Ether(dst='3c:fd:fe:ee:4a:2c', src='ec:0d:9a:44:2c:f5') /
        Dot1Q(vlan=177, type='IPv4') /
        IP(proto=17, src='172.17.2.160', dst='172.17.2.60') /
        UDP(sport=57025, dport=6081) /
        GENEVE(optionlen=2, proto=0x6558, vni=3, options_class=0x0102,
               options_type=0x80, options_len=1,
               options_data=b"\x00\x03\x00\x02") /
        Ether(src='fa:16:3e:e4:0a:cf', dst='fa:16:3e:20:ab:56', type=0x0800) /
        IP() / UDP(dport=1000, sport=2000)
)

  • As a functional validation, we created four traffic-profile scripts, each defined with a dedicated scapy header.

# grep "base_pkt =" stl/geneve_p*.py

stl/geneve_p0.py:    base_pkt =  Ether()/Dot1Q(vlan=177,type='IPv4')/IP(proto=17,src='172.17.2.160',dst='172.17.2.60')/UDP(sport=57025,dport=6081)/GENEVE(optionlen=2,proto=0x6558,vni=3,options_class=0x0102,options_type=0x80,options_len=1,options_data=b"\x00\x03\x00\x02")/Ether(src='fa:16:3e:e4:0a:cf',dst='fa:16:3e:20:ab:56',type=0x0800)/IP()/UDP(dport=1000,sport=2000)

stl/geneve_p1.py:    base_pkt =  Ether()/Dot1Q(vlan=177,type='IPv4')/IP(proto=17,src='172.17.2.161',dst='172.17.2.60')/UDP(sport=57025,dport=6081)/GENEVE(optionlen=2,proto=0x6558,vni=4,options_class=0x0102,options_type=0x80,options_len=1,options_data=b"\x00\x03\x00\x02")/Ether(src='fa:16:3e:20:ab:56',dst='fa:16:3e:e4:0a:cf',type=0x0800)/IP()/UDP(dport=1000,sport=2000)

stl/geneve_p2.py:    base_pkt =  Ether()/Dot1Q(vlan=177,type='IPv4')/IP(proto=17,src='172.17.2.162',dst='172.17.2.60')/UDP(sport=57025,dport=6081)/GENEVE(optionlen=2,proto=0x6558,vni=5,options_class=0x0102,options_type=0x80,options_len=1,options_data=b"\x00\x03\x00\x02")/Ether(src='fa:16:3e:a0:1f:28',dst='fa:16:3e:7f:a1:49',type=0x0800)/IP()/UDP(dport=1000,sport=2000)

stl/geneve_p3.py:    base_pkt =  Ether()/Dot1Q(vlan=177,type='IPv4')/IP(proto=17,src='172.17.2.163',dst='172.17.2.60')/UDP(sport=57025,dport=6081)/GENEVE(optionlen=2,proto=0x6558,vni=6,options_class=0x0102,options_type=0x80,options_len=1,options_data=b"\x00\x03\x00\x02")/Ether(src='fa:16:3e:7f:a1:49',dst='fa:16:3e:a0:1f:28',type=0x0800)/IP()/UDP(dport=1000,sport=2000)


  • For a sanity test, start the traffic from Trex on the respective ports:

start -f ./stl/geneve_p0.py -m 50% --force --port 0 -t fsize=128

start -f ./stl/geneve_p1.py -m 50% --force --port 1 -t fsize=128

start -f ./stl/geneve_p2.py -m 50% --force --port 2 -t fsize=128

start -f ./stl/geneve_p3.py -m 50% --force --port 3 -t fsize=128



  • End-to-End Packet Journey in the PVP Scenario:
  • The samples below were collected from the OVS kernel datapath to review the traffic journey from Trex to the DUT in the PVP scenario. Referring to the sample diagram, once the traffic profile is initiated with the appropriate scapy headers on the Trex server, the DUT processes the packets through the respective hops for decap and encap.

  • In the NFV baseline test, Trex must start the traffic with a minimum frame size of 128 bytes, because the Geneve-encapsulated packet already consumes 104 bytes of headers:

Outer Ethernet Header:   14 bytes

Outer VLAN  Header:       4 bytes

Outer IP  Header:        20 Bytes

UDP Header:               8 Bytes

Geneve  Header:          16 Bytes

Internal Ether Header:   14 Bytes

Internal IP Header:      20 Bytes

Payload UDP:              8 Bytes

               -----------------

               Total= 104 Bytes
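The overhead arithmetic can be double-checked in a couple of lines:

```python
# Per-layer header sizes (bytes) of the Geneve-over-VLAN frame described above
overhead = {
    "outer_ethernet": 14,
    "outer_vlan": 4,
    "outer_ip": 20,
    "outer_udp": 8,
    "geneve": 16,          # 8-byte base header + 8-byte OVN TLV option
    "inner_ethernet": 14,
    "inner_ip": 20,
    "inner_udp": 8,
}
total = sum(overhead.values())  # 104
```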

  • Performance Results: 

    • Mellanox ConnectX-5 Ex 100GbE throughput at 50% line rate:

Frame Size (Bytes) | Duration (min) | Traffic Mode | Traffic      | Total_Tx_L1 (Gbps) | Total_Rx (Gbps) | Total_Tx_Rates (Mpps) | Total_Rx_pps (Mpps) | Cpu_Util (%) | Drop_rate (Gbps) | Queue_full (pkts)
-------------------|----------------|--------------|--------------|--------------------|-----------------|-----------------------|---------------------|--------------|------------------|------------------
128                | 10             | PVP          | Bi-Direction | 50.02              | 41.91           | 42.25                 | 42.25               | 8.12         | 0                | 0
256                | 10             | PVP          | Bi-Direction | 50.05              | 50.05           | 22.67                 | 22.67               | 4.37         | 0                | 0
512                | 10             | PVP          | Bi-Direction | 50.11              | 47.85           | 11.77                 | 11.77               | 2.24         | 0                | 0
1024               | 10             | PVP          | Bi-Direction | 50                 | 48.85           | 5.99                  | 5.99                | 1.02         | 0                | 0
1500               | 10             | PVP          | Bi-Direction | 50.06              | 49.3            | 4.12                  | 4.12                | 4.67         | 0                | 0
9000               | 10             | PVP          | Bi-Direction | 50.01              | 49.88           | 693.07 Kpps           | 693.07 Kpps         | 0.17         | 0                | 0
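As a sanity check on the table, the L1 transmit rate follows from the packet rate and frame size: L1 accounting adds 20 bytes per frame (7-byte preamble, 1-byte start-of-frame delimiter, 12-byte inter-frame gap). A quick sketch:

```python
def l1_gbps(mpps: float, frame_bytes: int) -> float:
    """L1 throughput in Gbps for a given packet rate (Mpps) and frame size,
    including the 20-byte per-frame preamble/SFD/inter-frame-gap overhead."""
    return mpps * 1e6 * (frame_bytes + 20) * 8 / 1e9

# 42.25 Mpps at 128B frames reproduces the ~50 Gbps Total_Tx_L1 of the first row
round(l1_gbps(42.25, 128), 2)  # 50.02
```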

  • Conclusion:
    • Achieved line-rate utilisation without an intermediate gateway host (ovn-gw).
    • Using individual Trex traffic profiles, the test can be extended to scaled East-West and North-South scenarios.
    • No changes are required on the OpenStack compute node or in TestPMD; the approach can also be used with other infrastructures, e.g. OpenShift.
    • Very minimal operator-level changes in ovn-controller suffice for the tunnel requirement.
    • Next Goals:
      • Integration with a binary-search algorithm.
      • Revalidate with connection-tracking hardware offload.
