MultiPath TCP VM and NFV

Many consumer network devices such as smartphones have multiple network connections, such as WiFi and 4G LTE. RFC 6824 defines the MultiPath TCP (MPTCP) protocol, which enables two devices with multiple network transports to increase their TCP throughput by splitting a single TCP stream into multiple MPTCP subflows over the available transports. The destination MPTCP device then merges the subflows back into a single TCP stream and forwards it to the application, and vice versa. Besides improving TCP throughput, the use of multiple network transports also improves network resilience between the two devices. Although more than 80% of Internet traffic is TCP based, MPTCP has not been widely adopted because most servers on the Internet do not support it. This situation is changing as carriers back the draft RFC, Extensions for Network-Assisted MP-TCP Deployment Models. In a nutshell, the draft defines a Hybrid Access Gateway (HAG), or MPTCP concentrator, in the network that proxies MPTCP traffic on behalf of Internet servers.

The following shows an example of an MPTCP-enabled residential broadband deployment where a hybrid MPTCP-capable CPE uses both its wireline Ethernet and wireless LTE transports to send TCP traffic to a HAG residing in a Telco Cloud.

[Figure: MPTCP-enabled residential broadband deployment with a hybrid CPE and a HAG in the Telco Cloud]

The output of the HAG is regular TCP traffic that all Internet servers support. This HAG deployment is particularly popular in rural areas of Europe where it is not economical to deploy FTTx to replace aging DSL, yet the HAG solution can quickly satisfy the EU's regulatory requirement to offer high-speed (> 30Mbps) residential broadband services by 2020.

For testing Cloud-based MPTCP services (i.e., Network Function Virtualization, or NFV) such as a HAG with multiple network transports, it is useful to have a simple MPTCP VM residing alongside the NFV service to verify its configuration before attaching the LTE and xDSL networks to the HAG. Interestingly, when I searched for a ready-made MPTCP VM to download, I could not find one. I ended up assembling a CentOS-based MPTCP VM, which I discuss in this article.

In the above diagram, the Telco Cloud refers to the carrier’s private data center hosting x86 servers for NFV operations. The choice of running NFV on a private/Telco cloud versus a public cloud such as Azure or AWS can depend on many factors, such as:

  • Security

  • Flexibility to connect NFV services to IP and MPLS networks

  • Capex and Opex. For example, a public cloud VM with 8 virtual CPUs, 32GB RAM and Internet access costs about USD 400 a month to lease, while an equivalent server can be purchased outright for about USD 1,800

Many network vendors sell pre-configured Cloud Appliances for running NFV in a private/Telco cloud. A typical Cloud Appliance comprises multiple interconnected high-end x86 servers, each with multiple Intel Xeon CPUs, over 100GB of RAM, and high-speed 10Gbps or 40Gbps network ports supporting SR-IOV. The Cloud Appliances are also designed so that they can easily be integrated with Cloud orchestration software such as OpenStack, allowing a carrier to deploy and scale them for different traffic loads.

This article describes the assembly of an MPTCP VM and its integration with an MPTCP NFV service for testing. Since we are only concerned with the functional aspects of MPTCP rather than performance, the platform for this MPTCP setup can be any Linux KVM PC with 8GB or more RAM. It can also be a KVM VM on a public cloud. In this article, we use a Linux VM running on Microsoft’s Azure Cloud as the platform to assemble the MPTCP VM as well as to integrate and verify the operation of the MPTCP NFV services. Since Azure still offers a free trial account, you can follow this article to try out MPTCP NFV testing and KVM integration on Azure for free.

MPTCP VM Assembly on Azure Cloud

One of the advantages of running NFV on Azure instead of other public Cloud Service Providers (CSPs) is that Azure supports nested virtualization. As mentioned in my previous article, Extending Private Data Center to Azure Cloud, many network vendors run their NFV services, such as the HAG and the 3GPP SGW and PGW for LTE, on Linux KVM. To run these KVM-based NFV services on a CSP, the CSP, such as Azure, must support nested virtualization. If you have a Linux KVM PC with 8GB or more RAM (e.g., a Linux KVM VM hosted on VMware Workstation on Windows), you can try out this MPTCP NFV setup and integration without involving Azure. If not, you can follow the link below to create a Linux KVM VM on Azure as the host for assembling, running and testing the MPTCP VMs and services:

https://www.brianlinkletter.com/create-a-nested-virtual-machine-in-a-microsoft-azure-linux-vm/

The Azure compute platform used in this article is Azure’s Standard D8s v3 (8 vcpus, 32 GB memory). You can choose a smaller Azure compute platform, as 8GB of memory should be enough. The following shows the high-level block diagram of the VMs and bridges that we are going to create on the KVM Hypervisor for MPTCP NFV testing and integration.

[Figure: high-level block diagram of the MPTCP VMs, OVS bridges and TAP interfaces on the KVM Hypervisor]

Data Port 1 is the Primary MPTCP Path, which the MPTCP NFV service (represented here by mptcp-vm2) uses to negotiate MPTCP capabilities with its peers. MPTCP ADD_ADDR messages are also sent over the Primary MPTCP Path to advertise to peering devices the network links that support MPTCP operation. Data Port 2 is the Secondary MPTCP Path; it carries MPTCP subflows to increase the aggregated TCP throughput together with the Primary MPTCP Path, and it also offers network redundancy when the Primary MPTCP Path fails. MPTCP traffic comes in on the two Data Ports, while traffic going out of the Internet Gateway is regular TCP traffic, and vice versa.

Instead of connecting an MPTCP traffic generator to the two Data Ports with physical Ethernet cables, we simply create mptcp-vm1 and connect its two Ethernet interfaces to mptcp-vm2 through two Open vSwitch (OVS) bridges, ovs-dp1 and ovs-dp2 respectively. The result we want to demonstrate is that when iperf3 runs on mptcp-vm1 and sends TCP traffic to the Internet Gateway (192.0.2.2) on mptcp-vm2, MPTCP signaling and subflow packets are exchanged over the two network links between the two VMs to increase the overall TCP throughput of the iperf3 session.

The following outlines the steps to assemble and integrate the MPTCP VMs for MPTCP NFV testing:

  1. Create two VMs under the KVM Hypervisor and add additional network interfaces to the VMs for MPTCP operations
  2. Create OVS bridges and TAP interfaces on the KVM Hypervisor and bind the TAP interfaces to the VMs’ Ethernet Interfaces
  3. Update the two VMs with the latest MPTCP Linux kernel
  4. Verify MPTCP operations between the two VMs with iperf3

Since we only focus on MPTCP NFV functional testing, OVS bridges are used to connect mptcp-vm2 (representing the MPTCP NFV service) to the two Data Ports. In a production setup using high-speed (10Gbps or 40Gbps) network ports, the deployment would involve multiple interconnected x86 servers with load-balancer VMs. High-speed Ethernet cards supporting SR-IOV with PCI pass-through would also be used instead of the userspace OVS bridges for performance.

 Step 1 – Create VMs

There are many ways to create VMs on a KVM Hypervisor. In my previous article, Docker Container and VM Networking, I used Giovanni’s virt-install-centos script, which has since been updated to kvm-install-vm and supports other Linux distributions beyond CentOS. I made a few attempts to modify the kvm-install-vm script to generate a VM with multiple network interfaces and static IPs, without success. Therefore, I used the kvm-install-vm script to create VMs with only one management network interface and then manually added additional network interfaces with static IPs to the VMs.

On the KVM Hypervisor, create two directories ~/virt and ~/virt/images and download the latest CentOS cloud-ready QCOW2 image as follows:

[root@azure-kvm derekc]$ cd ~ 
[root@azure-kvm derekc]$ mkdir virt
[root@azure-kvm derekc]$ cd virt
[root@azure-kvm virt]$ mkdir images
[root@azure-kvm images]$ cd images
[root@azure-kvm images]$ wget https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
--2019-03-21 23:39:03-- https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
Resolving cloud.centos.org (cloud.centos.org)... 38.110.33.4, 2604:4500::3482
Connecting to cloud.centos.org (cloud.centos.org)|38.110.33.4|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 938409984 (895M)
Saving to: ‘CentOS-7-x86_64-GenericCloud.qcow2’
100%[========================================================================>] 938,409,984  95.7MB/s  in 9.5s
2019-03-21 23:39:13 (94.6 MB/s) - ‘CentOS-7-x86_64-GenericCloud.qcow2’ saved [938409984/938409984]

derekc is my Azure logon ID. sudo to root, as we need root privileges to create the TAP network interfaces later. Again, you don’t have to use Azure to run the KVM Hypervisor; it can simply be any Linux KVM PC with 8GB RAM.

Install git on the KVM Hypervisor to clone the kvm-install-vm script as follows:

[root@azure-kvm ~]# yum install git
[root@azure-kvm ~]# git clone https://github.com/giovtorres/kvm-install-vm

Edit the kvm-install-vm script to include password authentication, as we need console access to change the VM boot-up settings later. Create a test private/public key pair via ssh-keygen (a sketch follows the snippet below) and add the test public key to the kvm-install-vm script. Also, copy the newly created private key to ~/.ssh/id_rsa for public-key VM authentication.

# Configure where output will go
output:
  all: ">> /var/log/cloud-init.log"

# Derek - set password for the default user, centos, for ssh logon
password: abc123
chpasswd: { expire: False }
ssh_pwauth: True

# configure interaction with ssh server
ssh_genkeytypes: ['ed25519', 'rsa']

# Install my public ssh key to the first user-defined user configured
# in cloud.cfg in the template (which is centos for CentOS cloud images)
ssh_authorized_keys:
  # Derek - add my public key for ssh logon
  # - ${KEY}
  - ssh-rsa AAAAB3NzaC1yc2EAAAADAQ....cL4uXvT derekc@ottawa

timezone: ${TIMEZONE}
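
The test key pair mentioned above can be generated on the KVM Hypervisor with ssh-keygen; this is a minimal sketch, and the file name mptcp-test-key is just an illustrative choice:

# Generate a test RSA key pair with no passphrase (the file name is arbitrary)
ssh-keygen -t rsa -b 2048 -N "" -f ~/.ssh/mptcp-test-key

# Paste the contents of the public key into the ssh_authorized_keys section above
cat ~/.ssh/mptcp-test-key.pub

# Copy the private key to the default location used for VM logon
cp ~/.ssh/mptcp-test-key ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa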

The script is now ready to generate the two MPTCP VMs. Note that although we call them MPTCP VMs, they are not actually ready for MPTCP operation until their kernels are replaced in Step 3.

[root@azure-kvm kvm-install-vm]# ./kvm-install-vm create mptcp-vm1 
- Copying cloud image (CentOS-7-x86_64-GenericCloud.qcow2) ... OK
- Generating ISO for cloud-init ... OK
- Creating storage pool ... OK
- Installing the domain ... OK
- Cleaning up cloud-init files ... OK
- Waiting for domain to get an IP address ... OK
- Checking for 192.168.122.104 in known_hosts file
  grep: /home/derekc/.ssh/known_hosts: No such file or directory
- No entries found for 192.168.122.14
- SSH to mptcp-vm1: 'ssh centos@192.168.122.14' or 'ssh centos@mptcp-vm1'
- Console at spice://localhost:5900
- DONE

 Run the script again to create mptcp-vm2.

[root@azure-kvm kvm-install-vm]# ./kvm-install-vm create mptcp-vm2 
- Copying cloud image (CentOS-7-x86_64-GenericCloud.qcow2) ... OK
- Generating ISO for cloud-init ... OK
- Creating storage pool ... OK
- Installing the domain ... OK
- Cleaning up cloud-init files ... OK
- Waiting for domain to get an IP address ... OK
- Checking for 192.168.122.252 in known_hosts file
- No entries found for 192.168.122.164
- SSH to mptcp-vm2: 'ssh centos@192.168.122.164' or 'ssh centos@mptcp-vm2'
- Console at spice://localhost:5901
- DONE

 Verify that the two VMs are running on the KVM Hypervisor:

[root@azure-kvm kvm-install-vm]# virsh list --all
 Id    Name         State
----------------------------------------------------
 9     mptcp-vm1    running
 10    mptcp-vm2    running

The IP addresses 192.168.122.14 (tap0) and 192.168.122.164 (tap1) are the management IPs of mptcp-vm1 and mptcp-vm2 respectively. They are connected to the virbr0 Ethernet bridge along with its NIC, virbr0-nic:

[derekc@azure-kvm ~]$ brctl show virbr0
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.525400d28aeb       yes             tap0
                                                        tap1
                                                        virbr0-nic

Log on to mptcp-vm1 to add two network interfaces, eth1 and eth2. In the directory /etc/sysconfig/network-scripts, create two files, ifcfg-eth1 and ifcfg-eth2, as follows:

[root@mptcp-vm1 ~]# cd /etc/sysconfig/network-scripts/
[root@mptcp-vm1 network-scripts]# cat ifcfg-eth1
BOOTPROTO=static
DEVICE=eth1
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
NM_CONTROLLED=no
HWADDR=52:54:00:ea:01:01
IPADDR=10.10.10.1
PREFIX=29


[root@mptcp-vm1 network-scripts]# cat ifcfg-eth2
BOOTPROTO=static
DEVICE=eth2
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
NM_CONTROLLED=no
HWADDR=52:54:00:ea:01:02
IPADDR=10.10.10.9
PREFIX=29

Repeat the procedure for mptcp-vm2 and add three Ethernet interfaces, eth1, eth2 and eth3.

[root@mptcp-vm2 ~]# cd /etc/sysconfig/network-scripts/
[root@mptcp-vm2 network-scripts]# cat ifcfg-eth1
BOOTPROTO=static
DEVICE=eth1
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
NM_CONTROLLED=no
HWADDR=52:54:00:ea:02:01
IPADDR=10.10.10.2
PREFIX=29

[root@mptcp-vm2 network-scripts]# cat ifcfg-eth2
BOOTPROTO=none
DEVICE=eth2
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
NM_CONTROLLED=no
HWADDR=52:54:00:ea:02:02
IPADDR=10.10.10.10
PREFIX=29

[root@mptcp-vm2 network-scripts]# cat ifcfg-eth3
BOOTPROTO=none
DEVICE=eth3
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
NM_CONTROLLED=no
HWADDR=52:54:00:ea:02:03
IPADDR=192.0.2.2
PREFIX=24
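
Note that eth1, eth2 and eth3 will only appear inside the VMs once the matching TAP interfaces are bound to the VM definitions in Step 2 and the VMs are restarted. After that, a quick sanity check from inside each VM could look like this (a sketch):

# Apply the new ifcfg files and list the resulting interfaces and addresses
systemctl restart network
ip addr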

Step 2 – TAP interface and VMs’ Ethernet Interface Binding

We have just added additional Ethernet interfaces to the two VMs; let us now set up two OVS bridges and the TAP interfaces for binding to the VMs’ Ethernet interfaces.

[root@azure-kvm ~]# ovs-vsctl add-br ovs-dp1
[root@azure-kvm ~]# ovs-vsctl add-br ovs-dp2
[root@azure-kvm ~]# ifconfig ovs-dp1 up
[root@azure-kvm ~]# ifconfig ovs-dp2 up
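
Before adding any ports, you may want to confirm that both bridges exist; ovs-vsctl provides standard commands for this:

# List the OVS bridges created so far
ovs-vsctl list-br
# Show the full OVS configuration, including bridges and their ports
ovs-vsctl show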

On the KVM Hypervisor, create five TAP interface files, ifcfg-tap2 to ifcfg-tap6, in the directory /etc/sysconfig/network-scripts as follows:

[root@azure-kvm network-scripts]# cat ifcfg-tap2
DEVICE=tap2
TYPE=Tap
ONBOOT=yes
NOZEROCONF=yes
BOOTPROTO=none
NM_CONTROLLED=no
MTU=9000
IPV6INIT=no
IPV6_AUTOCONF=no
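
Since the five files differ only in the device name, a small shell loop on the KVM Hypervisor can generate them all from the template above (a sketch, assuming every TAP interface uses the same options as ifcfg-tap2):

cd /etc/sysconfig/network-scripts
for i in 2 3 4 5 6; do
  cat > ifcfg-tap$i <<EOF
DEVICE=tap$i
TYPE=Tap
ONBOOT=yes
NOZEROCONF=yes
BOOTPROTO=none
NM_CONTROLLED=no
MTU=9000
IPV6INIT=no
IPV6_AUTOCONF=no
EOF
done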
 

There should now be five TAP interface config files on the KVM Hypervisor.

[derekc@azure-kvm ~]$ ls /etc/sysconfig/network-scripts/
ifcfg-tap2 ifcfg-tap3 ifcfg-tap4 ifcfg-tap5 ifcfg-tap6

 

Start up the TAP Interfaces and assign them to the two OVS bridges:

[root@azure-kvm ~]# cd /etc/sysconfig/network-scripts/
[root@azure-kvm network-scripts]# ifup ifcfg-tap2
[root@azure-kvm network-scripts]# ifup ifcfg-tap3
[root@azure-kvm network-scripts]# ifup ifcfg-tap4
[root@azure-kvm network-scripts]# ifup ifcfg-tap5
[root@azure-kvm network-scripts]# ifup ifcfg-tap6

[root@azure-kvm ~]# ovs-vsctl add-port ovs-dp1 tap3
[root@azure-kvm ~]# ovs-vsctl add-port ovs-dp1 tap5
[root@azure-kvm ~]# ovs-vsctl add-port ovs-dp1 eth1

[root@azure-kvm ~]# ovs-vsctl add-port ovs-dp2 tap2
[root@azure-kvm ~]# ovs-vsctl add-port ovs-dp2 tap4
[root@azure-kvm ~]# ovs-vsctl add-port ovs-dp2 eth2
 

Note that we add eth1 and eth2 (the Data Ports) to the ovs-dp1 and ovs-dp2 OVS bridges respectively for external connections, but as far as the MPTCP NFV testing is concerned, the two OVS bridges and the TAP interfaces are sufficient, since the two VMs are connected internally via the OVS bridges. Verify the virtual and physical ports assigned to each OVS bridge:

[root@azure-kvm ~]# ovs-vsctl list-ports ovs-dp1
tap3
tap5
eth1

[root@azure-kvm ~]# ovs-vsctl list-ports ovs-dp2
tap2
tap4
eth2

On the KVM Hypervisor, use virsh edit mptcp-vmx to modify each VM’s definition file and bind the TAP interfaces to the VMs’ Ethernet interfaces.

virsh edit mptcp-vm1

[derekc@azure-kvm ~]$ virsh edit mptcp-vm1
<< skip >>
<interface type='ethernet'>
<mac address='52:54:00:ea:01:01'/>
<target dev='tap3'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</interface>
<interface type='ethernet'>
<mac address='52:54:00:ea:01:02'/>
<target dev='tap2'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
</interface>

 

virsh edit mptcp-vm2

[derekc@azure-kvm ~]$ virsh edit mptcp-vm2  
<< skip >>
<interface type='ethernet'>
<mac address='52:54:00:ea:02:01'/>
<target dev='tap5'/>
<model type='virtio'/>
<alias name='net5'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</interface>
<interface type='ethernet'>
<mac address='52:54:00:ea:02:02'/>
<target dev='tap4'/>
<model type='virtio'/>
<alias name='net5'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
</interface>
<interface type='ethernet'>
<mac address='52:54:00:ea:02:03'/>
<target dev='tap6'/>
<model type='virtio'/>
<alias name='net6'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
</interface>

 

Make sure the following VM XML definitions are correct (a verification sketch follows this list):

  • the TAP interface assignments for each VM match the TAP interfaces shown in the KVM Hypervisor network block diagram
  • the mac address of each Ethernet interface definition matches the VM’s Ethernet HWADDR in /etc/sysconfig/network-scripts/ifcfg-ethx
  • unassigned slot numbers are used in each Ethernet interface definition
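
Once the VMs are restarted with the new definitions, a quick way to double-check the first two points is virsh domiflist, which lists each domain's interfaces together with their target TAP device and MAC address:

# List interface type, target TAP device, model and MAC address for each VM
virsh domiflist mptcp-vm1
virsh domiflist mptcp-vm2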

 

Verify that the two VMs can reach each other via the OVS bridges, ovs-dp1 and ovs-dp2 on the KVM Hypervisor.

[root@mptcp-vm2 ~]# ping 10.10.10.1
PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data.
64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=0.885 ms
64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=0.532 ms

[root@mptcp-vm2 ~]# ping 10.10.10.9
PING 10.10.10.9 (10.10.10.9) 56(84) bytes of data.
64 bytes from 10.10.10.9: icmp_seq=1 ttl=64 time=0.896 ms
64 bytes from 10.10.10.9: icmp_seq=2 ttl=64 time=0.491 ms

 

Step 3 – Update VMs with MPTCP Kernel

We have been calling the two VMs mptcp-vm1 and mptcp-vm2, but they are not yet MPTCP-ready. They are simply standard CentOS VMs created from the CentOS cloud-ready image, with only regular TCP support in their kernels. We now need to replace their kernels with the latest MPTCP-ready Linux kernel. Follow the URL below to set up yum.repos.d and install the latest stable MPTCP-ready Linux kernel, 4.14.70 at the time of writing:

http://multipath-tcp.org/pmwiki.php/Users/RPM
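
The exact repository definition and kernel package name are given on that page; the following is only a rough sketch of what the procedure looks like on each VM. The baseurl is a placeholder to be copied from the page, and linux-mptcp is the package name the project documented at the time of writing:

# Add the multipath-tcp.org yum repository (copy the real baseurl from the page above)
cat > /etc/yum.repos.d/mptcp.repo <<'EOF'
[mptcp]
name=MultiPath TCP kernel
baseurl=<baseurl from the multipath-tcp.org RPM page>
enabled=1
gpgcheck=0
EOF

# Install the MPTCP-enabled kernel
yum install linux-mptcp

# Optionally make the new kernel the default GRUB entry (0 assumes it is the first menu entry)
grub2-set-default 0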

If everything went fine, you can now reboot the VMs. When a VM boots, select the MPTCP kernel in the GRUB menu:

[Figure: GRUB boot menu with the MPTCP kernel entry selected]

 

Once the VMs have booted with the MPTCP kernel, verify their MPTCP readiness by running the following command on each VM:

[centos@mptcp-vm1 ~]# curl http://www.multipath-tcp.org
Yay, you are MPTCP-capable! You can now rest in peace.

[centos@mptcp-vm2 ~]$ curl http://www.multipath-tcp.org
Yay, you are MPTCP-capable! You can now rest in peace.
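
Another way to confirm that the MPTCP kernel is active is to inspect its sysctl switches. This is a sketch assuming the net.mptcp.* sysctl names used by the multipath-tcp.org kernel fork; the exact list can vary with the kernel version:

# Show the running kernel version and the MPTCP-related sysctl settings
uname -r
sysctl -a 2>/dev/null | grep '^net\.mptcp'

# MPTCP is toggled globally with net.mptcp.mptcp_enabled (1 = enabled)
sysctl net.mptcp.mptcp_enabled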

 

Step 4 – MPTCP VM Integration and NFV Testing

 

We have just created two MPTCP-ready VMs whose Ethernet interfaces are connected together via the OVS bridges. On the two VMs, install iperf3 and tcpdump by running the following commands:

[root@mptcp-vm2 ~]# yum install iperf3
[root@mptcp-vm2 ~]# yum install tcpdump

On mptcp-vm1, add a static route to define the Primary MPTCP Path to the MPTCP NFV service represented by mptcp-vm2:

[root@mptcp-vm1 ~]# route add -net 192.0.2.0/24 gw 10.10.10.2

[root@mptcp-vm1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.122.1   0.0.0.0         UG    0      0        0 eth0
10.10.10.0      0.0.0.0         255.255.255.248 U     0      0        0 eth1
10.10.10.8      0.0.0.0         255.255.255.248 U     0      0        0 eth2
192.0.2.0       10.10.10.2      255.255.255.0   UG    0      0        0 eth1
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0

 Verify that mptcp-vm1 can reach the Internet Gateway 192.0.2.2 on mptcp-vm2:

[root@mptcp-vm1 ~]# ping 192.0.2.2 
PING 192.0.2.2 (192.0.2.2) 56(84) bytes of data.
64 bytes from 192.0.2.2: icmp_seq=1 ttl=64 time=2.38 ms
64 bytes from 192.0.2.2: icmp_seq=2 ttl=64 time=0.587 ms
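
The route added above with route add is not persistent across reboots. To keep the Primary Path route on mptcp-vm1, a static route file in the usual CentOS location can be used; a minimal sketch:

# /etc/sysconfig/network-scripts/route-eth1 on mptcp-vm1 (applied by ifup/network restart)
192.0.2.0/24 via 10.10.10.2 dev eth1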

 

If you have reached this step, congratulations, your MPTCP setup is ready for the final MPTCP NFV tests. On the KVM Hypervisor, run ssh centos@192.168.122.164 to open three SSH sessions to mptcp-vm2 and run one of the following commands in each session:

  • tcpdump -i eth1 -s 65535 -w mptcp-vm2_eth1.pcapng
  • tcpdump -i eth2 -s 65535 -w mptcp-vm2_eth2.pcapng
  • iperf3 -s

On mptcp-vm1, start iperf3 to send TCP traffic to the Internet Gateway 192.0.2.2 on mptcp-vm2:

  • iperf3 -c 192.0.2.2 -t 5
[root@mptcp-vm1 ~]# iperf3 -c 192.0.2.2 -t 5
Connecting to host 192.0.2.2, port 5201
[ 4] local 10.10.10.1 port 55922 connected to 192.0.2.2 port 5201
[ ID] Interval         Transfer     Bandwidth       Retr  Cwnd
[ 4]   0.00-1.01  sec   106 MBytes   884 Mbits/sec    0   14.1 KBytes
[ 4]   1.01-2.03  sec  94.3 MBytes   778 Mbits/sec    0   14.1 KBytes
[ 4]   2.03-3.00  sec  85.8 MBytes   738 Mbits/sec    0   14.1 KBytes
[ 4]   3.00-4.01  sec   106 MBytes   885 Mbits/sec    0   14.1 KBytes
[ 4]   4.01-5.01  sec   100 MBytes   844 Mbits/sec    0   14.1 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval         Transfer     Bandwidth       Retr
[ 4]   0.00-5.01  sec   493 MBytes   826 Mbits/sec    0   sender
[ 4]   0.00-5.01  sec   493 MBytes   826 Mbits/sec        receiver

 

The following shows the iperf3 TCP traffic test results on mptcp-vm2:

[root@mptcp-vm2 network-scripts]# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.10.10.1, port 55920
[ 5] local 192.0.2.2 port 5201 connected to 10.10.10.1 port 55922
[ ID] Interval         Transfer     Bandwidth
[ 5]   0.00-1.00  sec   105 MBytes   880 Mbits/sec
[ 5]   1.00-2.00  sec  94.5 MBytes   793 Mbits/sec
[ 5]   2.00-3.00  sec  86.6 MBytes   726 Mbits/sec
[ 5]   3.00-4.00  sec   105 MBytes   886 Mbits/sec
[ 5]   4.00-5.00  sec   101 MBytes   849 Mbits/sec
[ 5]   5.00-5.01  sec  0.00 Bytes    0.00 bits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval         Transfer     Bandwidth
[ 5]   0.00-5.01  sec  0.00 Bytes    0.00 bits/sec   sender
[ 5]   0.00-5.01  sec   493 MBytes   826 Mbits/sec   receiver

 

Let us review the MPTCP packets captured during this iperf3 TCP test. mptcp-vm2’s eth1, which is the Primary MPTCP Path, advertises to mptcp-vm1’s eth1 via the MPTCP ADD_ADDR option that it has multiple MPTCP-capable interfaces, 10.10.10.2 (Frame 5) and 10.10.10.10 (Frame 6):

[Figure: mptcp-vm2 eth1 capture, frame 5 showing ADD_ADDR advertising 10.10.10.2]

[Figure: mptcp-vm2 eth1 capture, frame 6 showing ADD_ADDR advertising 10.10.10.10]

 

 

mptcp-vm1 then uses the MPTCP Join Connection (MP_JOIN) option to start MPTCP subflows on the Secondary MPTCP Path over its eth2 interface. mptcp-vm2 receives MPTCP subflows over both its Primary (eth1) and Secondary (eth2) MPTCP Paths and reassembles them back into regular TCP traffic at its Internet Gateway, 192.0.2.2.

[Figure: mptcp-vm2 eth2 capture, frame 2 showing the MP_JOIN (Join Connection) option]
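
If you prefer to inspect the captures from the command line instead of the Wireshark GUI, tshark can filter on the MPTCP option subtypes. This is a sketch assuming Wireshark's tcp.options.mptcp.subtype display filter field (0 is MP_CAPABLE, 1 is MP_JOIN, 3 is ADD_ADDR):

# ADD_ADDR advertisements on the Primary MPTCP Path capture
tshark -r mptcp-vm2_eth1.pcapng -Y 'tcp.options.mptcp.subtype == 3'

# MP_JOIN handshakes starting the subflow on the Secondary MPTCP Path capture
tshark -r mptcp-vm2_eth2.pcapng -Y 'tcp.options.mptcp.subtype == 1'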

 

Conclusion

We have just shown how to assemble and integrate MPTCP VMs for MPTCP NFV testing. MPTCP shields applications (e.g., iperf3 in this example) from the details of using multiple network links to increase the aggregated TCP throughput between two devices. mptcp-vm2 represents a scaled-down version of a HAG; a production HAG deployment in a carrier would comprise multiple interconnected high-performance x86 servers with load-balancer VMs. Nevertheless, the basic operation of a HAG with MPTCP has been demonstrated in this article.
