End-to-End Network Service QoS – Part 2

In Part 1 of this article, I discussed how to design access QoS with the sap-ingress and sap-egress QoS constructs of Nokia’s Service Router for a coast-to-coast ePipe MPLS network service carrying triple-play traffic. I showed how to classify the triple-play traffic at the ePipe’s sap-ingress and map it onto different Forwarding Classes (FCs) for the Priority Scheduler. Various Nokia Service Router QoS constructs such as ip-criteria, queue, FC, CIR, and PIR were used in sap-ingress 50 and sap-egress 50 according to the QoS requirements of the ePipe’s triple-play traffic. That article covered the basic QoS processes of Classification, Queuing, and Scheduling. Due to the length of the article, I left the final QoS process, Marking/Remarking, to this Part 2. In addition to the Priority Scheduler covered in Part 1, we will discuss another popular traffic scheduler, the Hierarchical Scheduler, and compare the uses of the two.

Again, I will use Nokia’s Service Router to illustrate the Marking/Remarking QoS process and the Hierarchical Scheduler to complete the end-to-end network service QoS design for our ePipe example. Since all the QoS constructs covered in these two articles are very basic, readers should be able to adapt this end-to-end network service QoS design example to routers from other vendors.

 

Hierarchical Scheduler

The Priority Scheduler introduced in Part 1 of this article is the most basic traffic scheduler: scheduling priority is based on the priority of the FCs, followed by CIR vs. PIR. It does not offer any bandwidth sharing among different traffic types or applications. For example, if there is no voice or video traffic at a particular moment, the combined 5Mbps of idle voice and video bandwidth cannot be used by data traffic, even if needed. The Hierarchical Scheduler supports Hierarchical QoS (HQoS) to offer bandwidth sharing among different traffic types.

Let us modify the triple-play traffic QoS requirements of the ePipe network service in Part 1 of this article such that the combined 10Mbps triple-play traffic bandwidth can be shared as follows:

[Figure: Hierarchical scheduler – a tier-1 root scheduler capped at 10Mbps, shared by the Voice (Queue 6), Video (Queue 5), and Data (Queue 1) child queues]

The tier-1 root scheduler has a maximum bandwidth of 10Mbps that can be shared by its 3 children: Voice, Video, and Data. In Nokia’s Service Router, a Hierarchical Scheduler serves CIR traffic from the highest to the lowest cir-level. When all CIR traffic has been served (i.e., forwarded through the switching fabric toward the network egress port), the Hierarchical Scheduler offers the remaining bandwidth to PIR traffic, again from the highest to the lowest level (i.e., pir-level). For example, in a given second, if the voice traffic (highest cir-level, 5) has only 100Kbps of packets in Queue 6 awaiting transmission, the Hierarchical Scheduler serves this 100Kbps of voice traffic as In-Profile traffic (i.e., guaranteed traffic that will not be dropped under switch congestion). It then serves the video CIR traffic (next highest cir-level, 4) with the remaining 9.9Mbps of bandwidth. Say the video traffic has 4Mbps of packets in Queue 5 awaiting transmission; the Hierarchical Scheduler forwards 3.5Mbps (the video CIR) of it as In-Profile traffic. This leaves 6.4Mbps of bandwidth for PIR traffic. The video traffic still has 0.5Mbps of packets remaining in Queue 5, and this 0.5Mbps is served as Out-of-Profile traffic that can be subject to discard if there is any switch or link congestion. This leaves 5.9Mbps of bandwidth available for the Data traffic, which has the lowest pir-level.
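To summarize the two scheduling passes of this example:

  • CIR pass: 100Kbps of voice (cir-level 5) and 3.5Mbps of video (cir-level 4) are forwarded In-Profile, leaving 10 - 0.1 - 3.5 = 6.4Mbps
  • PIR pass: the remaining 0.5Mbps of video (level 4) is forwarded Out-of-Profile, leaving 6.4 - 0.5 = 5.9Mbps
  • The Data queue (lowest level) may then use up to the remaining 5.9Mbps, also as Out-of-Profile traffic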

Hierarchical Scheduler Policy

The following shows the configuration of the Hierarchical Scheduler to support the new bandwidth-sharing QoS requirements of the ePipe. The tier 1 scheduler, root, has 10Mbps of bandwidth to be shared by its 3 children, the triple-play traffic queues. Multi-tier hierarchical QoS (HQoS) is supported in Nokia’s Service Router for more refined HQoS operations, but for our new ePipe QoS requirements a single scheduling tier is enough.

A:PE1>config>qos# scheduler-policy "hQoS10" 
A:PE1>config>qos>scheduler-policy# info 
----------------------------------------------
            tier 1
                scheduler "root" create
                    rate 10000 cir 10000
                exit
            exit
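
For illustration only, a lower tier could be added under root, for example to aggregate several best-effort queues behind one intermediate scheduler. A minimal sketch, assuming the usual SR OS scheduler-policy syntax (the scheduler name "data-agg" and its 9Mbps rate are hypothetical, not part of our ePipe design):

            tier 1
                scheduler "root" create
                    rate 10000 cir 10000
                exit
            exit
            tier 2
                scheduler "data-agg" create
                    parent "root" level 1 cir-level 1
                    rate 9000
                exit
            exit

Queues would then name "data-agg" rather than "root" as their parent, and "data-agg" in turn competes for the root scheduler’s bandwidth at its configured levels.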

sap-ingress 80

A new sap-ingress 80 is created to support HQoS operations. Note the definition of parent “root” in each of the sap-ingress queues for bandwidth sharing. Traffic that matches neither ip-criteria entry falls into the default forwarding class, be, and is served by queue 1.

A:PE1>config>qos# sap-ingress 80
A:PE1>config>qos>sap-ingress# info 
----------------------------------------------
            description "HQoS Voice EF Q6, Video H2 Q5 & Data BE Q1"
            queue 1 create
                parent "root"
            exit
            queue 5 create
                parent "root" level 4 cir-level 4
                rate 4500 cir 3500
                mbs 22 kilobytes
                burst-limit 120000 bytes
            exit
            queue 6 create
                parent "root" level 5 cir-level 5
                rate 500 cir 500
            exit
            queue 11 multipoint create
            exit
            fc "ef" create
                queue 6
            exit
            fc "h2" create
                queue 5
            exit
            ip-criteria
                entry 10 create
                    match protocol udp
                        dst-port eq 1234
                    exit
                    action fc "ef" priority high
                exit
                entry 20 create
                    match protocol udp
                        dst-port eq 6789
                    exit              
                    action fc "h2"
                exit
            exit

sap-egress 80

A new sap-egress 80 is created to support HQoS operations as well. Again, the parent “root” is defined in each of the sap-egress queues for bandwidth sharing.

A:PE1>config>qos# sap-egress 80 
*A:PE1>config>qos>sap-egress# info 
----------------------------------------------
            description "HQoS Voice EF Q6, Video H2 Q5 & Data BE Q1"
            queue 1 create
                parent "root" cir-level 1
                rate 10000
            exit
            queue 5 create
                parent "root" level 4 cir-level 4
                rate 4500 cir 3500
                burst-limit 120000 bytes
            exit
            queue 6 create
                parent "root" level 5 cir-level 5
                rate 500 cir 500
            exit
            fc be create
                dot1p 0
            exit 
            fc ef create
                queue 6
                dot1p 7
            exit 
            fc h2 create
                queue 5
                dot1p 4
            exit 

We can now apply the newly created single-tier Hierarchical Scheduler policy (hQoS10), sap-ingress 80, and sap-egress 80 to replace the Priority Scheduler for our ePipe network service.

A:PE1>config>service# epipe 1 
A:PE1>config>service>epipe# info 
----------------------------------------------
            sap 2/1/10 create
                ingress
                    scheduler-policy "hQoS10"
                    qos 80
                exit
                egress
                    scheduler-policy "hQoS10"
                    qos 80
                exit
            exit
            spoke-sdp 2:100 create
                no shutdown
            exit
            no shutdown
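
Assuming the standard SR OS show command, the resulting queue-to-scheduler parenting can be inspected on the SAP once the policies are applied:

A:PE1# show qos scheduler-hierarchy sap 2/1/10

The output (not shown here) should list root at the top with queues 1, 5, and 6 as its children, in both the ingress and egress directions.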

 

HQoS Verification

On the PC connected to PE1’s sap 2/1/10 of the ePipe, run the following iperf script to generate 10Mbps of traffic using the default UDP port 5001. Since the UDP port is neither 1234 nor 6789, this 10Mbps of traffic is treated as data traffic as far as the ip-criteria in sap-ingress 80 is concerned.

iperf -u -c 192.168.1.15 -b 10M -t 20 &

On the PC connected to PE2’s sap 2/1/10 at the other end of the ePipe, run the following iperf receive script.

iperf -u -s

The following shows the ingress queue statistics of PE1’s sap 2/1/10 after the 20-second iperf transmission.

A:PE1# clear service statistics sap 2/1/10 all 
A:PE1# show service id 1 sap 2/1/10 detail
<< skip >>
-------------------------------------------------------
Sap per Queue stats
-------------------------------------------------------
                        Packets                 Octets 

Ingress Queue 1 (Unicast) (Priority)
Off. HiPrio           : 0                       0                        
Off. LowPrio          : 17011                   25785772                 
Dro. HiPrio           : 0                       0                        
Dro. LowPrio          : 0                       0                   
For. InProf           : 0                       0                        
For. OutProf          : 17011                   25785772 
 
Ingress Queue 5 (Unicast) (Priority)
Off. HiPrio           : 0                       0                        
Off. LowPrio          : 0                       0                        
Dro. HiPrio           : 0                       0                        
Dro. LowPrio          : 0                       0                        
For. InProf           : 0                       0                        
For. OutProf          : 0                       0                        
 
Ingress Queue 6 (Unicast) (Priority)
Off. HiPrio           : 0                       0                        
Off. LowPrio          : 0                       0                        
Dro. HiPrio           : 0                       0                        
Dro. LowPrio          : 0                       0                        
For. InProf           : 0                       0                        
For. OutProf          : 0                       0

The following shows the queue statistics of PE2’s sap 2/1/10 at the other end of the ePipe. All of the 10Mbps offered iperf data traffic (i.e., 17011 packets) is received by PE2 over the ePipe.

B:PE2# clear service statistics sap 2/1/10 all 
B:PE2# show service id 1 sap 2/1/10 detail
<< skip >> 
--------------------------------------------------------
Sap per Queue stats
--------------------------------------------------------
                        Packets                 Octets 

Egress Queue 1
For. InProf           : 0                       0                        
For. OutProf          : 17011                   25785772 
Dro. InProf           : 0                       0                        
Dro. OutProf          : 0                       0                        
 
Egress Queue 5
For. InProf           : 0                       0                        
For. OutProf          : 0                       0                        
Dro. InProf           : 0                       0                        
Dro. OutProf          : 0                       0                        
 
Egress Queue 6
For. InProf           : 0                       0                        
For. OutProf          : 0                       0                        
Dro. InProf           : 0                       0                        
Dro. OutProf          : 0                       0

When there is no video or voice traffic, all of the 10Mbps offered data traffic can be delivered over the ePipe, as the Hierarchical Scheduler offers all of its 10Mbps bandwidth to Queue 1 for the data traffic.

The following shows the output of the iperf client script that generates the 10Mbps of data traffic over the ePipe service between PE1 and PE2. The iperf server report shows a bandwidth of 9.71Mbps with 0 packet loss.

[root@student1 ~]# ./tx_only_data 
------------------------------------------------------------
Client connecting to 192.168.1.15, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  122 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.9 port 54516 connected with 192.168.1.15 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-20.0 sec  23.8 MBytes  10.0 Mbits/sec
[  3] Sent 17008 datagrams
[  3] Server Report:
[  3]  0.0-20.6 sec  23.8 MBytes  9.71 Mbits/sec   0.043 ms    0/17007 (0%)
[  3]  0.0-20.6 sec  1 datagrams received out-of-order
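
To exercise the bandwidth sharing itself, voice and video streams could be offered at the same time as the data stream on the UDP ports matched by the ip-criteria entries. A sketch, assuming iperf’s -p option to select the destination port and a matching iperf server listening on each port (e.g., iperf -u -s -p 1234); these commands are illustrative, not from the lab:

iperf -u -c 192.168.1.15 -b 500K -t 20 -p 1234 &
iperf -u -c 192.168.1.15 -b 4M -t 20 -p 6789 &
iperf -u -c 192.168.1.15 -b 10M -t 20 &

With root capped at 10Mbps, the voice and video CIR traffic should be forwarded In-Profile first, and the data stream’s throughput should drop by roughly the bandwidth consumed by the other two streams.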

 

Network-Ingress and Network-Egress QoS Policies

In order to have a successful end-to-end QoS design for network services, besides the Access QoS policies such as sap-ingress and sap-egress that we have covered so far, we need to address the Network QoS policies, as they are in the datapath of the network services.

Let us revisit the end-to-end QoS network diagram from Part 1 of this article to illustrate how the Access and Network QoS policies together form the complete end-to-end network service QoS design.

[Figure: end-to-end QoS – Access QoS policies (sap-ingress/sap-egress) at the customer-facing SAPs and Network QoS policies (network-ingress/network-egress) along the PE1 – P – PE2 datapath]

All the Intra-Switch QoS constructs such as ip-criteria, queues, fc, cir, and pir are vendor-specific and affect traffic (e.g., forward, drop, delay) inside a single router only. Since network service QoS is end-to-end, we need to ensure that all interconnected routers, from the same or different vendors, can understand the QoS requirements of the network traffic. QoS Marking/Remarking can be used to “summarize” the QoS requirements of the traffic for Inter-Switch QoS design such as:

  • MPLS inter-router network link
    • MPLS EXP
  • IP inter-router network link
    • IP DSCP
  • Ethernet inter-router network link
    • Ethernet 802.1p

The following shows the default Forwarding-Class-to-network-QoS-marking policy (i.e., network 1) used in Nokia’s Service Routers:

network 1’s egress

A:PE1>config>qos# network 1 
*A:PE1>config>qos>network# info detail 
----------------------------------------------
            << skip >>
            egress
 	    << skip >>
                fc be
                    dscp-in-profile be
                    dscp-out-profile be
                    lsp-exp-in-profile 0
                    lsp-exp-out-profile 0
                    dot1p-in-profile 0
                    dot1p-out-profile 0
                    no de-mark
                    no port-redirect-group
                exit
                fc ef
                    dscp-in-profile ef
                    dscp-out-profile ef
                    lsp-exp-in-profile 5
                    lsp-exp-out-profile 5
                    dot1p-in-profile 5
                    dot1p-out-profile 5
                    no de-mark
                    no port-redirect-group
                exit
                fc h2
                    dscp-in-profile af41
                    dscp-out-profile af42
                    lsp-exp-in-profile 4
                    lsp-exp-out-profile 4
                    dot1p-in-profile 4
                    dot1p-out-profile 4
                    no de-mark
                    no port-redirect-group
                exit
                
            exit

From the default network-egress configuration of network 1, the following FC-to-network-QoS-marking mapping can be determined for our triple-play ePipe traffic when MPLS network links are used to interconnect PE1 – P – PE2:

  • Data
    • FC BE ⇒ MPLS EXP 0
  • Video
    • FC H2 ⇒ MPLS EXP 4
  • Voice
    • FC EF ⇒ MPLS EXP 5

network 1’s ingress

Similarly, the network-QoS-marking-to-FC mapping is performed in network 1’s ingress configuration, where the network MPLS EXP bits are mapped to the corresponding FCs inside the PE and P routers.

*A:PE1>config>qos# network 1 
*A:PE1>config>qos>network# info detail 
----------------------------------------------
            description "Default network QoS policy."
            scope template
            ingress
                default-action fc be profile out
                no ler-use-dscp
            << skip >>
                lsp-exp 0 fc be profile out
                lsp-exp 1 fc l2 profile in
                lsp-exp 2 fc af profile out
                lsp-exp 3 fc af profile in
                lsp-exp 4 fc h2 profile in
                lsp-exp 5 fc ef profile in
                lsp-exp 6 fc h1 profile in
                lsp-exp 7 fc nc profile in
	    << skip >>
            exit

Let us walk through an example to see how voice traffic is mapped from PE1 in Vancouver to PE2 in Toronto in terms of FC and network QoS Marking/Remarking. When voice traffic is offered to sap 2/1/10 of PE1, it is mapped to FC EF by sap-ingress 80’s ip-criteria. The voice traffic is then processed as the highest-priority traffic among the triple-play traffic on the ePipe inside PE1. It is marked as MPLS EXP 5 by network 1’s network-egress policy when it is sent from PE1 to the downstream P router. The voice traffic is again mapped to internal FC EF inside the P router, along with the other multi-terabit network traffic offered to that router. Subsequently, the voice traffic is again marked as MPLS EXP 5 and sent by the P router to PE2. PE2 recognizes the voice traffic as non-transit traffic and forwards it to its sap 2/1/10 with Ethernet dot1p 7 marking according to sap-egress 80, before the voice traffic is delivered to the Toronto office.
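
The hop-by-hop markings for the voice traffic can be summarized as follows:

  • PE1 sap-ingress 80: UDP dst-port 1234 ⇒ FC EF
  • PE1 network-egress (network 1): FC EF ⇒ MPLS EXP 5
  • P network-ingress (network 1): MPLS EXP 5 ⇒ FC EF
  • P network-egress (network 1): FC EF ⇒ MPLS EXP 5
  • PE2 sap-egress 80: FC EF ⇒ dot1p 7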

Re-mapping Network Markings

In the above ePipe example, we showed only one P router between the carrier’s Vancouver and Toronto Central Offices. In an actual deployment, the triple-play traffic will likely pass through multiple P routers inside the carrier’s core network. Carriers often have their own QoS marking standards for different traffic types, and these may not be compatible with a vendor’s default QoS marking scheme. Below is an example where a carrier has standardized on MPLS EXP 3 marking for video traffic. This differs from the default MPLS EXP 4 marking for H2 traffic used by the ePipe service for video traffic in Nokia’s default network 1 policy.

The following shows the new network QoS marking scheme of the carrier:

  • Data
    • FC BE – MPLS EXP 0
  • Video
    • FC H2 – MPLS EXP 3
  • Voice
    • FC EF – MPLS EXP 5

In order to support this new network QoS marking requirement, we will create a new network 20 QoS policy. First of all, create network 20 based on the default network 1 QoS policy so that we can focus only on the affected section of the QoS config without re-creating the whole policy unnecessarily.

*A:PE1>config>qos# copy network 1 20 
*A:PE1>config>qos# network 20 

Remap the QoS markings, FC H2 to MPLS EXP 3 and vice versa, in both the ingress and egress directions:

*A:PE1>config>qos>network# info 
----------------------------------------------
            description "Default network QoS policy."
            ingress
            << skip >>
                lsp-exp 0 fc be profile out
                lsp-exp 1 fc l2 profile in
                lsp-exp 2 fc af profile out
                lsp-exp 3 fc h2 profile in
                lsp-exp 4 fc h2 profile in
                lsp-exp 5 fc ef profile in
                lsp-exp 6 fc h1 profile in
                lsp-exp 7 fc nc profile in
            exit
            egress
                remarking
                fc h2
                    lsp-exp-in-profile 3
                    lsp-exp-out-profile 3
                exit
            exit

Apply the new network 20 QoS Marking/Remarking policy to the network-ingress and network-egress (i.e., the network interfaces) of all routers:

PE1

*A:PE1>config>router# info 
----------------------------------------------
        interface "toP"
            address 10.1.3.1/24
            port 2/1/3
            qos 20
            no shutdown
        exit

P

*A:P>config>router# info 
----------------------------------------------
        interface "system"
            address 10.10.10.3/32
            no shutdown
        exit
        interface "toPE1"
            address 10.1.3.3/24
            port 2/1/3
            qos 20
            no shutdown
        exit
        interface "toPE2"
            address 10.2.3.3/24
            port 2/1/4
            qos 20
            no shutdown
        exit

PE2

*A:PE2>config>router# info 
----------------------------------------------
        interface "toP"
            address 10.2.3.2/24
            port 2/1/4
            qos 20
            no shutdown
        exit
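
Assuming the standard SR OS show command, the contents of the new policy can be verified after it has been applied:

*A:P# show qos network 20 detail

The detail output should include the remapped lsp-exp 3 entries and, like other SR OS policy show commands, indicate where the policy is associated.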

The new network QoS Marking config now supports the carrier’s traffic QoS marking scheme.

Conclusion

Inter-Switch network QoS design is a complex subject. Although Nokia’s Service Router QoS constructs such as ip-criteria, sap-ingress, sap-egress, pir, cir, and network are used in both Part 1 and Part 2 of this article to illustrate the end-to-end network service QoS design, similar QoS constructs can be found in all carrier-grade routers.

We have just discussed the Hierarchical Scheduler for bandwidth sharing among the ePipe’s triple-play traffic. Either the Priority Scheduler shown in Part 1 of this article or the Hierarchical Scheduler discussed here can be used for the ePipe network service, depending on the needs of the service.

Network Marking/Remarking enables routers from different vendors to interoperate as far as traffic QoS is concerned. With both the Access and Network QoS policies in the PE and P routers, one can design satisfactory network services that accommodate many different traffic types and QoS requirements.
