Network Working Group                       Sudhin Jacob
Internet Draft                              Praveen Ananthasankaran
Intended Status: Informational              Juniper Networks
Expires: November 20, 2016                  May 20, 2016
                                                  

                Benchmarking of Y1731 Performance Monitoring
                                draft-jacpra-bmwg-pmtest-01  
                                

                                
Abstract

This draft defines methodologies for benchmarking Y.1731 performance
monitoring on a DUT, including the calculation of near-end and
far-end data. Measurements are taken both with a pre-defined COS
profile and without COS in the network. The tests include impairment
tests, control-plane failover tests, and soak tests.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on November 20, 2016.

Copyright Notice

Copyright (c) 2016 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with
respect to this document. Code Components extracted from this
document must include Simplified BSD License text as described in
Section 4.e of the Trust Legal Provisions and are provided without
warranty as described in the Simplified BSD License.

Expires November 20, 2016                              [Page 1]

Table of Contents

1. Introduction....................................3

2. Terminologies...................................4

3. Test Topology...................................5

4. Y.1731 Two-way Delay Measurement Test Procedure.6

5. Loss Measurement without COS Test Procedure.....9

6. Loss Measurement with COS Test Procedure.......11

7. Synthetic Loss Measurement Test Procedure......13

8. Acknowledgements...............................15

9. Security Considerations........................15

10. IANA Considerations...........................15

Expires November 20, 2016                              [Page 2]

1. Introduction

Performance monitoring is explained in ITU Y1731.This document defines
the methodologies for benchmarking performance of Y1731 over a point to 
point service. Performance Monitoring has been implemented with
many varying designs in order to achieve their intended network functionality.
The scope of this document is to define methodologies for benchmarking Y1731
performance measurement. The following protocols under Y.1731 will be benchmarked.
 
 
1.      Two-way delay measurement 
2.      One-way delay measurement
3.      Loss measurement
4.      Synthetic loss measurement

Expires November 20, 2016                              [Page 3]

2. Terminologies

PM          Performance Monitoring

In-profile  Traffic within the CIR, termed green packets.

Out-profile Traffic within the EIR, termed yellow/amber packets.

LMM         Loss Measurement Message

LMR         Loss Measurement Reply

DMM         Delay Measurement Message

DMR         Delay Measurement Reply

P Router    Provider Router

PE Router   Provider Edge Router

CE Router   Customer Edge Router

DUT         Device Under Test

CCM         Continuity Check Message

Expires November 20, 2016                              [Page 4]
 

3. Test Topology

         | Traffic Generator
+----------+
|          |
|  PE2     |
|          |
+----------+
    |
    |
+----------+
|          |
|  Core    |
|  router  |
+----------+
   |
   |
+----------+
|          |
|   DUT    |
|    PE1   |
+----------+
     |
     |--- Traffic Generator

3.1. Network

The benchmarking topology consists of 3 routers and 2 traffic
generators. The DUT is PE1, which is connected to the CE side. The
core router is the P router shown in the topology. A Layer 2
(point-to-point) service runs from PE1 to PE2, and the
performance-monitoring loss and delay measurements run on top of
that service. PE1 acts as the DUT.

4. Y.1731 Two-way Delay Measurement Test procedure

4.1. Basic Testing Objective

Measure the round-trip delay of the network under different
conditions of traffic load.

4.2. Test Procedure

Configure a Layer 2 point-to-point service between PE1 and PE2.
Configure Y.1731 two-way delay measurement over the service. Observe
the delay measurement under the following traffic conditions in the
network:

a.      Send 80% of Line-rate traffic with different priorities
b.      Send 40% of Line-rate traffic with different priorities
c.      Without any line traffic

The results of all three conditions above are noted and correlated.

4.3. Test Measurement

The following factors need to be measured to benchmark the result:
1.      The average two-way delay
2.      The average two-way delay variation
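The two factors above can be post-processed from the DMM/DMR timestamps roughly as in this minimal sketch. Function names are illustrative, and the delay-variation definition (mean absolute difference of consecutive samples) is one common choice rather than anything mandated by Y.1731:

```python
# Minimal sketch of post-processing Y.1731 two-way delay samples.
# Function names are illustrative, not from any specific tool.

def two_way_delay(tx_f, rx_f, tx_b, rx_b):
    """Two-way frame delay from the four DMM/DMR timestamps.

    tx_f: TxTimeStampf, DMM transmit time at the initiator
    rx_f: RxTimeStampf, DMM receive time at the responder
    tx_b: TxTimeStampb, DMR transmit time at the responder
    rx_b: DMR receive time back at the initiator

    Subtracting the responder residence time (tx_b - rx_f) means the
    two clocks need not be synchronized, unlike one-way measurement.
    """
    return (rx_b - tx_f) - (tx_b - rx_f)

def average_delay(delays):
    """Average two-way delay over a set of samples."""
    return sum(delays) / len(delays)

def average_delay_variation(delays):
    """Mean absolute difference between consecutive delay samples."""
    diffs = [abs(b - a) for a, b in zip(delays, delays[1:])]
    return sum(diffs) / len(diffs)
```

Correlating the two averages across the 0%, 40%, and 80% load runs gives the benchmark data described above.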

4.4. Impairment

This is to benchmark two-way delay measurement even when both data and PDUs 
are dropped in the network using the impairment tool. 

4.5. Soak

  Bidirectional traffic is sent over the service for 24 to 48 hours.
  After the stipulated time, there must not be any change in the
  performance-monitoring behavior of the network.
  
  
4.6. Reliability

This is to verify that the statistics do not show any drastic result
or anomaly while running over a period of time.

4.7.  Result
^
|
+--------------+  Delay Variation
|
+---------------->    
 Traffic (0 to 100 percent line rate)
 

Y.1731 One-way Delay Measurement Test Procedure

4.8. Basic Testing Objective

This test measures one-way delay. One-way delay, as defined in
Y.1731, is the time a packet takes from its origin at one end-point
until it reaches the other end of the network. Measuring this
requires the clocks to be accurately synchronized, because the delay
is computed from the times at two different end-points.
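The dependence on clock synchronization can be seen directly in the arithmetic: a one-way delay sample is the difference of two timestamps taken by two different clocks. The sketch below is illustrative; the clock_offset parameter is an assumption used to model residual offset, not a field from the standard:

```python
# Sketch of the one-way delay computation. Unlike the two-way case,
# the two timestamps come from different clocks, so any residual
# clock offset appears directly in the measured delay.

def one_way_delay(tx_timestamp, rx_timestamp, clock_offset=0.0):
    """One-way frame delay: 1DM receive time minus its transmit
    timestamp, corrected by the known receiver clock offset
    (ideally zero when the end-points are synchronized)."""
    return (rx_timestamp - tx_timestamp) - clock_offset
```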

4.9. Test Procedure

Configure a Layer 2 point-to-point service between PE1 and PE2.
Configure Y.1731 one-way delay measurement over the service.
Observe the delay measurement under the following traffic conditions
in the network:

a.      Send 80% of Line-rate traffic with different priorities
b.      Send 40% of Line-rate traffic with different priorities
c.      Without any line traffic

The results of all three conditions above are noted and correlated.

4.10. Test Measurement

The following factors need to be measured to benchmark the result:

1.      The average one-way delay
2.      The average one-way delay variation

4.11. Impairment

This is to benchmark one-way delay measurement even when both data
and PDUs are dropped in the network using the impairment tool. 

4.12. Soak

Bidirectional traffic is sent over the service for 24 to 48 hours.
After the stipulated time, there must not be any change in the
performance-monitoring behavior of the network.

4.13. Reliability

This is to verify that the statistics do not show any drastic result
or anomaly while running over a period of time.

4.14. Result

^
|
+--------------+  Delay Variation
|
+---------------->
 Traffic (0 to 100 percent line rate)
    

5. Loss measurement without COS Test Procedure

5.1. Basic Testing Objective

This test defines the methodology for benchmarking data loss in the
network on real customer traffic. Y.1731 indicates that only
in-profile (green) packets are considered for loss measurement. For
this, the testing needs to be done in multiple environments where:

a.      All data packets from the traffic generator are sent with a
single 802.1p priority and the network does not have a COS profile
defined.

b.      All data packets from the traffic generator are sent with
802.1p priority values from 0 to 7 and the network does not have a
COS profile defined.

The objective is to benchmark the protocol behavior under different
networking conditions and correlate the data. The objective is not
to test the actual functioning of Y.1731 loss measurement. Since no
COS profile is defined, all packets must be recorded as green, and
the loss measurement must count only in-profile packets.
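The near-end/far-end split that this test exercises comes from the four LM counters carried in LMM/LMR frames. The arithmetic can be sketched as below; the counter names follow Y.1731, but this is an illustration only (32-bit counter wrap, for example, is ignored):

```python
# Illustrative sketch of Y.1731 loss-measurement counter arithmetic.
# curr and prev are the four LM counters sampled at two consecutive
# LMR receptions (times tc and tp); counter wrap handling is omitted.

def frame_loss(curr, prev):
    """Return (near_end_loss, far_end_loss) between two LM samples.

    Far-end loss: frames the local MEP sent (TxFCf) minus frames the
    peer reported receiving (RxFCf).
    Near-end loss: frames the peer sent (TxFCb) minus frames the
    local MEP received (RxFCl).
    """
    far_end = (curr["TxFCf"] - prev["TxFCf"]) - (curr["RxFCf"] - prev["RxFCf"])
    near_end = (curr["TxFCb"] - prev["TxFCb"]) - (curr["RxFCl"] - prev["RxFCl"])
    return near_end, far_end
```

For example, dropping half of the frames in one direction shows up only in that direction's loss figure, which is the kind of result tabulated later in this section.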
 

5.2. Test Procedure

Configure a Layer 2 point-to-point service between PE1 and PE2.
Configure Y.1731 loss measurement over the service. Observe the loss
measurement under the following traffic conditions in the network:

a.      Send 80% of line-rate traffic with different priorities
b.      Send 40% of line-rate traffic with different priorities
c.      Without any line traffic

The results of all three conditions above are noted and correlated.

5.3. Test Measurement

The factor that needs to be considered is the acceptable absolute
loss for the given network.
 
5.4. Impairment

This is to benchmark loss measurement even when both data and PDUs
 are dropped in the network using the impairment tool. 
 
5.5. Soak

Bidirectional traffic is sent over the service for 24 to 48 hours.
After the stipulated time, there must not be any change in the
performance-monitoring behavior of the network.

5.6. Reliability

This is to verify that the statistics do not show any drastic result
or anomaly while running over a period of time.

5.7. Result

+-----------------+----------------+
| Traffic sent    |Loss measurement|
| over the service|(without COS)   |
| for bi direction|                |
+-----------------+----------------+
| 7 Streams at    | Near End = 100%|
| 100% line rate  | Far End = 100% |
| with priority   |                |
| from 0 to 7     |                |
+-----------------+----------------+
| Dropping 50%    | Near End   50% |
| of line rate    | Far end   100% |
| at near end.    | Near End loss  |
|                 | observed   50% |
+-----------------+----------------+
| Dropping 50%    | Near End  100% |
| of line rate    | Far end    50% |
| at far end.     | Far End loss   |
|                 | observed   50% |
+-----------------+----------------+

6. Loss measurement with COS Test Procedure

6.1. Basic Testing Objective

This test defines the methodology for benchmarking data loss in the
network on real customer traffic. Y.1731 indicates that only
in-profile (green) packets are considered for loss measurement. For
this, the testing needs to be done in multiple environments where:

a.      All data packets from the traffic generator are sent with a
single 802.1p priority and the network has a pre-defined COS
profile.

b.      All data packets from the traffic generator are sent with
802.1p priority values from 0 to 7 and the network has a pre-defined
COS profile.

The COS profile defined needs to have two factors:

a.      COS needs to treat different 802.1p values as separate
classes of packets.
b.      Each class of packets needs to have a defined CIR for the
specific network.

The objective is to benchmark the protocol behavior under different
networking conditions and correlate the data. The objective is not
to test the actual functioning of Y.1731 loss measurement. The loss
measurement must show in-profile packets for each COS level. Each
COS level must count only its own defined in-profile packets.
Packets termed out-of-profile by COS marking must not be counted.
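A per-CoS green count of the kind this test relies on can be sketched as follows. The simple per-interval cap used here is an assumption for illustration; real policers use token buckets (CIR/CBS), so this is deliberately simplified:

```python
# Hypothetical sketch of counting only in-profile (green) frames per
# 802.1p priority. cir_per_cos caps how many frames per priority are
# green in this measurement interval; this cap is a stand-in for a
# real token-bucket policer.

from collections import defaultdict

def green_counts(frame_priorities, cir_per_cos):
    """frame_priorities: iterable of 802.1p values, one per frame.
    cir_per_cos: {priority: max green frames this interval}.
    Frames beyond the cap are out-of-profile and are not counted."""
    seen = defaultdict(int)
    green = defaultdict(int)
    for prio in frame_priorities:
        seen[prio] += 1
        if seen[prio] <= cir_per_cos.get(prio, 0):
            green[prio] += 1
    return dict(green)
```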
 
6.2. Test Procedure

Configure a Layer 2 point-to-point service between PE1 and PE2.
Configure Y.1731 loss measurement over the service. Observe the loss
measurement under the following traffic conditions in the network:

a.      Send 80% of line-rate traffic with different priorities
b.      Send 40% of line-rate traffic with different priorities
c.      Without any line traffic

The results of all three conditions above are noted and correlated.

6.3. Test Measurement

The factor that needs to be considered is the acceptable absolute
loss for the given network.

6.4. Impairment

This is to benchmark loss measurement even when both data and PDUs are 
dropped in the network using the impairment tool. 

6.5. Soak

Bidirectional traffic is sent over the service for 24 to 48 hours.
After the stipulated time, there must not be any change in the
performance-monitoring behavior of the network.

6.6. Reliability

This is to verify that the statistics do not show any drastic result
or anomaly while running over a period of time.

6.7. Result

+-----------------+----------------+
| Traffic sent    |Loss measurement|
| over the service|(with COS)      |
| for bi direction|                |
+-----------------+----------------+
| 7 Streams at    | Near End = 100%|
| 100% line rate  | Far End = 100% |
| with priority   |                |
| from 0 to 7     |                |
+-----------------+----------------+
| Dropping 50%    | Near End   50% |
| of line rate    | Far end   100% |
| at near end     | Near End loss  |
| for priority    | observed   50% |
| marked 0        | (priority 0)   |
+-----------------+----------------+
| Dropping 50%    | Near End  100% |
| of line rate    | Far end    50% |
| at far end for  | Far End loss   |
| priority 0      | observed   50% |
|                 | (priority 0)   |
+-----------------+----------------+

7. Synthetic Loss Measurement Test Procedure

7.1. Basic Testing Objective

This test defines the methodology for benchmarking synthetic loss in
the network. The testing needs to be done in multiple environments
where:

a.      All data packets from the traffic generator are sent with a
single 802.1p priority and the network does not have a COS profile
defined. The synthetic loss measurement uses the same 802.1p
priority as the traffic.

b.      All data packets from the traffic generator are sent with a
single 802.1p priority and the network has a pre-defined COS
profile. The synthetic loss measurement uses the same 802.1p
priority as the traffic.

c.      All data packets from the traffic generator are sent with
802.1p priority values from 0 to 7 and the network does not have a
COS profile defined. The synthetic loss measurement uses the same
802.1p priority as the traffic. Hence 8 sessions are tested in
parallel.

d.      All data packets from the traffic generator are sent with
802.1p priority values from 0 to 7 and the network has a pre-defined
COS profile. The synthetic loss measurement uses the same 802.1p
priority as the traffic. Hence 8 sessions are tested in parallel.

The COS profile defined needs to have two factors:

a.      COS needs to treat different 802.1p values as separate
classes of packets.
b.      Each class of packets needs to have a defined CIR for the
specific network.

The objective is to benchmark the protocol behavior under different
networking conditions and correlate the data. The objective is not
to test the actual functioning of Y.1731 synthetic loss measurement.
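Because SLM derives loss from counts of synthetic test frames rather than customer traffic, the per-session frame loss ratio reduces to simple counter arithmetic. The sketch below uses illustrative names and runs one session per 802.1p priority, as in the eight-sessions-in-parallel cases above:

```python
# Illustrative sketch of synthetic loss measurement post-processing:
# frame loss ratio per SLM session, one session per 802.1p priority.

def synthetic_loss_ratio(tx_count, rx_count):
    """FLR = lost synthetic frames / transmitted synthetic frames."""
    if tx_count == 0:
        return 0.0
    return (tx_count - rx_count) / tx_count

def per_session_flr(sessions):
    """sessions: {priority: (tx_frames, rx_frames)} -> {priority: FLR}."""
    return {prio: synthetic_loss_ratio(tx, rx)
            for prio, (tx, rx) in sessions.items()}
```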
 
7.2. Test Procedure

Configure a Layer 2 point-to-point service between PE1 and PE2.
Configure Y.1731 synthetic loss measurement over the service.
Observe the synthetic loss measurement under the following traffic
conditions in the network:

a.      Send 80% of line-rate traffic with different priorities
b.      Send 40% of line-rate traffic with different priorities
c.      Without any line traffic

The results of all three conditions above are noted and correlated.

7.3. Test Measurement

The factor that needs to be considered is the acceptable absolute
loss for the given network.

7.4. Impairment

This is to benchmark synthetic loss measurement even when both data
and PDUs are dropped in the network using the impairment tool.
 
7.5. Soak

Bidirectional traffic is sent over the service for 24 to 48 hours.
After the stipulated time, there must not be any change in the
performance-monitoring behavior of the network.

7.6. Reliability

This is to verify that the statistics do not show any drastic result
or anomaly while running over a period of time.

8.  Acknowledgements

We would like to thank Al Morton of AT&T for his support and
encouragement. We would also like to thank Giuseppe Fioccola of
Telecom Italia for reviewing our draft and commenting on it.

   
9. Security Considerations

NA

10. IANA Considerations

NA
 
Appendix A.  Appendix

Authors' Addresses

Praveen Ananthasankaran
Juniper Networks
Bangalore
        
Email: panantha@juniper.net

Sudhin Jacob
Juniper Networks
Bangalore       

Email: sjacob@juniper.net
       sudhinjacob@rediffmail.com