Internet Engineering Task Force                                 T. Ahuja
Internet-Draft                                       Cisco Systems, Inc.
Intended status: Informational                              T. Alexander
Expires: July 5, 2008                                     VeriWave, Inc.
                                                              S. Bradner
                                                      Harvard University
                                                                S. Hooda
                                                     Cisco Systems, Inc.
                                                               J. Perser
                                                          VeriWave, Inc.
                                                                M. Sambi
                                                     Cisco Systems, Inc.
                                                        January 02, 2008


Benchmarking Methodology for Wireless LAN Switching Systems
draft-alexander-bmwg-wlan-switch-meth-01

Status of this Memo

By submitting this Internet-Draft, each author represents that any applicable patent or other IPR claims of which he or she is aware have been or will be disclosed, and any of which he or she becomes aware will be disclosed, in accordance with Section 6 of BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as “work in progress.”

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

This Internet-Draft will expire on July 5, 2008.

Abstract

This document provides a framework and methodology for performance testing and benchmarking of wireless LAN (WLAN) switches and controllers, including systems comprising groups of controllers and WTPs. This document defines and discusses a number of tests and associated test conditions that may be used to characterize the performance of such systems, and also supplies the methods used to calculate the expected results of these tests. Specific formats for reporting the results of the tests are also provided, where applicable. The tests described in this document extend the methodology defined for benchmarking network interconnect devices in RFC 2544, and LAN switches in RFC 2889, to WLAN switch controller systems. The methodology herein is to be used together with the companion terminology document.



Table of Contents

1.  Introduction

2.  Existing definitions and requirements

3.  General description and test setups
    3.1.  Tester functional model
    3.2.  Test setups
    3.3.  Configuration parameters
        3.3.1.  WTP setup
        3.3.2.  Service priority
        3.3.3.  Test conditions

4.  Interpreting and reporting test results

5.  Benchmarking tests
    5.1.  Data plane tests
        5.1.1.  Unicast throughput
        5.1.2.  Unicast maximum forwarding rate and frame loss ratio
        5.1.3.  Multicast forwarding rate
        5.1.4.  Latency and jitter
        5.1.5.  QoS differentiation
        5.1.6.  Power-save throughput
    5.2.  Control plane tests
        5.2.1.  Endstation roaming delay
        5.2.2.  Endstation roaming rate
        5.2.3.  Endstation association rate
        5.2.4.  Endstation capacity
        5.2.5.  WTP capacity
        5.2.6.  Reset recovery time
        5.2.7.  Failover recovery time

6.  Security Considerations

7.  IANA Considerations

8.  References
    8.1.  Normative References
    8.2.  Informative References

Appendix A.  Intended load computations
    A.1.  Calculating theoretical maximum media capacity
    A.2.  Calculating constant intended load
    A.3.  Calculating burst intended load

Authors' Addresses
Intellectual Property and Copyright Statements





1.  Introduction

Wireless LANs (WLANs) are deployed on a large scale in traditional enterprises, in commercial service offerings such as coffee shops, and in vertical applications such as inventory management. Large deployments of WLANs, however, introduce several issues: an increased administrative burden due to the use of IP-addressable Wireless Termination Points (WTPs - i.e., Access Points); the need to ensure consistency of configuration across all WTPs; the need to deal with the dynamic nature of the WLAN medium, and to combat interference; and the increased need for securing the network against unauthorized intrusion or access.

To address the above problems, vendors offer solutions that combine aspects of LAN switching, centralized control, and distributed wireless access in an architecture comprising a set of relatively simple Wireless Termination Points (WTPs) coupled to one or more Access Controllers (ACs). The use of centralized control and monitoring simplifies many of the management and security issues noted above, as the WTPs can be configured and controlled as a group by the ACs, security policies can be administered on a WLAN-wide basis, and the RF domain can be monitored and controlled from a central location.

Each vendor offering such a system needs a protocol between ACs and WTPs to support both centralized management and data transport functions. The general practice has been for vendors to use a proprietary protocol; however, the CAPWAP (Control and Provisioning of Wireless Access Points) protocol is being standardized by the IETF to provide a multi-vendor interoperable interface between WTPs and ACs. The CAPWAP protocol also defines a standardized WLAN architecture and mandatory functions (such as discovery) to enable a common functional model to be adopted across the vendor base.

The ACs may perform both control plane and data plane functions within a WLAN. It is therefore of significant interest to benchmark their performance, as they have a material impact on the performance and perceived end-user experience of WLANs built around them. ACs may be benchmarked either as stand-alone entities, or in conjunction with the WTPs to which they connect. When ACs are benchmarked in conjunction with WTPs, the CAPWAP architectural model is used as a reference.

This document defines and describes a test methodology that may be used by vendors and users of IEEE 802.11 Wireless LAN (WLAN) [802.11] switch controllers to measure and report performance characteristics of such devices. It extends the methodology that was originally defined for benchmarking network interconnect devices in RFC 2544 [RFC2544], and subsequently extended to other types of devices (such as LAN switching devices in RFC 2889 [RFC2889]), to cover IEEE 802.11 WLAN devices.

Note that this document does not specify RF-related tests, or performance benchmarks that pertain to the IEEE 802.11 link layer. Instead, this document focuses on datapath and control path related measurements performed above the link layer on ACs, or combinations of ACs and WTPs. For RF-related tests, link layer metrics and tests on individual WTPs, the reader is referred to the IEEE 802.11.2 draft Recommended Practice on Wireless Performance [802.11.2], which describes such tests.




2.  Existing definitions and requirements

RFC 2544, "Benchmarking Methodology for Network Interconnect Devices" [RFC2544] and RFC 2889, "Benchmarking Methodology for LAN Switching Devices" [RFC2889], provide useful background information and context, and SHOULD be reviewed before conducting tests based on this document. WLAN-specific terms and definitions in this document are described in Clauses 3 and 4 of the IEEE 802.11 standard [802.11].

For the sake of clarity and continuity, this document adopts the general template for benchmarking tests set out in Section 26 of RFC 2544.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].




3.  General description and test setups

A common set of test setup and measurement conditions is used across all of the tests described in this document. Exceptions to these conditions are noted, if necessary, in the descriptions of the individual tests.




3.1.  Tester functional model

For the purposes of this document, the tester is defined as a separate device that is used to transmit controlled test traffic to the physical ports of the device under test (DUT) or system under test (SUT), as well as to receive and measure test traffic from the physical ports of the DUT or SUT. The tester MUST NOT be a part of the DUT or SUT, nor can the DUT or SUT provide any portion of the reported test results.

The tester MUST transmit conformant traffic to the DUT or SUT during the tests described herein, and MUST follow the rules of the relevant protocol with respect to media access and frame exchanges. It MAY be configured to transmit non-conformant traffic for special purposes (e.g., for debug), but this is outside the scope of this document. The tester MUST support some means of distinguishing test traffic (either injected into or emitted by the DUT or SUT) from normal data, control and management frames that are generated by the DUT or SUT itself. The tester SHOULD further support means of unambiguously determining frame loss and frame duplication (e.g., by the use of sequence numbers), as well as time-stamping transmitted and received frames.

No constraints are placed by this document on the specific implementation of the tester or test system, provided that it is capable of measuring DUT or SUT responses to the required degree of accuracy, establishing the required test conditions at the physical interfaces of the DUT or SUT, and generating test traffic with the relevant parameters. These parameters include frame sizes, offered load, burst sizes and inter-burst gap, signal output level, RTS/CTS setting, and fragmentation setting.




3.2.  Test setups

The general test setup comprises a DUT or SUT and a tester, as shown in the figure below. The tester has at least one wired (Ethernet) interface to the DUT or SUT; the other interface(s) may be either wired or wireless (i.e., Ethernet or 802.11). In most cases, the DUT or SUT has multiple interfaces that must be driven, and so the tester likewise will have multiple wired and/or wireless interfaces.




          802.11-Side Interfaces      Ethernet-Side Interfaces
                    +---------------------------+
          +-------->|                           |<----------+
          | .......>|         DUT or SUT        |<......... |
          | : .....>|                           |<......  : |
          | : :     +---------------------------+      :  : |
          | : :                                        :  : |
          | : :         +-------------------+          :  : |
          | : :........>|                   |<.........:  : |
          | :..........>|       Tester      |<............: |
          +------------>|                   |<--------------+
                        +-------------------+
 Figure 1 

The DUT or SUT may take one of two forms, leading to two variations on the above general test setup. One arrangement has only one or more ACs, but no WTPs; in this case, the tester must emulate the WTPs as well as the endstations behind them, and present the aggregate to the AC(s). In the second arrangement, the device is actually a SUT comprising both WTPs and ACs (treated as a single system); in this case, the tester emulates only the endstations and presents them to the WTPs.

The following figure shows the first setup (DUT), with a single tester that interfaces to a collection of one or more ACs. Note that the ACs may be physically distinct (i.e., implemented in separate chassis) or logically distinct (i.e., implemented in a single chassis), but in either case are expected to form a logical whole for the purposes of management, addressing, endstation association, and traffic handling.




                                  +-----+
                 Set of 1 or    +-----+ |<-----------+
                 more ACs    +------+ |-+            |
           +---------------->|  AC  |-+              |
           |                 +------+                |
           |                                         |
           |                                         |
           |        +---------------------------+    |
           |        |                           |    |
           +------->|          Tester           |<---+
                    |                           |
                    +---------------------------+
 Figure 2 

The figure below shows the second setup (SUT), again with a single tester that interfaces to a set of WTPs connected to one or more ACs. As before, the ACs may be physically or logically distinct.




                +------+
         +----->| WTP1 |----+
         |      +------+    |
         |                  |             +------+
         |                  |           +------+ |<----+
         |      +------+    |         +------+ |-+     |
         | +--->| WTP2 |----|---------|  AC  |-+       |
         | |    +------+    |         +------+         |
         | |        .       |                          |
         | |        .       |                          |
         | |    +------+    |                          |
         | | +->| WTPn |----+                          |
         | | |  +------+                               |
         | | |                                         |
         | | |                                         |
         | | |      +---------------------------+      |
         | | +----->|                           |      |
         | +------->|          Tester           |<-----+
         +--------->|                           |
                    +---------------------------+
 Figure 3 

A logical diagram of the test setup usually entails multiple VLANs configured on the Ethernet side of the DUT or SUT, and multiple WLANs configured on the wireless side. This is represented in the figure below, which shows a SUT comprising several WTPs interfaced to an AC, with three WLANs on the wireless side that are logically connected to three VLANs on the Ethernet side. Note that a one-to-one correspondence of WLANs to VLANs is usual but not required.




        WLAN1,ES1---+-----+
        WLAN2,ES2---| WTP |--+                +-----+
        WLAN3,ES3---+-----+  |             +--| H1  |--VLAN1
                             |             |  +-----+
                             |             |
        WLAN1,ES1---+-----+  |   +-----+   |  +-----+
        WLAN2,ES2---| WTP |--+---| AC  |---+--| H2  |--VLAN2
        WLAN3,ES3---+-----+  |   +-----+   |  +-----+
                             |             |
                             |             |
        WLAN1,ES1---+-----+  |             |  +-----+
        WLAN2,ES2---| WTP |--+             +--| H3  |--VLAN3
        WLAN3,ES3---+-----+                   +-----+
 Figure 4 




3.3.  Configuration parameters

The general DUT or SUT setup MUST follow the requirements described in Section 7 of RFC 2544 [RFC2544].

The specific software or firmware version being used in the DUT or the individual devices that make up the SUT, as well as the exact device configuration(s) (including any functions that have been disabled) MUST be reported together with the results.




3.3.1.  WTP setup

This section is applicable only if the SUT includes WTPs (i.e., APs).

The WTPs in the SUT MUST be configured to use only the subset of wireless channels available to a normal user at the location where the system is intended to be used. For example, if the test is run in the U.S., then standard U.S. wireless channels are used. The channels used MUST be reported with the test results.

The 802.11 protocol supports the use of a Request To Send (RTS) / Clear To Send (CTS) handshake prior to data transfer, as a means for interfaces to seize and reserve the medium before actually transferring data. For SUTs or WTPs with adjustable RTS thresholds, tests MAY be run at different RTS thresholds, although a full suite of tests MUST be run at the highest RTS threshold supported by the SUT. The RTS threshold used MUST be reported with the test results.

The 802.11 protocol supports fragmentation and reassembly at the link layer, in order to decrease retransmission overhead under high error rates that may prevail in a radio frequency (RF) environment. For SUTs or WTPs with adjustable fragmentation thresholds, tests MAY be run at different fragmentation thresholds, although a full suite of tests MUST be run at the highest fragmentation threshold supported by the SUT. The fragmentation threshold used MUST be reported with the test results.

Note that RTS/CTS and fragmentation are not used when transferring multicast frames; they apply only to unicast frames.




3.3.2.  Service priority

WLAN ACs frequently include the capability to classify traffic flows and assign them to different service levels or service priorities. For example, a voice flow may be classified as real-time traffic and assigned a high level of service (e.g., with assured limits on delay and loss), while an HTTP stream may be assigned a lower service priority. Classification may also be made on the basis of DiffServ code points (DSCP) or even 802.1D user priority values.

For DUTs or SUTs supporting multiple service priorities (QoS levels), tests MAY be run at different service priorities, although a full suite of tests SHOULD be run at at least one service priority. For such DUTs or SUTs, the service priority used in each test MUST be reported with the test results.

Throughput and latency tests on WTPs involving traffic traversing wired interfaces can be affected by QoS settings on these wired interfaces. In such situations, the QoS settings assigned to the wired interfaces of WTPs MUST be reported with the test results.




3.3.3.  Test conditions

Test conditions for measurements on WLAN devices are covered in this section. The complexity of the wireless LAN medium and protocol necessitates special attention to specifying and setting up these conditions in order to obtain consistent results.




3.3.3.1.  Test environment

This section is only applicable to SUTs containing WTPs.

Wireless LAN test environments may be divided into two general categories: shielded environments and open-air (unshielded) environments.

Shielded environments use cabling and/or RF shielding techniques to significantly attenuate external signals and noise. The WTP(s) that are part of the SUT are enclosed within an RF-tight chamber and cabled to the tester as well as the AC or ACs. The tester is also placed in an RF-tight chamber, or has an RF-tight enclosure. Such environments are well known to provide the highest level of repeatability and reproducibility, with the minimum amount of complicated RF management and setup issues. Different types of enclosures (shielded enclosures, screened rooms, and anechoic chambers) may be employed to provide RF shielding.

Open-air environments mimic the actual use model of a WLAN DUT or SUT. In this case, the WTP(s) that are part of the SUT are placed at a specific location within some moderately controlled (or at least well characterized) indoor or outdoor environment, and antennas are used for coupling between equipment.

To assure the maximum level of repeatability and reproducibility without complicated environment setup and characterization needs, the tests described in this document MUST be carried out in a fully shielded (conducted) test environment. The use of a fully shielded environment eliminates the possibility of interference from surrounding networks or devices emitting RF energy. The enclosures and cabling used MUST provide a minimum of 80 dB of RF isolation. Provided that the power levels are set as described below, the test results are materially independent of the exact type of shielded test environment and the properties of the enclosures used.




3.3.3.2.  Power levels

This section is only applicable to SUTs containing WTPs.

Power levels are generally set by measuring and controlling the Received Signal Strength Indication (RSSI) at the RF receivers within the setup. The RSSI measured at the tester's receiver SHOULD be in the range of -25 dBm to -35 dBm. Similarly, the RSSI measured at each WTP's receiver SHOULD be in the range of -25 dBm to -35 dBm. Passive attenuators SHOULD be used to control and set the RSSI within these limits. The RSSI MUST be measured and reported with the test results. The actual choice of WTP power level should not materially affect the results of the tests described herein, provided that the RSSI values fall within these limits.

If these power level settings are not used, then the tester MUST ensure that the RF power levels (at the receivers of the WTP(s) and tester) are at least 20 dB above the minimum and 10 dB below the maximum levels specified for a 10% Frame Error Ratio (FER). Further, the tester MUST ensure that the absolute signal level transmitted to the DUT or SUT is held constant to within +/- 3 dB over the duration of the trial.
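
As an informative aid (not part of the normative methodology), the following Python sketch checks measured RSSI samples against the recommended window above and suggests the additional passive attenuation needed to bring an overly strong signal into range. The function names and the form of the RSSI samples are illustrative assumptions.

   # Informative sketch: validate measured RSSI (in dBm) against
   # the recommended -35 to -25 dBm window of this section.
   RSSI_MAX = -25.0   # upper limit of recommended window
   RSSI_MIN = -35.0   # lower limit of recommended window

   def rssi_in_window(rssi_dbm):
       """True if an RSSI sample lies within [-35, -25] dBm."""
       return RSSI_MIN <= rssi_dbm <= RSSI_MAX

   def extra_attenuation_db(rssi_dbm):
       """Additional passive attenuation (dB) needed to bring an
       overly strong signal down to the window; 0 if none."""
       return max(0.0, rssi_dbm - RSSI_MAX)

   # Example: a -18 dBm reading needs at least 7 dB more attenuation.
   assert extra_attenuation_db(-18.0) == 7.0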




3.3.3.3.  Data plane test frame sizes

All of the data plane tests SHOULD be performed using several fixed sizes of test data frames. Regardless of the interface type, frame sizes MUST be calculated from the first byte of the MAC header to the last byte of the FCS. The test results MUST list the frame sizes used for test data frames.

MAC frame sizes change drastically when traffic moves from the 802.11 (WLAN) side to the 802.3 (Ethernet) side, and vice versa. Further, the 802.11 MAC protocol specifies various mode-specific encapsulations; for instance, the four currently defined encryption modes (none, WEP, TKIP and AES-CCMP) result in four different 802.11 MAC frame sizes for the same size Ethernet frame, and enabling QoS on the WLAN links adds another 2 bytes to the WLAN MAC header. Use of CAPWAP or similar protocols on the wireless-facing side of an AC adds an encapsulation header to the 802.11 MAC frame. This makes frame size calculations and reporting of results quite challenging.

To avoid confusion, therefore, frame sizes reported with the test results MUST be referenced to the Ethernet side of the DUT or SUT. Further, the DUT or SUT configuration parameters such as encryption mode and QoS, necessary to enable the equivalent WLAN-side frame size to be calculated from the Ethernet side frame size, MUST be reported as well.
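
As an informative illustration, the following Python sketch derives the 802.11-side MAC frame size from the Ethernet-side frame size. The per-mode security overheads coded below are the commonly cited values, and are assumptions of this sketch to be verified against the modes actually configured on the DUT or SUT.

   # Informative sketch: Ethernet-side MAC frame size (header
   # through FCS) -> equivalent 802.11-side MAC frame size.
   SEC_OVERHEAD = {           # assumed per-mode expansions, bytes
       "none": 0,             # open / no encryption
       "wep": 8,              # 4-byte IV + 4-byte ICV
       "tkip": 20,            # 8-byte IV/ExtIV + 8-byte MIC + 4-byte ICV
       "ccmp": 16,            # 8-byte CCMP header + 8-byte MIC
   }

   def wlan_frame_size(eth_size, security="none", qos=False,
                       vlan_tagged=False):
       msdu = eth_size - 18          # strip 14-byte header + 4-byte FCS
       if vlan_tagged:
           msdu -= 4                 # 802.1Q tag is not carried over
       hdr = 24 + (2 if qos else 0)  # 802.11 MAC header (+QoS field)
       body = 8 + msdu               # LLC/SNAP re-encapsulation
       return hdr + SEC_OVERHEAD[security] + body + 4   # + FCS

   # Example: a 256-byte Ethernet frame with CCMP and QoS enabled
   # corresponds to a 292-byte 802.11 MAC frame.
   assert wlan_frame_size(256, security="ccmp", qos=True) == 292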

The change in frame size also leads to confusion when interpreting results expressed in bits/second or megabits/second. Test results SHOULD therefore be expressed, where applicable, in frames/second of a particular size on the Ethernet. The equivalent bits/second or megabits/second values MAY be provided as well, as long as the test reports include at least the values for the Ethernet interface and clearly identify the interface(s) to which these values apply.

Note that all test data frames crossing the WLAN/Ethernet boundary contain IP packets; in fact, due to the change in MAC encapsulation when traversing this boundary, only the IP information will be transported intact. Therefore, test results MAY also be expressed in terms of IP packets/second of a particular size. As far as the test data traffic is concerned, the number of IP packets per second on any path traversing the DUT or SUT is exactly equal to the number of link layer frames/second measured at each of the interfaces traversed by that path.

Referenced to the Ethernet side, the MAC frame sizes that MUST be used during data plane tests are:

88, 128, 256, 512, 1024, 1280, 1518

Note that the smallest frame size is 88 bytes, instead of the 64 bytes more commonly encountered in standard wired LAN benchmarks. A larger frame size is selected in order to provide enough room in the payload for an internetwork layer header, a transport layer header, and a tag or signature field that SHOULD be inserted by the test equipment to track the data traffic packets. An 88 byte MAC frame size allows at least 26 bytes of payload for a signature field.

If the test equipment is capable of properly marking and tracking 64-byte Ethernet frames (with or without 802.1Q VLAN tags present), then 64-byte MAC frames MUST be used during tests, in addition to the frame sizes listed above.

If the wired interfaces of the DUT or SUT can support jumbo frames (i.e., Ethernet frame sizes larger than 1518 bytes), then the following MAC frame sizes MAY additionally be used:

2030, 2322

The maximum Ethernet MAC frame size of 2322 bytes ensures that the re-encapsulated payload on the WLAN side does not exceed the maximum IEEE 802.11 MAC payload size of 2304 bytes.

These frame sizes apply to Ethernet MAC frames irrespective of whether they contain VLAN tags or not. If 802.1Q VLAN tagging is used on the Ethernet side, then this MUST be reported with the test results, as the consequence is to make the 802.11-side MAC frames 4 bytes smaller.




3.3.3.4.  Control plane test frame sizes

The control plane tests described in this document also involve test traffic data flows generated by the tester as part of the measurement process. Unless otherwise specified, a frame size of 256 bytes (referenced to the Ethernet side) MUST be used for these test data flows. (This value is arbitrarily selected, but specified to ensure reproducibility of tests.) If 802.1Q VLAN tagging is used, this will cause the WLAN-side frame size to be smaller, and thus this MUST be noted in the test results. It is not necessary to repeat control plane tests with other MAC frame sizes, though this MAY be done if desired.

Frame sizes of 802.11 management and control frames generated during the test MUST conform to those required by the 802.11 standard [802.11].




3.3.3.5.  Frame formats and verification

The frame formats used for test data frames (with the exception of null 802.11 frames) SHOULD follow the recommendations in Appendix C of RFC 2544 [RFC2544]. LLC/SNAP encapsulation as per RFC 1042 [RFC1042] MUST be used on the WLAN side of the DUT or SUT, and Type-encoded encapsulation as per IEEE 802.3 [802.3] MUST be used on the Ethernet side.

In all cases, the test frame format MUST contain some means (such as a unique signature field, as described in Section 4 of RFC 2544 [RFC2544]) that will enable the tester to filter out frames that are not part of the offered load, or are duplicated by the DUT. In tests on DUTs/SUTs involving multiple virtual or physical test endstations, the test frame format MAY also support means for distinguishing between frames originating from different endstations.

The provisions for verifying received frames in Section 10 of RFC 2544 [RFC2544] SHOULD be followed as well. This is particularly significant for test setups where the tester connects to an 802.11 wireless interface, as 802.11 implements retransmission at the link layer. The verification of received frames SHOULD be independent of the facilities provided by the MAC, IP and TCP/UDP layers. In the case of data plane tests, the tester MAY support an independent means of verifying the absence of end-to-end payload corruption, such as a checksum or CRC calculated over the payload portion of the data frames.
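
As an informative illustration, the following Python sketch shows one possible signature layout and payload check that is independent of the MAC, IP and TCP/UDP facilities. The field layout, magic constant and use of CRC-32 are assumptions of the sketch, not a defined test frame format.

   # Informative sketch: a test-frame payload carrying a signature
   # (magic, stream id, sequence number, timestamp) plus a CRC-32
   # over the payload to detect end-to-end corruption.
   import struct, zlib

   MAGIC = 0x57543131   # arbitrary signature constant

   def build_payload(stream_id, seq, timestamp_us, pad_len):
       sig = struct.pack("!IHIQ", MAGIC, stream_id, seq, timestamp_us)
       body = sig + bytes(pad_len)
       return body + struct.pack("!I", zlib.crc32(body))

   def verify_payload(payload):
       """Return (stream_id, seq, timestamp_us) if valid; None if
       the frame is corrupted or not part of the offered load."""
       if len(payload) < 22:
           return None
       body, crc = payload[:-4], payload[-4:]
       if struct.unpack("!I", crc)[0] != zlib.crc32(body):
           return None                    # payload corruption
       magic, sid, seq, ts = struct.unpack("!IHIQ", body[:18])
       if magic != MAGIC:
           return None                    # not test traffic
       return (sid, seq, ts)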




3.3.3.6.  Addressing

To ensure independence of test results relative to address patterns, the test setup SHOULD follow the recommendations in RFC 4814 [RFC4814] to create pseudorandom MAC and IP address patterns. However, as noted in Section 4.2.1 of RFC 4814, ACs, particularly those implementing ARP proxies and spoofing protection, can reject endstations that change their MAC and/or IP address mappings from trial to trial. To avoid this, the address mappings used MUST NOT change between trials or between consecutive tests that are performed without sufficient time for the context entries in the AC to age out.
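
As an informative illustration, the following Python sketch generates pseudorandom, locally administered MAC addresses and private IP addresses from a fixed seed, so that the endstation address mappings remain stable from trial to trial as required above. The seed and the address pool are assumptions of the sketch.

   # Informative sketch: stable pseudorandom endstation addresses.
   import random

   def endstation_addresses(count, seed=0x2544):
       rng = random.Random(seed)        # fixed seed => stable mapping
       stations = []
       for _ in range(count):
           mac = [rng.randrange(256) for _ in range(6)]
           mac[0] = (mac[0] & 0xFC) | 0x02   # unicast, locally admin.
           ip = "10.%d.%d.%d" % (rng.randrange(256),
                                 rng.randrange(256),
                                 1 + rng.randrange(254))
           stations.append((":".join("%02x" % b for b in mac), ip))
       return stations

   # The same (MAC, IP) list is produced on every call, so context
   # entries cached by the AC remain valid across trials.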

DHCP MAY be used to provide IP addresses to endstations in data plane tests. If this is done, then the test results MUST note that DHCP was being used. DHCP MUST NOT be used in control plane tests unless specifically required as one of the test conditions. Note that many ACs require DHCP to be used to provide IP addresses to WTPs; in this situation, a DHCP server is needed for this purpose. If all of the WTPs (and/or all of the endstations) cannot be located within one subnet, then DHCP helper addresses MUST be configured to allow the use of a single DHCP server.

In multicast tests, multicast IP addresses MUST be drawn from the Class D address pool and multicast MAC addresses MUST correspond to the Class D IP addresses, as described in RFC 3918 [RFC3918] and RFC 1112 [RFC1112]. IGMP MUST be active, and the IGMP version being used MUST be reported with the test results.
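
For reference, the RFC 1112 mapping places the low-order 23 bits of the Class D group address into a MAC address with the fixed prefix 01:00:5e. A minimal Python sketch of this mapping:

   # Informative sketch: multicast MAC address for a Class D group.
   import ipaddress

   def multicast_mac(group):
       ip = ipaddress.IPv4Address(group)
       if not ip.is_multicast:
           raise ValueError("not a Class D address: %s" % group)
       low23 = int(ip) & 0x7FFFFF        # low-order 23 bits
       return "01:00:5e:%02x:%02x:%02x" % (
           (low23 >> 16) & 0x7F, (low23 >> 8) & 0xFF, low23 & 0xFF)

   assert multicast_mac("239.1.2.3") == "01:00:5e:01:02:03"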




3.3.3.7.  Traffic flows and topologies

Traffic flows used in the data and control plane tests transit the AC within the DUT or SUT; that is, they are either from the Ethernet side to the wireless side, from the wireless side to the Ethernet side, or in both directions. Traffic flows that are purely Ethernet or purely wireless-side are outside the scope of this document. Note that wireless-to-wireless test metrics and procedures are already covered in IEEE 802.11.2.

Fully-meshed traffic topologies, per section 3.3.3 of RFC 2285 [RFC2285], are not applicable to wireless testing; virtually no unicast traffic is sent directly between wireless devices. Instead, one-to-many or many-to-one topologies (per section 3.3.2 of RFC 2285 [RFC2285]) are used. In addition, the partially-meshed one-to-many/many-to-one topology is commonly used for forwarding, latency, and QoS tests.

In the case of multicast traffic flows, the most common case is a traffic flow direction from Ethernet to wireless (i.e., downstream relative to the endstations) using a one-to-many topology. New applications such as push-to-talk for VoIP over WLAN handsets also require many-to-many traffic flow topologies; however, the tests described in this document do not yet cover such situations.




3.3.3.8.  Half-duplex effects

WLAN WTPs and endstations perform medium access in half-duplex mode, which can cause the actual offered load to be less than the intended load imposed by the tester. The tester MUST therefore adjust the inter-frame spacing according to the target intended load (i.e., to achieve the desired rate of frame transmission), and then MUST measure and report the actual offered load at the end of the trial.

Appendix A of this document provides some notes about generating the intended load for tests described herein. Either the frame-based or the time-based method described in Appendix B of RFC 2889 [RFC2889] MAY be used, but, in either case, the method used MUST be reported with the results. Most of the tests in this document use a constant (non-bursty) load, and the Iload calculations in Section A.2 apply. Burst loads use the calculations in Section A.3.
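
As an informative companion to Appendix A.1, the following Python sketch estimates the theoretical maximum medium capacity of a single 802.11 OFDM (802.11a/g) link. The timing constants are the standard OFDM PHY values; the choices of a 24 Mb/s ACK rate and an average DCF backoff of CWmin/2 slots are assumptions of this sketch, and the normative calculations remain those of Appendix A.

   # Informative sketch: theoretical maximum medium capacity, in
   # frames/s, for a single unicast flow on an 802.11 OFDM link.
   import math

   SLOT_US, SIFS_US = 9.0, 16.0
   DIFS_US = SIFS_US + 2 * SLOT_US       # 34 us
   PLCP_US = 20.0                        # preamble + PLCP header
   CWMIN = 15

   def ofdm_duration_us(nbytes, rate_mbps):
       nbits = 16 + 8 * nbytes + 6       # service + data + tail bits
       nsym = math.ceil(nbits / (4.0 * rate_mbps))   # 4 us symbols
       return PLCP_US + 4.0 * nsym

   def max_capacity_fps(frame_bytes, rate_mbps):
       backoff = (CWMIN / 2.0) * SLOT_US # assumed average backoff
       ack = ofdm_duration_us(14, 24)    # 14-byte ACK at 24 Mb/s
       per_frame = (DIFS_US + backoff
                    + ofdm_duration_us(frame_bytes, rate_mbps)
                    + SIFS_US + ack)
       return 1e6 / per_frame

   # Example: ~1536-byte 802.11 frames at 54 Mb/s yield roughly
   # 2500 frames/s, i.e. about 30 Mb/s of MAC payload.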

The tester SHOULD note attempts by the DUT or SUT to violate the timing requirements of the 802.11 protocol by not conforming to the backoff rules, or by reducing its inter-frame spacing to less than the legal minimum.




3.3.3.9.  Wireless physical layer (PHY) settings

This section is only applicable to SUTs containing WTPs.

The physical layer of the 802.11 WLAN protocol supports data transfer (PHY data rate) at a number of different bit rates. For instance, the 2.4 GHz extended-rate PHY (commonly referred to as 802.11g) supports bit rates of 1, 2, 5.5, 6, 9, 11, 12, 18, 24, 36, 48 and 54 Mb/s. These data rates are achieved with different modulation formats, generally resulting in different frame error ratios and/or signal-to-noise ratios at the receiver. Tests performed at one data rate may not correlate with tests performed at another rate. The test results MUST therefore record the data rate used by both the DUT or SUT and the tester. At least one trial SHOULD be performed at the highest PHY data rate supported by the DUT or SUT.

Rate adaptation is supported by 802.11 devices in order to transfer unicast data under changing signal conditions. (Multicast frames are transmitted by each WTP at a fixed default rate, which is usually the highest basic rate configured.) Devices can automatically fall back to lower PHY data rates to cope with decreasing signal-to-noise ratios. This can make test results impossible to reproduce or interpret in the case of measurements needing constant PHY data rates. A given trial MUST maintain a constant PHY data rate for all test data packets presented on the wireless side of the DUT or SUT, unless otherwise specified by the test.

The 802.11 WLAN protocol uses retransmissions to compensate for the relatively high frame error ratio on the wireless medium. This effectively trades goodput for reliable data transfer. Frame errors rise sharply as the wireless signal-to-noise ratio degrades. Hence the performance of the 802.11 wireless link is materially affected by the signal levels at the receivers of both the DUT or SUT and the tester. The test results therefore MUST report the average RSSI (Received Signal Strength Indication) measured at the wireless interfaces of the tester during the test.




3.3.3.10.  Security settings

WLAN DUTs/SUTs typically implement a wide variety of authentication and encryption protocols, and it is of interest to characterize their performance when using these protocols. Examples of authentication protocols include preshared key (PSK), EAP-TLS, PEAP, LEAP, EAP-MD5, EAP-MSCHAP and EAP-TTLS. Examples of encryption protocols include WEP-40, WEP-128, TKIP and AES-CCMP.

Transfer of data between stations in 802.11 is permitted to begin only after a successful connection setup, including capabilities negotiation, user authentication, and (usually) installation of encryption keys. Also, as per the IEEE 802.11 standard, data must not be transmitted after a connection has been terminated.

For data plane tests, the tester MUST authenticate the virtual or physical endstations used for each test with the DUT or SUT prior to the start of each trial, using the appropriate security method. It SHOULD perform an endstation connection handshake with the DUT or SUT at the start of each trial, and a deauthentication handshake at the conclusion. However, it MAY elect to remain connected with the DUT across multiple trials, in which case the deauthentication handshake can be performed at the end of the test.

Disassociation of a virtual or physical endstation by the DUT or SUT during the test data transfer portion of a trial MUST be reported and SHOULD cause the trial to be terminated. Further, the tester MUST NOT count as valid any unicast data frame from the DUT or SUT that arrives for a virtual or physical endstation when the latter is not completely authenticated.

For a number of control plane tests, the connection handshake forms an integral part of the test procedure, and the requirements for authentication and connection setup are described in the specific tests.

To provide a baseline, a full suite of tests SHOULD be run with no encryption and no authentication enabled.




3.3.3.11.  Multiple endstations

To characterize the context-handling capabilities of the DUT or SUT datapaths, measurements of frame loss, throughput, forwarding rate, latency and burst capacity SHOULD be performed with multiple endstations on either the wireless side, the wired side, or both. If used, the multiple endstations MUST be concurrently transmitting traffic.

This simulates the situation in an actual network, where multiple endstations and servers are connected to a single DUT or SUT and contribute to the total offered load. Each endstation is represented by a different MAC/IP address combination. The MAC and IP addresses of the different endstations SHOULD be chosen according to the preceding section on addressing.

The number of endstations used in the test MUST be reported. For throughput, frame loss, forwarding rate and burst capacity tests, the aggregate results for all of the endstations combined MUST be reported. For latency tests, the worst-case latency and smoothed interarrival jitter among all of the endstations MUST be reported.

To provide a baseline, a full suite of tests SHOULD be run with a single endstation on the wireless side and a single endstation on the Ethernet side.




3.3.3.12.  Multiple virtual WLANs

Virtually all enterprise WLAN infrastructure equipment allows the configuration of multiple overlay (or "virtual") WLANs on the wireless topology. These virtual WLANs are distinguished by different SSIDs being supported on each WTP and AC; on any given WTP, each SSID corresponds to a separate BSSID and has a separate beacon stream associated with it. The virtual WLANs serve the same purpose as VLANs in Ethernet networks, namely, segregation, security and management. Each virtual WLAN usually (but not necessarily) maps to a separate VLAN or subnet on the Ethernet side of the DUT or SUT.

The different virtual WLANs may have different security, QoS, endstation capacity, radio characteristics, bandwidth limits, and other parameters. The virtual WLAN configuration of the DUT or SUT therefore has a material impact on the performance results obtained.

In all cases, the baseline measurement MUST be made with a single virtual WLAN (i.e., a single SSID) configured on the DUT or SUT. Once the baseline has been measured, additional measurements MAY be carried out with more than one virtual WLAN being configured. The virtual WLAN setup (including security and QoS parameters) MUST be detailed in the test results.




3.3.3.13.  Mobility

The common case of mobility (roaming) is one or more endstations that move between WTPs within the same IP subnet, thus keeping the roaming process transparent to the IP protocol layer and above. However, many DUTs/SUTs allow WLAN endstations to roam between WTPs that are physically located on different subnets, without requiring the roaming endstation to obtain a new IP address via DHCP or other means. This 'inter-subnet roaming' is actually carried out by a proxying process; the WTP to which the endstation roams acts as a 'remote agent' for the WTP from which the endstation has roamed, with tunnels being used to transport the endstation's data packets back to the subnet to which it belongs.

Tests of mobility SHOULD therefore include both inter-subnet and intra-subnet roaming. The two cases MUST be tested separately, and the results reported separately.

In addition, roaming may also be divided into intra-AC and inter-AC roaming scenarios (as described in the accompanying terminology document), provided that the DUT or SUT contains multiple ACs. There are hence four possible roaming test scenarios:

  • intra-subnet, intra-AC
  • intra-subnet, inter-AC
  • inter-subnet, intra-AC
  • inter-subnet, inter-AC

Of these four scenarios, the first (intra-subnet, intra-AC) MUST be tested as the baseline case. The other three scenarios SHOULD also be tested if the DUT or SUT supports the relevant functions. The results for each scenario MUST be reported separately.

Data plane tests MUST NOT involve roaming behavior (i.e., endstations moving between WTPs) because roaming causes irreproducible traffic delays and interruptions. Roaming behavior MUST occur only in the tests specifically indicated as roaming tests.

When conducting roaming tests, the tester MUST distinguish between the tester-imposed delay and the DUT or SUT imposed delay. For example, the time delay between the receipt of a connection handshake frame by the tester during the reassociation process, and the transmission of the corresponding response frame, is tester-imposed delay. The two kinds of delays MUST be reported separately, and the tester-imposed delay SHOULD be subtracted from the overall roaming delay when reporting the overall roaming delay of the DUT or SUT.

The tester MUST transmit learning frames after each roam event by an endstation. Learning frames are necessary to update the end-to-end datapath and cause data traffic to resume expeditiously. The time delay between the completion of the connection handshake and the first learning frame MUST NOT be counted as part of the roaming delay attributed to the DUT or SUT.
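
As an informative illustration, the following Python sketch separates the two delay components from tester-recorded event timestamps; which timestamps the tester records is an assumption of the sketch.

   # Informative sketch: attributing roaming delay per this section.
   def report_roam(t_roam_start, t_handshake_done, tester_imposed_s):
       """Split the observed roam time into tester-imposed and
       DUT/SUT-imposed components (all values in seconds)."""
       overall = t_handshake_done - t_roam_start
       return {"overall_delay_s": overall,
               "tester_imposed_s": tester_imposed_s,
               "dut_imposed_s": overall - tester_imposed_s}

   # The interval between handshake completion and the first
   # learning frame is not charged to the DUT or SUT.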

The 802.11 protocol contains numerous shortcuts and enhancements (e.g., preauthentication, PMKID caching, and opportunistic key caching) that are designed to speed up the roaming process. Most of these features are optional. The tester SHOULD implement as many of these optional features as possible, to enable the roaming performance of the DUT or SUT to be measured under different scenarios. Any optional speed-up features that have been used in a test MUST be reported with the results.




3.3.3.14.  Trial duration

The duration of each trial SHOULD be selected using the guidelines of Section 24 of RFC 2544 [RFC2544]. Further, it SHOULD be long enough to minimize any connection setup and startup effects that can affect the test results. In the case of tests involving WTPs (i.e., APs), the trial duration SHOULD also be long enough to make the random fluctuations of the CSMA/CA access method statistically insignificant.

The recommended duration of each trial is 60 seconds. The trial duration MAY be adjustable between 30 seconds and 300 seconds. The tester MUST transmit all test data frames within the trial duration. To eliminate the case where a device possessing large frame buffers can appear to be faster than it actually is, the tester MUST NOT accept received test data frames for more than 2 seconds beyond the end of the trial duration. (Thus a 60 second trial duration causes the tester to receive frames for no more than 62 seconds, starting from the beginning of the trial.) Frames received outside of these limits MUST NOT be counted as part of the results.
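
As an informative illustration, the following Python sketch applies the receive accounting window described above to a list of tester receive timestamps.

   # Informative sketch: only frames received within the trial
   # duration plus a 2-second grace period enter the results.
   GRACE_S = 2.0

   def frame_counts(rx_timestamps, trial_start_s, trial_duration_s):
       deadline = trial_start_s + trial_duration_s + GRACE_S
       inside = sum(1 for t in rx_timestamps
                    if trial_start_s <= t <= deadline)
       return inside, len(rx_timestamps) - inside   # (valid, late)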

The trial duration MUST NOT include the time taken for initial connection setup and system state stabilization, unless this is specifically part of the test. For example, authentication and association of wireless clients can take a relatively long time; if this is included within the trial duration of a throughput test, the results will be significantly affected. This also applies to spanning tree convergence and other connection state settling delays.




4.  Interpreting and reporting test results

Test results SHOULD be reported in a common format to aid the reader in interpreting results and comparing them across DUTs. Results from a set of trials involving the variation of one or more test parameters described in Section 3 above SHOULD be presented as graphs, with the x coordinate being the parameter value and the y coordinate being the result. Detailed results SHOULD be presented in tabular format to simplify analysis.

The following test conditions MUST be reported with the results of each trial:

  • Tester and WTP signal level, PHY bit rate, PHY layer options, and channel used (if WTPs form part of the SUT).
  • Security modes (encryption and authentication).
  • Trial duration.
  • Number of endstations used in the test.
  • Frame size and offered load.




5.  Benchmarking tests

The following tests are divided into two categories: data plane tests and control plane tests.

Data plane tests relate to the performance of the traffic handling functions of the DUT or SUT; the results of such tests are mainly governed by the packet forwarding hardware and software. Control plane tests, on the other hand, stress the connection setup and context management capabilities of the DUT or SUT, and the results of these tests are dictated by the performance of the CPU(s) and front-end traffic classification and exception handling mechanisms.

The correlation between system performance and data plane metrics such as throughput and latency is well known, of course. The performance of the management and security protocols in a WLAN, however, is also a key determinant of the perceived user experience. For example, support of 802.11-based VoIP handsets is of significant interest in an enterprise environment; however, poor roaming performance can make such handsets nearly unusable except in a fixed usage model. Thus it is important to quantify control plane metrics as well.

This section treats data plane and control plane tests separately. Objectives, test parameters, procedures and reporting formats are described for each test.




5.1.  Data plane tests

Data plane tests comprise the following:

  • Unicast and multicast throughput and forwarding rate
  • Latency and jitter
  • QoS differentiation
  • Burst capacity
  • Power-save throughput

Throughput and forwarding rate tests have an obvious correlation to end-user perceptions of the speed and capacity of the wireless LAN. For throughput and forwarding rate tests, either the Frame Based or Time Based modes of testing may be used, as described in Appendix B of RFC 2889 [RFC2889]. The DUT or SUT is initially set up according to the baseline configuration, using a starting combination of test parameters. Packets are then sent to the DUT or SUT by the tester at a specific offered load for the duration of the trial, and the number of frames received from the DUT or SUT is counted. The process MUST be iterated at different offered loads, using a search algorithm, until the desired measurement (throughput or maximum forwarding rate) has been made. Additional trials are then performed in the same manner using different DUT or SUT configurations until all configurations have been exhausted.

Latency and jitter tests are principally of interest in delay-sensitive applications such as voice and video. Jitter tests use the smoothed interarrival jitter calculation method described in RFC 3550 [RFC3550]. A means of timestamping transmitted frames is required in order to calculate both latency and jitter.
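
For reference, the RFC 3550 estimator maintains a running value J that is updated once per received frame as J = J + (|D| - J)/16, where D is the change in relative transit time between consecutive frames. A minimal Python sketch, assuming the tester records matched per-frame transmit and receive timestamps:

   # Informative sketch: RFC 3550 smoothed interarrival jitter.
   def smoothed_jitter(tx_times, rx_times):
       """tx_times/rx_times: matched per-frame timestamps (seconds),
       in arrival order.  Returns the final jitter estimate J."""
       j = 0.0
       for i in range(1, len(rx_times)):
           # Difference in relative transit time of consecutive
           # frames; a fixed clock offset between ports cancels out.
           d = ((rx_times[i] - rx_times[i - 1])
                - (tx_times[i] - tx_times[i - 1]))
           j += (abs(d) - j) / 16.0     # per RFC 3550, Section 6.4.1
       return j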

QoS differentiation tests seek to quantify the performance of the DUT or SUT when handling traffic such as VoIP that requires preferential treatment over best-effort data traffic. This is particularly important in WLANs, not only because of the growing use of Voice over WLAN (VoWLAN) handsets, but also because of the limited bandwidth available over the wireless medium. (Wired enterprise LANs have heretofore dealt with QoS by simply adding bandwidth, as QoS is mainly an issue on oversubscribed links; due to spectrum limitations, this approach is not practicable in a wireless context.)

Burst capacity and power-save throughput deal with the unique need for WLAN infrastructure devices to buffer considerable amounts of data, in order to compensate for unpredictable variations in the links to wireless endstations. The wireless media may experience rapid variations in capacity due to rate adaptation, requiring the infrastructure devices to buffer packets until higher- layer protocols such as TCP can 'catch up'. In addition, wireless endstations themselves may elect to enter a low-power mode in order to extend battery life, again requiring the WLAN infrastructure to buffer packets until the endstation wakes up and calls for the data. As the loss of even a few packets has a substantial impact on upper layer protocols such as TCP, the burst capacity and power-save throughput of the DUT or SUT can significantly affect the end-user experience.

In all of the data plane tests, the tester MUST count as valid received test frames only those which it receives without error within the testing time window, with the proper signature, and correctly directed (i.e., having the right combination of source and destination addresses, frame length and payload). All other frames (including management/control frames) MUST NOT be included when computing the test results.

For the purposes of computing the actual offered load, the tester MUST count as valid transmitted frames only those test frames that were acknowledged by the DUT on the wireless medium (i.e., with an 802.11 ACK frame), or transferred to the DUT or SUT without a locally detected error on the wired medium within the testing time window. All other frames MUST NOT be counted as part of the offered load. Note that this is only applicable to tests involving WTPs.

In addition, in tests involving WTPs the tester MUST count as valid only those unique data frames for which it sent an 802.11 ACK frame to the DUT or SUT in response. It MUST NOT count duplicate frames, frames originating from the DUT, data frames that it did not acknowledge, or management and control frames as part of the measurements. Such frames MAY be counted separately for diagnostic purposes, or not counted at all.




5.1.1.  Unicast throughput




5.1.1.1.  Objective

To determine the throughput of the DUT or SUT when forwarding unicast data frames between the wireless and the wired sides of the DUT or SUT. The results of this test can be used to determine the ability of an AC to support multiple wireless endstations transferring data to a wired LAN segment.

Note that while the IEEE 802.11 wireless medium has a high frame error ratio relative to wired media, the low-level acknowledgement and retransmission protocol implemented by the 802.11 MAC effectively results in nearly zero loss as seen by the IP layer. In fact, the loss ratio at the MAC service interface of an 802.11 device is no worse than the loss ratio for a wired Ethernet device. (Obviously a high frame error ratio on the wireless medium will then manifest itself as a reduced throughput at the MAC service interface, but the signal level precautions described in Section 3.3.3.2 will ensure that the maximum possible throughput is obtained.)

The general setup for the test comprises one or more virtual or physical endstations on the wireless side of the DUT or SUT that transfer data to or from one or more virtual endstations on the wired side.




5.1.1.2.  Test parameters

The following configuration parameters MUST be established prior to each trial, as per the requirements in section 3 above:

Frame size, Flow direction, Number of wireless and Ethernet endstations, Fragmentation, RTS/CTS usage, Security mode and Number of virtual WLANs.

The baseline DUT or SUT configuration for performing this test consists of: frame sizes as per Section 3.3.3.3, downstream transfer direction, a single wireless endstation, a single Ethernet endstation, fragmentation off, RTS/CTS disabled, security not used, and a single virtual WLAN.




5.1.1.3.  Procedure

The DUT or SUT is first set up according to the baseline configuration, using the initial setting of frame size. The initial offered load is computed per Appendix A to equal the aggregate theoretical maximum capacity of all the wireless-side links. For bidirectional tests involving WTPs, the tester MUST follow the half-duplex test conditions described in Section 3.3.3.8. The throughput is then measured as described below. The measurement is repeated for each value of frame size.

The tester MUST send learning frames (after endstation connection setup as applicable) to allow the DUT or SUT to update its address tables properly. A search algorithm is used to determine the throughput.
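
The specific search algorithm is not mandated by this document. As an informative illustration, the following Python sketch performs a simple binary search for the zero-loss throughput; the run_trial() harness hook, the upper load bound, and the termination resolution are assumptions of the sketch.

   # Informative sketch: binary search for the zero-loss throughput.
   def find_throughput(run_trial, max_load_fps, resolution_fps=10.0):
       """run_trial(load_fps) is assumed to run one trial at the
       given offered load and return the number of frames lost."""
       lo, hi, best = 0.0, float(max_load_fps), 0.0
       while hi - lo > resolution_fps:
           load = (lo + hi) / 2.0
           if run_trial(load) == 0:      # zero loss: try a higher load
               best, lo = load, load
           else:                         # loss seen: back off
               hi = load
       return best                       # frames/s at zero loss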

After the baseline configuration has been tested, the tester MAY repeat the process with a new configuration, until the desired number of different configurations have been exercised.

In tests involving multiple endstations (either on the wireless or the wired side, or both), the tester MUST ensure a uniform distribution of frames from each source endstation. In addition, all permissible combinations of source and destination addresses (consistent with the traffic direction setting) MUST be represented equally within each trial. This distributes the load of transmission and reception uniformly among the endstations. Note that this corresponds closely to the partially-meshed one-to-many/many-to-one topology described in RFC 2889 [RFC2889] (Mandeville, R. and J. Perser, “Benchmarking Methodology for LAN Switching Devices,” August 2000.).




5.1.1.4.  Analysis and reporting

The throughput of the DUT or SUT is computed and reported (per Section 26.1 of RFC 2544 [RFC2544]) as the maximum offered load, in frames per second, resulting in zero frame loss rate [RFC1242].

The test results SHOULD be reported as graphs of throughput versus frame size. Separate results MUST be reported per configuration.




5.1.2.  Unicast maximum forwarding rate and frame loss ratio




5.1.2.1.  Objective

To determine the maximum rate at which the DUT or SUT can forward unicast data frames between the wireless and the wired sides of the DUT or SUT, irrespective of frame loss. This is a measure of the peak capacity of the DUT or SUT datapath, and is especially useful in conjunction with traffic such as UDP. The frame loss ratio is also measured under the same conditions.

The general setup for the test comprises one or more virtual or physical endstations on the wireless side of the DUT or SUT that transfer data to or from one or more virtual endstations on the wired side.




5.1.2.2.  Test parameters

The following parameters MUST be configured prior to each trial, as per the requirements in Section 3 above:

Frame size, Flow direction, Number of wireless and Ethernet endstations, Fragmentation, RTS/CTS usage, Security mode and Number of virtual WLANs.

The baseline DUT or SUT configuration for performing this test consists of: frame sizes as per Section 3.3.3.3, downstream transfer direction, a single wireless endstation, a single Ethernet endstation, fragmentation off, RTS/CTS disabled, security not used, and a single virtual WLAN.




5.1.2.3.  Procedure

The DUT or SUT is first set up according to the baseline configuration, using an initial frame size. The starting offered load is computed per Appendix A. The maximum forwarding rate and frame loss ratio are then measured as described below. The measurements are repeated for each value of frame size.

The tester MUST send learning frames (after endstation connection setup as applicable) to allow the DUT or SUT to update its address tables properly. A search algorithm SHOULD be used to determine the maximum forwarding rate.

After the baseline configuration has been tested, the tester MAY repeat the process with a new configuration, until the desired number of different configurations have been exercised.

In tests involving multiple endstations (either on the wireless or wired side, or both), the tester MUST ensure a uniform distribution of frames from each source endstation. In addition, all permissible combinations of source and destination addresses (consistent with the traffic direction setting) MUST be represented equally within each trial. This distributes the load of transmission and reception uniformly among the endstations. Note that this corresponds closely to the partially-meshed one-to-many/many-to-one topology described in RFC 2889 [RFC2889].




5.1.2.4.  Analysis and reporting

The maximum forwarding rate of the DUT or SUT is computed and reported as the maximum number of test frames per second that the DUT or SUT is observed to successfully forward, irrespective of frame loss, at some value of offered load. The offered load applied to the DUT or SUT at the maximum forwarding rate MUST be reported as well.

The frame loss ratio MUST be reported with the maximum forwarding rate, as the percentage of frames that were successfully injected into the DUT or SUT by the tester but not forwarded by the DUT or SUT to the tester for any reason.

The test results SHOULD be reported as a graph of maximum forwarding rate versus frame size. Separate results MUST be reported per configuration.

If the maximum forwarding rate of a SUT (containing WTPs) exceeds the theoretical maximum medium capacity of the wireless LAN medium, then the SUT is departing from the DCF contention behavior specified by the IEEE 802.11 MAC protocol. In this case, Forward Pressure (as defined in section 3.7.2 of RFC 2285 [RFC2285]) has been detected, and MUST be highlighted in the test results. The calculation of theoretical maximum medium capacity MUST account for the effects of QoS settings, if QoS is enabled.
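
A minimal sketch of this check, assuming the theoretical maximum medium capacity has already been computed per Appendix A (with any QoS adjustment applied):

   def forward_pressure(max_forwarding_rate, medium_capacity):
       # Per section 3.7.2 of RFC 2285: forwarding faster than the
       # 802.11 medium theoretically allows means the SUT is not
       # honoring DCF contention, and MUST be highlighted.
       return max_forwarding_rate > medium_capacity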

Note that the wired-side interfaces of the DUT or SUT are often capable of much higher link rates than the wireless side, potentially leading to extremely high frame loss rates when oversubscription occurs on the wired interfaces. Care should be taken to allow enough time for SUTs that include WTPs to recover and return to a normal state between trials.




5.1.3.  Multicast forwarding rate




5.1.3.1.  Objective

To determine the maximum rate at which the DUT or SUT can forward multicast data frames. Multicast (or broadcast) traffic is handled differently from unicast traffic by the 802.11 protocol, so this test determines the ability of the DUT or SUT to handle such traffic.

This test is run only in a downstream (wired to wireless) direction, as wireless endstations do not normally generate high volumes of multicast data. The general setup comprises at least one source (virtual or physical endstation) on the wired side of the DUT or SUT that injects multicast data destined for the wireless side, as well as at least one virtual or physical endstation on the wireless side that acts as a recipient for multicast traffic.

Note that the 802.11 protocol does not make special provisions for multicast versus broadcast traffic. A single test is thus sufficient to measure the ability of DUTs/SUTs to handle both types of data.




5.1.3.2.  Test parameters

The following parameters MUST be configured prior to each trial as specified in Section 3.5.14:

Frame size, Number of wireless endstations (multicast sinks), Number of Ethernet endstations (multicast sources), Security mode and Number of virtual WLANs.

The baseline DUT or SUT configuration for performing this test consists of: frame sizes as per 3.4.2, a single wireless endstation, a single Ethernet endstation, fragmentation off, RTS/CTS disabled, security not used, and a single virtual WLAN.




5.1.3.3.  Procedure

The DUT or SUT is first set up according to the baseline configuration, using an initial value of frame size. The starting offered load is computed per Appendix A to equal the theoretical maximum capacity of the wireless-side link with the lowest PHY bit rate. The maximum multicast forwarding rate is then measured as described below. The measurements are repeated for each value of frame size.

A trial is considered to be successful if each target (wireless) endstation receives at least 50% of the multicast frames expected to be received during that trial. For example, if 1000 multicast frames are transmitted during a trial, and there are 8 wireless endstations configured on 8 WTPs, then each wireless endstation must receive at least 500 of the 1000 multicast frames for the trial to be counted as successfully forwarding the injected frames.

After the baseline configuration has been tested, the tester MAY repeat the process with a new configuration, until the desired number of different configurations have been exercised.

The target multicast addresses (MAC and IP) MUST be configured as described in section 3.4.5 above. The addresses used MUST be reported with the results.




5.1.3.4.  Analysis and reporting

The maximum multicast forwarding rate of the DUT or SUT is computed and reported as the maximum number of test frames per second that the DUT or SUT is observed to successfully forward, irrespective of frame loss, at some value of offered load. The offered load applied to the DUT or SUT at the maximum forwarding rate MUST be reported as well. A search algorithm SHOULD be used to determine the maximum multicast forwarding rate.

The worst-case frame loss ratio MUST be reported along with the maximum multicast forwarding rate, as the percentage of frames that were injected into the DUT or SUT by the tester, but not forwarded by the DUT or SUT to the tester for any reason. The worst-case frame loss ratio is obtained by calculating the frame loss ratio at each target wireless endstation, and taking the maximum.
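
A short illustrative sketch of this calculation; the per-endstation receive counts are assumed to be available from the tester.

   def worst_case_loss_ratio(frames_sent, received_by_station):
       # Loss ratio per target endstation, reported as the maximum
       # (worst case) over all endstations, in percent.
       return max(100.0 * (frames_sent - rx) / frames_sent
                  for rx in received_by_station.values())

   # 1000 frames sent; sta2 also fails the 50% success criterion
   # of section 5.1.3.3 (450 < 500 frames received).
   print(worst_case_loss_ratio(1000,
         {"sta1": 980, "sta2": 450, "sta3": 990}))   # -> 55.0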

The test results SHOULD be reported as a graph of maximum forwarding rate versus multicast frame size. Separate results MUST be reported per configuration.

Note that the wired interfaces of the DUT or SUT are often capable of much higher link rates than the wireless interfaces, potentially leading to extremely high frame loss rates when transferring multicast frames to the wireless media. Care should be taken to allow enough time for the DUT or SUT to recover and return to a normal state between trials.




5.1.4.  Latency and jitter




5.1.4.1.  Objective

To determine the latency and latency variation (also known as jitter) exhibited by the DUT or SUT when forwarding unicast data frames between the wired and wireless sides of the DUT or SUT. The results of this test can be used to estimate the impact of the DUT or SUT on delay-sensitive traffic to/from a wireless endstation such as a VoIP handset.

The general setup for the test comprises one or more virtual or physical endstations on the wireless side of the DUT or SUT that transfer data to/from one or more virtual endstations on the wired side.




5.1.4.2.  Test parameters

The following parameters MUST be configured prior to each trial as specified in Section 3.5.14:

Frame size, Flow direction, Number of wireless and Ethernet endstations, Fragmentation, RTS/CTS usage, Security mode and Number of virtual WLANs.

The baseline DUT or SUT configuration for performing this test consists of: frame sizes as per 3.4.2, downstream transfer direction, a single wireless endstation, a single Ethernet endstation, fragmentation off, RTS/CTS disabled, security not used, and a single virtual WLAN.




5.1.4.3.  Procedure

The DUT or SUT is initially set up according to the baseline configuration, and data are transmitted to it by the tester at a constant load for the duration of the trial. The offered load presented to the DUT or SUT MUST be less than or equal to the measured throughput of the DUT or SUT, and SHOULD be set to 90% of its unicast throughput as measured under the same test conditions and with the same configuration parameters.

The latency and jitter introduced by the DUT or SUT are measured over the entire trial duration, as described below. An identifying tag or signature MUST be placed in each data frame sent to the DUT or SUT during the measurement interval, so that it can be correlated with the frames received from the DUT. The measurements are repeated for each value of frame size.

The tester MUST send learning frames (after endstation connection setup as applicable) to allow the DUT or SUT to update its address tables properly. If multiple endstations are used, the traffic topology MUST be of the partially meshed one-to-many/many-to-one type.

After the baseline configuration has been tested, the tester MAY repeat the process with a new configuration, until the desired number of different configurations have been exercised.

When testing with multiple endstations on the wireless side and/or Ethernet side, consecutive frames transmitted by the tester to the DUT or SUT MUST have different combinations of source and destination addresses, and all possible such combinations of addresses MUST be represented equally within each trial. This distributes the delay impact and traffic load uniformly among the endstations. Failure to ensure this can lead to inconsistent results.




5.1.4.4.  Analysis and reporting

The instantaneous latency of the DUT or SUT is measured (per section 26.2 of RFC 2544 [RFC2544]) as the difference, in seconds, between the timestamps assigned to a frame transmitted to the DUT or SUT and the corresponding frame received from the DUT. The minimum, maximum and mean of these timestamp differences over all the data frames received from the DUT or SUT during the trial duration are computed; the mean is reported as the average latency introduced by the DUT, together with the minimum and maximum.

The smoothed interarrival jitter introduced by the DUT or SUT is calculated over the entire trial duration according to the algorithm in section 6.4.1 of RFC 3550 [RFC3550].
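
For illustration, the RFC 3550 estimator maintains a running jitter value J, updated with a gain of 1/16 from the change D in relative transit time between consecutive frames. A minimal Python sketch, assuming send and receive timestamps in seconds:

   def smoothed_jitter(send_times, recv_times):
       # Section 6.4.1 of RFC 3550:
       #   D(i-1,i) = (R(i) - R(i-1)) - (S(i) - S(i-1))
       #   J(i) = J(i-1) + (|D(i-1,i)| - J(i-1)) / 16
       jitter, prev_transit = 0.0, None
       for s, r in zip(send_times, recv_times):
           transit = r - s
           if prev_transit is not None:
               jitter += (abs(transit - prev_transit) - jitter) / 16.0
           prev_transit = transit
       return jitter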

The offered load over the trial duration MUST be reported as well.

The test results SHOULD be reported as graphs of latency and jitter versus frame size. Separate results MUST be reported per configuration.




5.1.5.  QoS differentiation




5.1.5.1.  Objective

The limited capacity of the wireless medium makes it imperative to implement effective QoS schemes in order to successfully support delay- and loss-sensitive traffic such as voice and video. Many ACs and WTPs implement extensive classification and prioritization functions to ensure that high levels of best-effort data traffic do not adversely impact the delay, jitter and packet loss of higher-priority real-time traffic.

This test therefore seeks to quantify the level to which the DUT or SUT isolates real-time traffic from best-effort data traffic that is sharing the same channel. This is done by injecting progressively higher levels of best-effort traffic into the DUT or SUT until the QoS requirements of a previously established real-time stream are no longer met. Ideally, the DUT or SUT will not permit the best-effort traffic stream to affect the real-time stream, regardless of the offered load of the former. A poor QoS implementation will result in the QoS requirements of the real-time stream being violated at relatively low levels of best-effort traffic. The results of this test can hence be used to estimate the effectiveness of the QoS implementation within the DUT or SUT datapaths.

The general setup for the test comprises one or more virtual or physical endstations on the wireless side of the DUT or SUT that transfer data to/from one or more virtual endstations on the wired side. Traffic flows of different types are injected in order to exercise the QoS handling functions of the DUT. This test is only applicable to DUTs or SUTs that are capable of recognizing and prioritizing real-time traffic.




5.1.5.2.  Test parameters

The following parameters MUST be configured prior to each trial as specified in Section 3.5.14:

Frame size (for best-effort data), Number of wireless and Ethernet endstations, Fragmentation, RTS/CTS usage, Security mode and Number of virtual WLANs.

The baseline DUT or SUT configuration for performing this test consists of: frame sizes as per 3.4.2, a single wireless endstation, a single Ethernet endstation, fragmentation off, RTS/CTS disabled, security not used, and a single virtual WLAN.




5.1.5.3.  Procedure

The DUT or SUT is initially set up according to the baseline configuration, and the wireless endstations are associated with it. A constant bidirectional stream of real-time traffic is injected into the DUT or SUT (i.e., one stream from the wireless side to the Ethernet side and an identical stream from the Ethernet side to the wireless side).

The real-time traffic frames MUST be formatted to ensure that the DUT or SUT assigns them a higher priority than normal best-effort data frames. The frame size and frame rate of the real-time traffic MUST resemble a single G.711 voice stream (240 byte RTP payloads, 30 frames/second) as closely as possible. The maximum latency, smoothed interarrival jitter and packet loss MUST be measured for each direction of each real-time traffic stream over the entire trial duration.

The tester MUST then inject a stream of best-effort data traffic into the Ethernet side of the DUT or SUT, directed at the wireless endstations, using an initial frame size and starting offered load (computed per Appendix A), and running for the entire trial duration. The tester MUST monitor the QoS parameters of the real-time traffic over the trial duration. A search algorithm SHOULD be used to find the highest value of best-effort offered load (up to and including the theoretical maximum capacity of the Ethernet medium, minus the bandwidth occupied by the real-time traffic) for which the QoS threshold is not violated.
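
The search is analogous to the throughput search of Section 5.1.1. An illustrative sketch follows, where qos_trial is a hypothetical primitive that runs one trial at the given best-effort load and reports whether the QoS threshold held, and ceiling is the Ethernet capacity minus the real-time stream bandwidth.

   def max_best_effort_load(ceiling, qos_trial, resolution=1.0):
       # Binary search for the highest best-effort offered load
       # (frames/s) at which the real-time stream still meets its
       # latency, jitter and loss thresholds.
       low, high, best = 0.0, float(ceiling), 0.0
       while high - low > resolution:
           load = (low + high) / 2.0
           if qos_trial(load):
               best, low = load, load   # QoS held; push higher
           else:
               high = load              # QoS violated; back off
       return best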

The measurements are repeated for each value of frame size. Identifying tags or signatures MUST be placed in each frame sent to the DUT or SUT during the measurement interval, so that the real-time and best-effort data frames can be distinguished from each other and correlated with the frames received from the DUT.

The tester MUST send learning frames (after endstation connection setup as applicable) to allow the DUT or SUT to update its address tables properly. If multiple endstations are used, the traffic topology MUST be of the partially meshed one-to-many/many-to-one type.

After the baseline configuration has been tested, the tester MAY repeat the process with a new configuration, until the desired number of different configurations have been exercised.

When testing with multiple endstations on the wireless side and/or Ethernet side, consecutive best-effort data frames transmitted by the tester to the DUT or SUT MUST have different combinations of source and destination addresses, and all possible such combinations of addresses MUST be represented equally within each trial. This distributes the delay impact and traffic load uniformly among the endstations. Failure to ensure this can lead to inconsistent results.




5.1.5.4.  Analysis and reporting

The QoS differentiation capability of the DUT or SUT is reported as the maximum aggregate offered load, in frames/second, of best-effort data that can be presented to the DUT or SUT without causing the QoS threshold to be violated. The QoS threshold used MUST be reported as well.

The test results SHOULD be reported as graphs of maximum aggregate offered load versus frame size. Separate results MUST be reported per configuration.




5.1.6.  Power-save throughput




5.1.6.1.  Objective

To measure the ability of the DUT or SUT to support mobile endstations in power management mode. Endstations in power management mode go into a sleep state (including turning off their radios) frequently, in order to save battery power.

WLAN infrastructure devices that support wireless endstations in power management mode (i.e., sleeping endstations) are required to accept and buffer frames on behalf of these endstations and signal the endstations that frames are being buffered for them. The sleeping endstations will wake periodically at a pre-configured listen interval and check for buffered frames, going back to sleep only after transferring the buffered frames (if any). Buffer management and notification functions must therefore be efficiently implemented in the DUT or SUT in order to sustain a large population of endstations in power management mode.

This test measures the power management mode throughput of the DUT or SUT, and hence its ability to efficiently support many connected but sleeping endstations.




5.1.6.2.  Test parameters

The following parameters are relevant to this test, and MUST be configured as specified in Section 3.5.14:

Frame size, Listen interval, Number of wireless and Ethernet endstations, Fragmentation, RTS/CTS usage, Security mode and Number of virtual WLANs.

The listen interval SHOULD NOT exceed the frame aging time of the DUT or SUT, and SHOULD be kept the same for all trials within a given configuration. If desired, the following values (in units of beacon periods) MAY be used for the listen interval:

2, 4, 6, 8, 10

The baseline DUT or SUT configuration for performing this test consists of: frame sizes as per 3.4.2, listen interval of 4 beacon periods, a single wireless endstation, a single Ethernet endstation, fragmentation off, RTS/CTS disabled, security not used, and a single virtual WLAN.




5.1.6.3.  Procedure

The DUT or SUT is initially set up according to the baseline configuration. The tester then associates the required number of wireless endstations with the DUT or SUT and causes these endstations to immediately enter power-save (sleep) mode. The tester MUST transmit learning frames to and from the endstations, and MUST then wait a sufficient length of time to ensure that the endstations have entered power-save mode successfully.

The tester then transmits test data frames at a starting offered load (computed according to Appendix A) to the Ethernet side of the DUT or SUT for forwarding to each of the sleeping endstations. The initial offered load SHOULD be set to the aggregate theoretical maximum capacity of all of the wireless-side links. A search algorithm is then used to determine the throughput. The measurement is repeated for each value of frame size.

After the baseline configuration has been tested, the tester MAY repeat the process with a new configuration, until the desired number of different configurations have been exercised.

In tests involving multiple endstations (either on the wireless or wired side, or both), the tester MUST ensure a uniform distribution of frames from each source endstation. In addition, all permissible combinations of source and destination addresses (consistent with the traffic direction setting) MUST be represented equally within each trial. This distributes the traffic and power-save buffer load uniformly among the endstations. Note that this corresponds closely to the partially-meshed one-to-many/many-to-one topology described in RFC 2889 [RFC2889].




5.1.6.4.  Analysis and reporting

The power-save throughput of the DUT or SUT is computed and reported (per Section 26.1 of RFC 2544 [RFC2544]) as the maximum offered load, in frames per second, resulting in a zero frame loss rate [RFC1242].

The test results SHOULD be reported as graphs of throughput versus frame size. Separate results MUST be reported per configuration.




5.2.  Control plane tests

Control plane tests comprise the following:

Endstation roaming delay

Endstation roaming rate

Endstation association rate

Endstation capacity

WTP capacity

Reset recovery time

Failover recovery time

The endstation roaming delay and rate tests directly measure the capacity of the DUT or SUT to support mobility on the part of the endstations, and are generally considered to be important for enterprise networks. The test results can, for example, indicate whether VoIP handsets can roam from WTP to WTP without causing a loss in voice quality (due to dropouts or lost voice samples). It is generally considered that the roaming delay should be considerably less than the time between two or three VoIP packets to avoid having a material impact on voice traffic.

The endstation association rate and capacity tests attempt to quantify the ability of the DUT or SUT to efficiently support large numbers of connected endstations. Each endstation is represented by a significant amount of state maintained within the AC (and sometimes even the WTPs). The implementation of efficient state management and update algorithms is necessary in order to avoid issues such as slow rates of endstation connection or even outright disconnection of successfully associated endstations when new endstations attempt to connect.

The WTP capacity test is applicable to ACs only, and measures the scalability of the DUT. Modern enterprise WLANs may require hundreds or even thousands of WTPs to be deployed to provide adequate coverage. This test therefore quantifies the size of the WLAN that may be practically constructed using the DUT.

Reset recovery and failover recovery measurements are essential indicators of the robustness, uptime and stability of a WLAN that is deployed using the DUT or SUT. Rapid recovery from catastrophic events such as equipment failure or power glitches is a key requirement for an enterprise network carrying critical data. A long reset recovery time can cause timeouts and connection drops at the transport and application layers, and may even cause endstations to be disconnected.

Data traffic streams in control plane tests are used principally as indicators of successfully completed events, handshakes or trials. The frame sizes, flow rates and flow directions of data traffic do not particularly affect the test results. Hence the test parameter specifications for control plane tests in this document generally fix these at convenient values.




5.2.1.  Endstation roaming delay




5.2.1.1.  Objective

The 802.11 protocol enables an endstation to dynamically disassociate itself from one WTP and reassociate with another WTP in the same domain. This is done to facilitate the mobility (or roaming) of endstations within an extended region that constitutes a single logical LAN covered by multiple WTPs. The time required for endstations to transition from one WTP to another plays a large role in the perceived quality and reliability of the mobile system. Long roaming delays can result in lost data and dropped connections. This test seeks to determine the delay experienced by endstations when roaming between WTPs belonging to the DUT or SUT.

In 802.11 networks, endstations are the primary drivers of roaming behavior, and actually initiate the decision to roam. WTPs and ACs are nonetheless significant contributors to the total roaming delay, in terms of the time required for them to accept and complete connection and security handshakes and resume traffic flow to and from the endstation; however, endstations also contribute some of the delays. Endstation and WTP roaming time contributions are thus measured and reported separately.

This test MUST be carried out on a DUT or SUT that involves two or more physical interfaces on the wireless side; if the SUT includes WTPs, then two or more WTPs MUST be present.




5.2.1.2.  Test parameters

The following parameters MUST be configured prior to each trial as specified in Section 3.5.14:

Number of wireless endstations, Roaming rate, RTS/CTS usage, Security mode, DHCP mode and Number of virtual WLANs.

The roaming rate SHOULD be varied between 0.1 and 10 roams per second. The following discrete values of roaming rate MAY be used:

0.1, 0.2, 1, 5, 10

The baseline DUT or SUT configuration for performing this test consists of: a single wireless endstation, roaming rate of 0.2 roams/second, RTS/CTS disabled, security not used, DHCP not used, and a single virtual WLAN.

Several security enhancements such as preauthentication and PMKID caching are implemented by vendors in order to speed up roaming. If these enhancements are available, additional trials MAY be run after the baseline configuration in order to quantify the effect of these enhancements.




5.2.1.3.  Procedure

The tester connects the wireless endstations with the DUT or SUT in a uniformly distributed manner (i.e., each physical interface or WTP is presented with the same number of endstations). It then injects a continuous stream of traffic into the Ethernet side of the DUT or SUT that is destined for the wireless endstations. The traffic distribution MUST be uniform, i.e., the same offered load is set up for all of the wireless endstations.

The frame size of the injected traffic MUST be set as per section 3.4.3. The aggregate injected traffic load, as observed at any wireless interface, MUST NOT exceed 50% of the theoretical maximum capacity of that interface, to avoid seriously hampering the ability of the DUT or SUT to transfer the management frames that are exchanged during roaming handshakes. Note that the resolution of the roaming delay measurement is inversely proportional to the frame rate of the injected traffic (for example, 100 frames/second per endstation yields a resolution of roughly 10 milliseconds); the resolution MUST therefore be calculated and reported with the test results.

The tester then causes the wireless endstations to roam from interface to interface (from WTP to WTP, if WTPs are present in the SUT), and measures the roaming delay. After the trial duration has expired, the traffic is stopped and the minimum, maximum and average roaming delay contribution of the DUT or SUT, the average number of lost packets per roam, and the number of failed roams are measured and reported. The tester SHOULD report these results on a per-virtual-WLAN basis, and MAY report the results on a per-endstation basis as well.

As described in Section 3.4.12, the test MUST be performed with a baseline DUT or SUT setup of intra-subnet, intra-AC roaming.

The DUT or SUT configuration parameters are initially set up and tested according to the baseline. After the baseline configuration has been tested, the tester MAY repeat the process with a new set of configuration parameters, until the desired number of different configurations have been exercised. In particular, inter-subnet and/or inter-AC roaming situations SHOULD be tested. If these situations are tested, each situation SHOULD be tested with the same combination of test parameters, to enable the results to be compared.

The trial duration MUST be set to allow every endstation to roam at least once, and SHOULD be set to allow every endstation to perform a complete circuit of all of the interfaces and return to the starting point. Note that with a large number of endstations, this may require a fairly long trial duration.




5.2.1.4.  Analysis and reporting

The roaming delay contribution of the DUT or SUT is measured by subtracting the roaming delay contribution of the endstation from the overall roaming delay, and is expressed in seconds. The roaming delay contribution of the endstation is calculated by adding up all the delays (excluding packet transmission delays) incurred by the endstation due to internal processing, during the process of moving from one interface/WTP to another interface/WTP.

The average number of lost packets per roam is calculated by subtracting the total number of data traffic packets received by the endstations from the total number of data traffic packets sent to the endstations, and dividing by the total number of roams performed.

The number of failed roams is measured as the number of times that the DUT or SUT failed to begin transferring data packets to an endstation after the endstation completed the roaming process.
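
A compact illustrative sketch of the reported metrics, assuming the tester records per-roam overall and endstation-internal delays (in seconds) together with aggregate packet counters:

   def roaming_metrics(overall, endstation, sent, received, roams):
       # DUT/SUT contribution per roam = overall roaming delay
       # minus the endstation's own internal processing delay.
       dut = [o - e for o, e in zip(overall, endstation)]
       return {
           "min_delay": min(dut),
           "max_delay": max(dut),
           "avg_delay": sum(dut) / len(dut),
           "lost_pkts_per_roam": (sent - received) / float(roams),
       }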

The results SHOULD be reported in tabular format. Separate results MUST be reported per DUT or SUT configuration and test scenario (i.e., intra/inter-subnet and intra/inter-AC). The results MAY be also reported as profiles (a graph over time) to enable better understanding of the roaming behavior of the DUT or SUT.




5.2.2.  Endstation roaming rate




5.2.2.1.  Objective

In addition to the time required for an endstation to transition between WTPs (endstation roaming delay), the rate at which endstations can transition from one WTP to another also plays a part in the perceived responsiveness and reliability of a deployed WLAN. If the DUT or SUT is incapable of sustaining high roaming rates, then momentary periods of high mobility (e.g., a number of users congregating in a conference room) can cause issues such as dropped connections or handset calls. This test thus seeks to quantify the maximum rate at which the DUT or SUT can support roaming functions.

This test MUST be carried out on a DUT or SUT that involves two or more physical interfaces on the wireless side; if the SUT includes WTPs, then two or more WTPs MUST be present.




5.2.2.2.  Test parameters

The following parameters MUST be configured prior to each trial as specified in Section 3.5.14:

Number of wireless endstations, RTS/CTS usage, Security mode, DHCP mode and Number of virtual WLANs.

The baseline DUT or SUT configuration for performing this test consists of: a single wireless endstation, RTS/CTS disabled, security not used, DHCP not used, and a single virtual WLAN.

Several security enhancements such as preauthentication and PMKID caching are implemented by vendors in order to speed up roaming, and these enhancements frequently improve the roaming rate measurements as well. If such enhancements are available, additional trials MAY be run after the baseline configuration in order to quantify the effect of these enhancements.




5.2.2.3.  Procedure

The tester connects the wireless endstations with the DUT or SUT in a uniformly distributed manner (i.e., each physical interface or WTP is presented with the same number of endstations). It then injects a continuous stream of traffic into the Ethernet side of the DUT or SUT that is destined for the wireless endstations. The traffic distribution MUST be uniform, i.e., the same offered load is set up for all of the wireless endstations.

The frame size of the injected traffic MUST be set as per section 3.4.3. The aggregate traffic load at any wireless interface MUST NOT exceed 50% of the theoretical maximum capacity of that interface, to avoid seriously hampering the ability of the DUT or SUT to transfer the management and control frames that are exchanged during roaming handshakes.

The tester then causes the wireless endstations to roam from interface to interface (from WTP to WTP, if WTPs are present in the SUT) at a constant rate for the duration of the trial. Roam events MUST be triggered in a serial fashion, i.e., only one roam is initiated at a time, but the roam processes for different endstations MAY overlap in order to establish the desired roaming rate. After the trial duration has expired, the traffic is stopped and the number of failed roams is measured. A search algorithm is used to determine the maximum rate at which roams can be initiated without causing any failed roams. A failed roam is counted either when the reconnection handshake terminates unsuccessfully, or when the DUT or SUT fails to begin transferring data packets to an endstation after the endstation completes the roaming process.
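
One way to realize serial initiation at a constant rate is sketched below for illustration: roam initiations are spaced 1/rate seconds apart while cycling through the endstations, so individual roam handshakes may still overlap.

   def roam_initiation_schedule(num_stations, roams_per_station, rate):
       # Returns (time, station) tuples; exactly one roam is
       # initiated at each instant, 1/rate seconds apart.
       interval, t, events = 1.0 / rate, 0.0, []
       for _ in range(roams_per_station):
           for sta in range(num_stations):
               events.append((t, sta))
               t += interval
       return events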

As described in Section 3.4.12, the test MUST be performed with a baseline DUT or SUT setup of intra-subnet, intra-AC roaming.

The DUT or SUT configuration parameters are initially set up and tested according to the baseline. After the baseline configuration has been tested, the tester MAY repeat the process with a new set of configuration parameters, until the desired number of different configurations have been exercised. In particular, inter-subnet and/or inter-AC roaming situations SHOULD be tested. If these situations are tested, each situation SHOULD be tested with the same combination of test parameters, to enable the results to be compared.

The trial duration MUST be set to allow every endstation to roam at least once, and SHOULD be set to allow every endstation to perform a complete circuit of all of the interfaces and return to the starting point. Note that with a large number of endstations, this may require a fairly long trial duration.




5.2.2.4.  Analysis and reporting

The roaming rate supported by the DUT or SUT is measured and expressed as the number of roaming events that can be presented to the DUT or SUT per second without failures.

The results SHOULD be reported in tabular format. Separate results MUST be reported per DUT or SUT configuration and test scenario (i.e., intra/inter-subnet and intra/inter-AC).




5.2.3.  Endstation association rate




5.2.3.1.  Objective

The 802.11 protocol requires that an infrastructure endstation wishing to communicate must first authenticate and associate itself with a WTP, performing all the necessary security, ARP and DHCP functions in the process. The rate at which these functions can be carried out impacts the time taken for a wireless LAN to recover from faults and transient conditions, such as a WTP being reset or a group of endstations being turned on concurrently.

The objective of this test is hence to determine the rate at which the DUT or SUT can associate endstations.




5.2.3.2.  Test parameters

The following parameters MUST be configured prior to each trial as specified in Section 3.5.14:

Number of wireless endstations, RTS/CTS usage, Security mode, DHCP mode and Number of virtual WLANs.

In addition, the following test parameters MUST be configured to be the same for all trials:

Association Timeout - The tester MUST wait a predetermined amount of time for the DUT or SUT to complete all of the handshakes required for connection setup. If the DUT or SUT fails to complete the connection process within this time, the association attempt MUST be considered to have failed, and the connection attempt MUST be restarted.

Association Retry Limit - The tester MUST limit the number of times that the connection attempts for each endstation are repeated (on failure) before giving up and reporting the endstation as having failed to associate.

The association timeout and association retry limit used by the tester MUST be reported with the test results.

The baseline DUT or SUT configuration for performing this test consists of: a single wireless endstation, association timeout of 1 second, association retry limit of 1, RTS/CTS disabled, security not used, DHCP not used, and a single virtual WLAN.




5.2.3.3.  Procedure

The DUT or SUT is first set up according to the baseline configuration. The tester then presents the required number of virtual or physical test endstations to the DUT or SUT for connection, and measures the rate at which the DUT or SUT successfully completes associations. The tester MUST pipeline the associations (i.e., begin the association of the next endstation before the previous endstation has been fully associated) in order to present the DUT or SUT with as high a load as possible. The tester MUST record the actual average rate at which new endstations were presented over the course of the trial period.

If any of the endstations presented to the DUT or SUT fail to associate, then the tester MUST deauthenticate the associated endstations, reduce the association rate, and repeat the trial. If all of the endstations succeed in associating, the tester MUST increase the association rate and repeat the trial. The process continues until the maximum association rate is found. A search algorithm SHOULD be used to speed up the process.

It is recommended that the endstation capacity test in Section 5.2.4 be performed first to determine the maximum number of endstations that can successfully associate with the DUT. The number of virtual endstations presented to the DUT SHOULD be kept below this number.

After the test endstations successfully authenticate and associate with the DUT or SUT, the tester MUST verify that these endstations have indeed been associated by transmitting test data frames to the endstations, and ensure that these data frames are correctly forwarded by the DUT or SUT. The rate at which verification data frames are transmitted to the DUT or SUT MUST be well below the theoretical maximum capacity of the links. The tester MUST ensure that at least one data frame directed to each endstation is forwarded.

If the DUT or SUT deauthenticates or disassociates one or more endstations during the data transfer phase, these MUST be counted as association failures. If none of the test data frames transmitted to an endstation are forwarded successfully, this MUST be treated as a verification failure. If failures do occur, the tester SHOULD attempt to find a lower rate of association at which no verification failures occur for any of the test endstations.

After the baseline configuration has been tested, the tester MAY repeat the process with a new configuration, until the desired number of different configurations have been exercised. If the DUT or SUT supports a built-in DHCP server, at least one of the configurations tested SHOULD include endstations receiving their addresses via DHCP. (Provisioning IP addresses via DHCP is a very common situation in WLANs.)

After each trial has been completed, the tester MUST remove the test endstation associations from the DUT or SUT database by performing the 802.11 deauthentication procedure for each endstation.




5.2.3.4.  Analysis and reporting

The endstation association rate of the DUT or SUT is computed and reported as the maximum number of associations that can be successfully performed per second. Association and verification failures MUST be reported along with the test results.

If the test is repeated for different numbers of endstations, the results MAY be presented as graphs of endstation association rate versus number of test endstations.




5.2.4.  Endstation capacity




5.2.4.1.  Objective

The 802.11 protocol requires that an infrastructure endstation wishing to communicate must authenticate and associate with (i.e., connect to) a WTP. In order to track and maintain the connection state of each endstation, the WTP and/or AC must maintain a substantial database of endstation states and attributes. Further, this database must be consulted during all endstation state changes (e.g., during roaming); thus the resources consumed by this database place a considerable overhead on the WLAN infrastructure, and form an upper limit on the number of concurrently active endstations that can be supported by the WLAN.

The objective of this test is hence to determine the number of endstations that can be supported at one time by the DUT or SUT.




5.2.4.2.  Test parameters

The following parameters MUST be configured prior to each trial as specified in Section 3.5.14:

Endstation association rate, RTS/CTS usage, Security mode, DHCP mode and Number of virtual WLANs.

In addition, the following test parameters MUST be configured to be the same for all trials:

Association Timeout - The tester MUST wait a predetermined amount of time for the DUT or SUT to complete all of the handshakes required for connection setup. If the DUT or SUT fails to complete the connection process within this time, the association attempt MUST be considered to have failed, and the connection attempt MUST be restarted.

Association Retry Limit - The tester MUST limit the number of times that the connection attempts for each endstation are repeated (on failure) before giving up and reporting the endstation as having failed to associate.

The association timeout and association retry limit used by the tester MUST be reported with the test results.

The baseline DUT or SUT configuration for performing this test consists of: association rate of 1 per second, association timeout of 1 second, association retry limit of 1, RTS/CTS disabled, security not used, DHCP not used, and a single virtual WLAN.




5.2.4.3.  Procedure

The DUT or SUT is first set up according to the baseline configuration. The tester then presents virtual or physical test endstations to the DUT at the required rate for connection, until the DUT or SUT fails to associate at least one endstation even after the required number of retries have been performed.

After the test endstations successfully authenticate and associate with the DUT or SUT, the tester MUST verify that these endstations have indeed been associated by transmitting test data frames to the endstations, and ensure that these data frames are correctly forwarded by the DUT or SUT. The rate at which verification data frames are transmitted to the DUT or SUT MUST be well below the theoretical maximum capacity of the links. The tester MUST ensure that at least one data frame directed to each endstation is forwarded. If multiple virtual WLANs are used, the endstations MUST be distributed uniformly across them.

If the DUT or SUT deauthenticates or disassociates one or more endstations during the data transfer phase, these MUST be counted as association failures. If none of the test data frames transmitted to an endstation are forwarded successfully, this MUST be treated as a verification failure. If failures do occur, the tester MUST decrement the reported endstation capacity by the number of endstations for which verification failures occurred.

After the baseline configuration has been tested, the tester MAY repeat the process with a new configuration, until the desired number of different configurations have been exercised.

After each trial has been completed, the tester MUST remove the test endstation associations from the DUT or SUT database by performing the 802.11 deauthentication procedure for each endstation.




5.2.4.4.  Analysis and reporting

The endstation capacity of the DUT or SUT is computed and reported as the maximum number of endstations that can be successfully associated with the DUT or SUT at the specified association rate. Association and verification failures SHOULD be reported along with the test results.

If the test is repeated for different association rates, the results MAY be presented as graphs of endstation association rate versus endstation capacity.




5.2.5.  WTP capacity




5.2.5.1.  Objective

To determine the number of WTPs that an AC can successfully support at one time. This test is only applicable to ACs.

The CAPWAP protocol enables an AC to discover, initialize, configure, manage and transfer data to/from a number of WTPs. The total number of WTPs that a single AC can support can range into the hundreds, implying that the management load on the AC can be quite substantial, and the ability of the AC to support and maintain the WTPs will play a substantial part in the performance and uptime of the LAN.

This test therefore measures the ability of the DUT or SUT to support and maintain a large number of WTPs, each of which is concurrently supporting a large number of endstations, each of which is sinking and/or sourcing data traffic.

This test may be carried out with actual (physical) WTPs, or by using test equipment capable of emulating WTPs and the wireless endstations behind them. If emulated WTPs are used, they must implement the management protocol (e.g., CAPWAP) employed between the WTPs and the ACs. If actual WTPs are used, emulation is not necessary, but the WTP capacity of the AC must then be determined by manually connecting and disconnecting physical WTPs.




5.2.5.2.  Test parameters

The following parameters MUST be configured prior to each trial as specified in Section 3.5.14:

Number of wireless endstations per WTP, Offered load per endstation, Traffic direction, Security mode, DHCP mode and Number of virtual WLANs.

In addition, the following test parameter MUST be configured to be the same for all trials:

Initialization Timeout - The tester MUST wait a predetermined amount of time for the DUT or SUT to discover the WTPs and complete all of the handshakes required for initialization and setup, including firmware download. The initialization timeout MUST NOT exceed 900 seconds.

The initialization timeout used by the tester MUST be reported with the test results.

The baseline DUT or SUT configuration for performing this test consists of: a single wireless endstation per WTP, offered load per endstation of 25% of theoretical maximum medium capacity, downstream (Ethernet to wireless) traffic flow, initialization timeout of 600 seconds, security not used, DHCP not used, and a single virtual WLAN.




5.2.5.3.  Procedure

The DUT or SUT is first set up according to the baseline configuration. The tester then presents an initial number of virtual or physical test WTPs to the DUT. The test WTPs SHOULD be presented as nearly simultaneously as possible. If the DUT or SUT fails to initialize the number of test WTPs successfully within the initialization timeout, the tester MUST consider the trial to have failed, and another trial MUST be performed with a smaller number of WTPs. If the DUT or SUT succeeds in initializing all of the test WTPs, the test WTPs should be removed, and the next trial performed with a larger number, until no more test WTPs can be connected.

Once all of the test WTPs have been successfully connected and initialized, the tester MUST associate the configured number of endstations with each WTP, and then MUST cause traffic to flow to and/or from each endstation between the wireless and Ethernet sides of the DUT. The tester MUST first send learning frames (after endstation connection setup as per the security mode) to allow the DUT or SUT to update its address tables properly.

In tests involving multiple endstations per WTP, the tester MUST ensure a uniform distribution of frames to/from each endstation. In addition, all permissible combinations of source and destination addresses (consistent with the traffic direction setting) MUST be represented equally within each trial, to distribute the load of transmission and reception uniformly.

If the DUT or SUT deauthenticates one or more test endstations, the tester MUST attempt to reauthenticate these endstations. If the tester fails to reauthenticate the endstations, then the trial MUST be considered to have failed, and MUST be repeated with a smaller number of WTPs. If the DUT or SUT disconnects from one or more WTPs, or reboots, the trial MUST be considered to have failed, and MUST be repeated with a smaller number of WTPs. A successful trial is one in which the specified number of test WTPs remain connected to the DUT or SUT and the specified number of test endstations are able to remain associated and successfully pass traffic for the trial duration.

After the baseline configuration has been tested, the tester MAY repeat the process with a new configuration, until the desired number of different configurations have been exercised.

The tester MUST remove the test endstation context and the WTP context from the DUT or SUT database after the completion of each trial, by performing the 802.11 deauthentication procedure for each associated endstation and disconnecting or powering down the WTP after all the associated endstations have been deauthenticated.




5.2.5.4.  Analysis and reporting

The WTP capacity of the DUT or SUT is computed and reported as the maximum number of WTPs that can be simultaneously connected to the DUT, with associated endstations that can successfully exchange data, over the entire trial duration.




5.2.6.  Reset recovery time




5.2.6.1.  Objective

As pointed out in RFC 2544 [RFC2544] and RFC 1242 [RFC1242], the rapidity with which a WLAN infrastructure device transitions from a reset state to a fully operational state affects the perceived availability and stability of a wireless network. For example, an excessive time required to recover from a reset can force endstations to begin scanning for other WTPs, cause higher-layer connections to be dropped, and so on.

This test therefore measures the speed with which a DUT or SUT recovers from a device or software reset and resumes forwarding endstation traffic. It is performed on either SUTs comprising WTPs and ACs, or on DUTs comprising ACs only.




5.2.6.2.  Test parameters

The following parameters MUST be configured prior to each trial as specified in Section 3.5.14:

Number of WTPs, Number of wireless endstations per WTP, Offered load per endstation, Traffic direction, Security mode, DHCP mode and Number of virtual WLANs.

In addition, the following test parameter MUST be configured to be the same for all trials:

Reset duration - The reset duration MUST NOT be less than 5 seconds, and SHOULD be at least 10 seconds. The tester MUST subtract the reset duration from the measured traffic interruption interval. The trial duration MUST be sufficient to cover both the reset duration and the anticipated recovery time of the DUT or SUT.

The reset duration used by the tester MUST be reported with the test results.

The baseline DUT or SUT configuration for performing this test consists of: a single wireless endstation per WTP, offered load per endstation of 25% of theoretical maximum medium capacity, downstream (Ethernet to wireless) traffic flow, reset duration of 10 seconds, security not used, DHCP not used, and a single virtual WLAN.




5.2.6.3.  Procedure

The DUT or SUT is set up according to the baseline configuration. The tester should then associate the virtual or physical endstation(s) with the WTPs, and transmit test data between the wireless and Ethernet sides of the DUT or SUT according to the configured traffic direction for the trial duration.

Midway through the trial period, the DUT or SUT is reset, and then allowed to recover normally. The reset process of the DUT or SUT MUST NOT be artificially short-circuited in any way (e.g., by providing predefined configurations that are not part of normal operational practice). The tester MUST monitor the data traffic being received by the wireless endstations before, during and after the reset period, and MUST record the timestamps of the last valid data frame received just before the reset is applied and the first valid data frame received just after the reset duration completes and the reset is removed. The reset recovery time is then calculated and reported as below.

A power-interruption reset test MUST be performed. If the DUT or SUT is capable of a software reset and/or a hardware reset, then the test SHOULD be repeated with the software and hardware resets. The results MUST be reported separately.

Devices that are not considered to be part of the DUT or SUT MUST NOT be reset. For example, if the setup comprises the tester, one or more WTPs, and an AC, and only the AC is considered to be part of the DUT or SUT, then the reset (hardware or software) MUST be applied only to the AC.

The tester MUST distinguish between valid test traffic frames and other frames (e.g., corrupted frames, management frames, random data frames) and MUST use only valid test traffic frames in the measurement process.

If the test endstations are disconnected or disassociated from the DUT or SUT during the reset duration or the recovery period, the tester MUST attempt to reconnect the test endstations as soon as beacons are received from the WTPs during the recovery period. Disconnection of one or more test endstations during the reset duration or recovery period MUST be reported with the test results.




5.2.6.4.  Analysis and reporting

The reset recovery time MUST be measured and reported as the time, in seconds, between the last test data frame received just prior to the application of the reset and the first test data frame received just following the removal of the reset, minus the reset duration (see Section 5.2.6.2).
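
A minimal sketch of this computation, including the reset-duration subtraction required by Section 5.2.6.2; the timestamps are assumed to be in seconds.

   def reset_recovery_time(t_last_before, t_first_after, reset_duration):
       # Traffic interruption interval minus the reset duration
       # itself (the DUT cannot recover while held in reset).
       return (t_first_after - t_last_before) - reset_duration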




5.2.7.  Failover recovery time




5.2.7.1.  Objective

To determine the speed with which a DUT or SUT containing redundant modules or devices can restore service after a failure of a module.

The rapidity with which the DUT or SUT is able to restore service following failure of one of its components directly affects the availability and uptime of a wireless network. In critical situations, it is common to provide backup ACs that are capable of taking over when a primary AC fails. Rapid restoration of service is essential to avoid issues such as dropped VoIP calls and lost TCP connections.

This test quantifies the time taken for a WLAN infrastructure DUT or SUT to restore service following a failure of a primary AC. Note that this test is only applicable to DUTs/SUTs that implement redundant ACs or AC modules. Further, this test is only feasible in situations where the primary AC or AC module can be disabled or removed without powering down the DUT or SUT, and while traffic is flowing.




5.2.7.2.  Test parameters

The following parameters MUST be configured prior to each trial as specified in Section 3.5.14:

Number of WTPs, Number of wireless endstations per WTP, Offered load per endstation, Traffic direction, Security mode, DHCP mode and Number of virtual WLANs.

The baseline DUT or SUT configuration for performing this test consists of: one WTP, a single wireless endstation per WTP, offered load per endstation of 25% of theoretical maximum medium capacity, downstream (Ethernet to wireless) traffic flow, security not used, DHCP not used, and a single virtual WLAN.




5.2.7.3.  Procedure

The DUT or SUT is set up according to the baseline configuration. The tester should then associate the virtual or physical endstation(s) with the WTPs, and transmit test data between the wireless and Ethernet sides of the DUT or SUT according to the configured traffic direction for the trial duration.

Midway through the trial period, the primary AC is disabled in one of the following ways:

Removing it from the chassis (if implemented as a hot-swappable module)

Powering it down (if implemented as a separately powered device)

Disconnecting it from the rest of the DUT or SUT (if disconnection is possible by removing a single link)

Disabling it under software control

The software disable method SHOULD be used only as a last resort, if no other means of disabling the primary AC is found. The method used to disable the primary AC MUST be described along with the test results.

The tester MUST monitor the data traffic being received by the wireless endstations before and after the primary AC is disabled. If data traffic is interrupted during the trial for any reason, the tester MUST record and report the duration of the interruption as the failover recovery time. Note that it is possible for the failover recovery to be fully transparent to the data traffic, in which case the failover recovery time is effectively zero.

The tester MUST distinguish between valid test traffic frames and other frames (e.g., corrupted frames, management frames, random data frames) and MUST use only valid test traffic frames in the measurement process.

If the test endstations are disconnected or disassociated from the DUT or SUT during the failover recovery period, the tester MUST attempt to reconnect the test endstations immediately, and continue to attempt to reconnect the test endstations until successful (or until the trial ends, whichever is sooner). Disconnection of one or more test endstations during the failover recovery period MUST be reported with the test results. Failure to restore traffic to one or more test endstations after the primary AC has been disconnected MUST be reported with the test results.




5.2.7.4.  Analysis and reporting

The failover recovery time MUST be measured and reported as the time, in seconds, during which data traffic is interrupted after the primary AC has been disabled.




6.  Security Considerations

Documents of this type do not directly affect the security of the Internet or of corporate networks as long as benchmarking is not performed on devices or systems connected to operating networks.

Note that performance tests SHOULD be done with adequate isolation between the DUT or SUT and the remainder of the network, or with security systems enabled, to avoid the possibility of compromising the performance of operating networks.




7.  IANA Considerations

There are no IANA actions requested in this memo. (Note to RFC Editor: This section may be removed upon publication as an RFC.)




8.  References




8.1.  Normative References

[802.11] IEEE, “ANSI/IEEE Std 802.11, Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications,” ISO/IEC 8802-11:1999(E), ISBN 0-7381-1658-0, 1999.
[802.3] IEEE, “ANSI/IEEE Std 802.3, Part 3: Carrier sense multiple access with collision detection (CSMA/CD) access method and physical layer specifications,” ISBN 0-7381-4740-0, 2005.
[RFC1042] Postel, J. and J. Reynolds, “Standard for the transmission of IP datagrams over IEEE 802 networks,” STD 43, RFC 1042, February 1988.
[RFC1112] Deering, S., “Host extensions for IP multicasting,” STD 5, RFC 1112, August 1989.
[RFC1242] Bradner, S., “Benchmarking terminology for network interconnection devices,” RFC 1242, July 1991.
[RFC2119] Bradner, S., “Key words for use in RFCs to Indicate Requirement Levels,” BCP 14, RFC 2119, March 1997.
[RFC2285] Mandeville, R., “Benchmarking Terminology for LAN Switching Devices,” RFC 2285, February 1998.
[RFC2544] Bradner, S. and J. McQuaid, “Benchmarking Methodology for Network Interconnect Devices,” RFC 2544, March 1999.
[RFC2889] Mandeville, R. and J. Perser, “Benchmarking Methodology for LAN Switching Devices,” RFC 2889, August 2000.
[RFC3550] Schulzrinne, H., Casner, S., Frederick, R., and V. Jacobson, “RTP: A Transport Protocol for Real-Time Applications,” STD 64, RFC 3550, July 2003.
[RFC3918] Stopp, D. and B. Hickman, “Methodology for IP Multicast Benchmarking,” RFC 3918, October 2004.
[RFC4814] Newman, D. and T. Player, “Hash and Stuffing: Overlooked Factors in Network Device Benchmarking,” RFC 4814, March 2007.



8.2.  Informative References

[802.11.2] IEEE, “IEEE P802.11.2, Draft Recommended Practice for the Evaluation of 802.11 Wireless Performance,” 2007.
[G.107] ITU, “ITU-T Recommendation G.107, The E-model, a computational model for use in transmission planning,” 2003.



Appendix A.  Intended load computations

Calculating the intended load for 802.11 media access is complicated by the number of parameters that must be accounted for, as well as by the random effects of backoff and by management overhead. This appendix provides formulas for the theoretical maximum capacity of the medium, the actual intended load, and the inter-burst gap.

Note that the instantaneous capacity of the 802.11 medium changes from transmission to transmission due to the effects of random backoff after each transmission. The formulas presented here are therefore expected to be applied over a large volume of traffic, rather than individual frames or bursts of frames. In addition, the parameters used in the formulas change for different 802.11 physical layers and also different data rates used within a particular physical layer.




A.1.  Calculating theoretical maximum media capacity

The theoretical maximum media capacity is calculated assuming constant-size data frames, transmitted with the minimum frame spacing according to the 802.11 protocol, with no collisions or retries occurring.

The following input parameters are defined:

LENGTH - MAC Data frame size in bytes, including FCS. For fragmented transfers, this is the size of each fragment.

SPEED - PHY data rate for the MAC portion of a DATA frame, in bits/second.

PLCPTIME - Time required to transmit the PLCP header for the given 802.11 PHY type and data rate, in seconds.

SLOTTIME - The slot time for the given 802.11 PHY type and data rate, in seconds.

DIFS - The Distributed Interframe Space (see subclause 9.2.10 of [802.11]), in seconds.

SIFS - The Short Interframe Space (see subclause 9.2.10 of [802.11]), in seconds.

CWmin - The minimum contention window duration (see subclause 9.2.4 of [802.11]), in slot times.

The following intermediate values are calculated first:

TXTIME - Time required to transmit a single Data frame or fragment. For transfers that do not involve an RTS/CTS exchange, this is the time taken to transmit the Data frame plus the immediately following ACK frame (see subclause 9.2.8 of [802.11]). For transfers involving an RTS/CTS exchange, this is the time taken to transmit the RTS, CTS, Data, and ACK frames.

For RTS/CTS based transfers:

TXTIME = (PLCPTIME * 4) + (SIFS * 3) + (((LENGTH + 48) * 8) / SPEED)

For transfers not involving RTS/CTS:

TXTIME = (PLCPTIME * 2) + SIFS + (((LENGTH + 14) * 8) / SPEED)

AMFI - Average Minimum Frame Interval. This is the minimum legal interval between the start of a Data frame and the start of the immediately following Data frame, averaged over a large number of Data frames.

AMFI = TXTIME + DIFS + ((CWmin * SLOTTIME) / 2)

The theoretical maximum capacity of the medium (abbreviated as CAP), in bits/second, is then given by:

CAP = (LENGTH * 8) / AMFI

The above formula does not take into account the overhead due to management frames such as beacons and probe requests/responses. The tester SHOULD separately account for management frame overhead during a trial and subtract this overhead from the calculated theoretical capacity in order to compensate for the capacity loss due to these frames.
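
As a non-normative illustration, the calculation above can be expressed in a few lines of Python. The example parameter values are assumptions for 802.11b at 11 Mb/s with a long preamble, and would be replaced with the values appropriate to the PHY type and data rate under test.

   # Non-normative sketch of the Appendix A.1 calculation.
   def txtime(length, speed, plcptime, sifs, rts_cts=False):
       # Time to complete one Data frame exchange, in seconds.
       if rts_cts:
           # RTS (20 B) + CTS (14 B) + ACK (14 B) add 48 bytes
           return (plcptime * 4) + (sifs * 3) + ((length + 48) * 8) / speed
       # The immediately following ACK frame adds 14 bytes
       return (plcptime * 2) + sifs + ((length + 14) * 8) / speed

   def amfi(txt, difs, cwmin, slottime):
       # Average Minimum Frame Interval, in seconds.
       return txt + difs + (cwmin * slottime) / 2

   def cap(length, amfi_s):
       # Theoretical maximum media capacity, in bits/second.
       return (length * 8) / amfi_s

   # Assumed example: 802.11b at 11 Mb/s, long preamble
   t = txtime(length=1508, speed=11.0e6, plcptime=192e-6, sifs=10e-6)
   a = amfi(t, difs=50e-6, cwmin=31, slottime=20e-6)
   print("CAP = %.0f bits/s" % cap(1508, a))  # roughly 6.5 Mb/s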




A.2.  Calculating constant intended load

The calculations in this section deal with a constant (steady) load generated by the tester (i.e., a constant frame pattern). Burst loads are covered in the next section.

If the DUT or SUT is not to be overloaded, the intended unidirectional traffic load can range from 0 to 100% of the theoretical maximum media capacity previously calculated (0 to 50% in the case of bidirectional traffic streams). See Section 3.5.1 of RFC 2285 [RFC2285] for a full definition of Iload. For the purposes of this document, the intended load is expressed as a percentage of the theoretical maximum media capacity, and calculated as Iload using the following formula:

Iload = (LOAD / CAP) * 100

where LOAD is the load in bits/second, and CAP is calculated as in Section A.1.

In order to actually generate traffic at Iload values less than 100%, the tester must insert extra spacing between frames to reduce the traffic load. This extra spacing is referred to here as EFG (Excess Frame Gap), and is calculated as follows:

EFG = AMFI * ((100 / Iload) - 1)

The actual frame interval therefore becomes (AMFI + EFG). The traffic pattern generated by the tester thus consists of a Data frame, the corresponding ACK frame (from the DUT or SUT), a gap equal to the DIFS plus the average minimum backoff time, and a further gap equal to the EFG.

Generating Iload values greater than 100% requires that the tester violate the backoff rules of the 802.11 protocol. The tests in this document do not require Iload values greater than 100%.
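
As a non-normative illustration continuing the A.1 sketch above, the tester-side spacing computation might look as follows; the 25% figure is simply the baseline load used elsewhere in this document.

   # Non-normative sketch of the Appendix A.2 calculation.
   def iload(load_bps, cap_bps):
       # Intended load as a percentage of theoretical maximum capacity.
       return (load_bps / cap_bps) * 100

   def efg(amfi_s, iload_pct):
       # Excess Frame Gap, in seconds, for Iload values below 100%.
       return amfi_s * ((100.0 / iload_pct) - 1)

   # e.g., at 25% intended load the actual frame interval becomes
   # AMFI + EFG = AMFI * (100 / 25) = 4 * AMFI
   spacing = a + efg(a, 25.0)  # 'a' is the AMFI from the A.1 sketch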




A.3.  Calculating burst intended load

This section deals with the computation of intended load when the traffic pattern is bursty. A bursty pattern comprises a series of back-to-back Data/ACK exchanges separated by a DIFS, followed by a gap, followed by another series of back-to-back exchanges, and so on. The gap between bursts (referred to as the Inter-Burst Gap, or IBG) is selected based on the intended load. In addition, the IBG is calculated such that the Iload values for bursty and constant traffic are directly comparable. (See Section 3.4.3 of RFC 2285 [RFC2285] for a discussion of the IBG.)

The following input parameters are defined, in addition to those defined above:

BURST - Length of burst in frames.

For a given Iload, the IBG is calculated as:

IBG = DIFS + (AMFI * BURST * ((100 / Iload) - 1))

Note that the IBG is measured from the last bit of the ACK frame of the last data frame in a burst to the first bit of the preamble of the first data frame in the next burst.
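
As a final non-normative illustration, the IBG computation follows directly from the formula above; all symbols are as defined in this appendix, and the burst length and load in the usage line are assumed example values.

   # Non-normative sketch of the Appendix A.3 calculation.
   def ibg(difs_s, amfi_s, burst, iload_pct):
       # Inter-burst gap, in seconds, for a given burst length and Iload.
       return difs_s + (amfi_s * burst * ((100.0 / iload_pct) - 1))

   # e.g., a 10-frame burst at 25% intended load, using the A.1 values:
   gap = ibg(50e-6, a, burst=10, iload_pct=25.0)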




Authors' Addresses

  Tarunesh Ahuja
  Cisco Systems, Inc.
  170 West Tasman Dr.
  San Jose, California 95134
  USA
Phone:  +1 408 853 9252
Email:  tahuja@cisco.com
  
  Tom Alexander
  VeriWave, Inc.
  8770 SW Nimbus Ave,
  Beaverton, Oregon 97008
  USA
Phone:  +1 971 327 7490
Email:  tom@veriwave.com
  
  Scott Bradner
  Harvard University
  29 Oxford St.
  Cambridge, Massachusetts 02138
  USA
Phone:  +1 617 495 3864
Email:  sob@harvard.edu
  
  Sanjay Hooda
  Cisco Systems, Inc.
  170 West Tasman Dr.
  San Jose, California 95134
  USA
Phone:  +1 408 527 6403
Email:  shooda@cisco.com
  
  Jerry Perser
  VeriWave, Inc.
  5743 Corsa Avenue, Suite 224
  Westlake Village, California 91362
  USA
Phone:  +1 818 889 2071
Email:  jperser@veriwave.com
  
  Muninder Sambi
  Cisco Systems, Inc.
  170 West Tasman Dr.
  San Jose, California 95134
  USA
Phone:  +1 408 525 7298
Email:  msambi@cisco.com



Full Copyright Statement

Intellectual Property