Cisco ATM Performance Comparison

AIP vs VIP2/PA-A1

Test Results

M. Foster / H. LaMaster

NREN Engineering

1.0 Introduction

The Virtual Clinic demonstration application has an end-point (UC Santa Cruz) that consists of a Cisco 7500 with an RSP2 router board and an AIP ATM interface card. Preliminary measurements suggested that this combination of components could process only 4-20 Mbps of multicast traffic. To assess this limitation more completely, controlled testing was done in the NREN Lab. This report details our findings.

The goal of this testing was to measure the performance of NREN router configurations for high-performance multicast service when using an ATM Interface Processor (AIP), and to compare those measurements with similar tests using an ATM port adapter (PA-A1) on a VIP2. The tests focused on the following areas:

• Unicast throughput

• Native multicast throughput

• Multicast tunnel (GRE) throughput

2.0 Test Configurations

These tests were run using existing Virtual Clinic routers at Ames (one at the NAS, one in the NREN Lab). One of the existing 7500s has an RSP4 and a VIP2/40 with a PA-A1 ATM port adapter; this setup was used to make the baseline measurements. The AIP was then installed in that 7500 and additional measurements were taken. An SGI workstation that is already part of the Virtual Clinic testbed (xena) was used to generate test traffic, and a FreeBSD PC in the Lab (lemur) was used as the receiver. Figure 1 diagrams the configuration.

The throughput tests were run using ttcp and the Virtual Clinic demonstration application. Unicast performance was measured with ttcp in both directions. The Virtual Clinic demonstration program was used to produce varying amounts of multicast traffic, with the vat tool acting as a sink for that traffic.
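For reference, a representative pair of ttcp invocations for the unicast tests is sketched below. This is a sketch assuming the classic BSD ttcp option set; the buffer size shown corresponds to the 65k case in Table 1 (the 8k case uses the 8192-byte default), and the exact options used during testing may have differed.

    # On the receiver (lemur): sink the incoming TCP data and report throughput
    ttcp -r -s -l 65536

    # On the sender (xena): source 2048 buffers of 65536 bytes toward lemur
    ttcp -t -s -l 65536 -n 2048 lemur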

Figure 1. Test Configuration

3.0 Test Results

Table 1 summarizes the observations we made during the testing process. While the unicast throughput measurements were made in each direction, the values were nominally symmetric, so no distinction is made in the table. The multicast tests were done using both native (PIM) multicast routing and a multicast GRE tunnel between vdoc-antl and nren-antl (to simulate one of the possible conditions of the Santa Cruz configuration). In the configuration diagram, the arrows indicate the path and direction of the multicast flow. We did baseline testing using IOS 11.1(24.2) and repeated the tests with an experimental version of IOS 11.1(25.1). The primary difference between these versions is that 11.1(25.1) adds Multicast Distributed Switching (MDS) for ATM subinterfaces. With MDS, multicast forwarding was handled entirely by the incoming VIP2/40, leaving the RSP CPU essentially idle.
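To make the test setup concrete, the sketch below shows roughly what the relevant pieces of the router configuration look like. It is a minimal sketch, not the actual testbed configuration: the addresses, interface and tunnel numbers, and PVC values are placeholders, and the MDS commands shown are the standard IOS forms (ip multicast-routing distributed globally, ip mroute-cache distributed per interface); the experimental 11.1(25.1) image may differ in detail.

    ! Minimal configuration sketch (not the actual testbed configs).
    ! Addresses, interface/tunnel numbers, and PVC values are placeholders.
    !
    ! Global: the "distributed" keyword enables MDS (11.1(25.1) image only).
    ip multicast-routing distributed
    !
    ! ATM subinterface: native (PIM) multicast, distributed switching on
    ! the VIP, and per-subinterface MDS.
    interface ATM1/0.1 point-to-point
     ip address 192.0.2.1 255.255.255.252
     ip pim sparse-dense-mode
     ip route-cache distributed
     ip mroute-cache distributed
     atm pvc 1 0 100 aal5snap
    !
    ! Multicast GRE tunnel between vdoc-antl and nren-antl.
    interface Tunnel0
     ip address 192.0.2.5 255.255.255.252
     ip pim sparse-dense-mode
     tunnel source ATM1/0.1
     tunnel destination 192.0.2.2
     tunnel mode gre ip

Without the two MDS-related commands, the same configuration corresponds to the 11.1(24.2) baseline, in which the RSP performs the multicast forwarding.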

During testing, we found a potential bottleneck in the configuration: the nren-antl 7500 has a VIP2/20 for its PA-A1 interface, and this proved to be a particularly limiting factor for multicast (GRE) tunnels. Although we saw a peak throughput of 40 Mbps during the baseline measurements, it was not sustainable for any length of time; the sustainable rate observed was 32-35 Mbps, and significant packet discard appeared to be occurring on the (inbound) VIP. Further tests are warranted to resolve this issue more fully; installing the AIP in nren-antl and repeating the tests should be sufficient. An alternative would be to install a VIP2/40 in place of the nren-antl VIP2/20.
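The discard observation is based on standard IOS counters; commands along the lines of the following (the interface identifier is a placeholder) are what we would use to confirm where the drops occur:

    show interfaces ATM1/0      (input/output drops and errors on the ATM interface)
    show controllers cbus       (the RSP's view of the installed interface processors/VIPs)
    show ip mroute count        (multicast packet and drop counts per group)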

An additional motivation for further testing is to more closely duplicate the UCSC configuration. The Cisco multicast architecture places the burden of multicast forwarding on the board/adapter that initially receives an incoming packet (or on the RSP). Since the incoming interface at UCSC is an AIP, while in our test configuration the AIP was an outgoing interface, testing with a different router setup or a reversal of the direction of multicast flow is warranted.

Table 1. Results Summary

PA-A1, IOS 11.1(24.2)
  Unicast (8k & 65k bufsize):  55-58 Mbps
  Native multicast:            45 Mbps (0% loss); 50 Mbps (0% loss); 52+ Mbps (5% loss)
  GRE tunnel:                  55 Mbps source, peak throughput 40 Mbps*; vdoc-antl CPU 94%, nren-antl CPU 36%

PA-A1, IOS 11.1(25.1)
  Unicast (8k & 65k bufsize):  55 Mbps
  Native multicast:            45 Mbps (0-2% loss); 50 Mbps (2-3% loss); 52 Mbps (5+% loss)
  GRE tunnel:                  55 Mbps source, peak throughput 40 Mbps*; vdoc-antl CPU 0% (MDS), nren-antl CPU 36%

AIP, IOS 11.1(25.1)
  Unicast (8k & 65k bufsize):  55 Mbps
  Native multicast:            45 Mbps (0% loss); 50 Mbps (0% loss); 52 Mbps (5+% loss)
  GRE tunnel:                  52 Mbps source, maxes at 32-35 Mbps at the receiver; vdoc-antl CPU 0%, nren-antl CPU 20-36%

*See text for more explanation about the VIP2/20; these peak throughput numbers are suspect.

4.0 Conclusion

Our initial conclusion is that the AIP interface poses no clear performance bottleneck for the Virtual Clinic multicast application when used in a 7500 with an RSP4. In fact, a PA-A1 port adapter on a VIP2/20 may perform worse than an AIP used with an RSP4 (some informal tests we ran support this assertion). Also, while one might expect higher throughput for both unicast and multicast traffic on this testbed, we did not attempt to extract the maximum performance through careful end-host buffer tuning. Previous informal measurements have indicated that 60-80 Mbps of multicast forwarding is possible with an RSP4 performing the forwarding. Our primary goal for this testing was to determine the relative performance of the two interfaces and thereby assess the suitability of the AIP for the Virtual Clinic demonstration.