
perfSONAR Test Results – 10/28/13 to 10/31/13

Jianan Wang ran a series of perfSONAR tests, some scheduled so that their timing overlapped and some run with no overlap.

The three blade chassis, each containing four blade servers, are connected to three independent OpenFlow switches.

Connections from blades SDN09/SDN10/SDN11/SDN12 to SDN05/SDN06/SDN07/SDN08 traverse two OpenFlow switches.

Connections from blades SDN09/SDN10/SDN11/SDN12 to SDN01/SDN02/SDN03/SDN04 traverse three OpenFlow switches.

As expected, the runs with no overlap showed approximately 9 Gbps of available bandwidth for traffic that ran through two switches, while the overlapping runs (across three servers) showed approximately 3 Gbps.

In the unblocked test, a reduction of 1 Gbps was seen in the traffic that flowed across the additional switch.
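To make the overlapping versus non-overlapping scheduling concrete, below is a minimal sketch of how such runs could be driven. It is illustrative only: iperf3 stands in for the perfSONAR/BWCTL tooling actually used, and the host names and 30-second duration are placeholders, not values from these tests.

#!/usr/bin/env python3
"""Sketch of non-overlapping vs. overlapping throughput runs (assumptions noted above)."""
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Placeholder destinations; substitute the real blade addresses.
TARGETS = ["sdn01.example.edu", "sdn05.example.edu"]
DURATION = 30  # seconds per test (assumed value)


def run_test(target: str) -> str:
    """Run one client-side iperf3 test against an already-running iperf3 server."""
    result = subprocess.run(
        ["iperf3", "-c", target, "-t", str(DURATION)],
        capture_output=True, text=True, check=True,
    )
    return result.stdout


# Non-overlapping runs: each test has the path to itself.
for target in TARGETS:
    print(run_test(target))

# Overlapping runs: the tests share the inter-chassis links at the same time,
# which is where the measured rate dropped from roughly 9 Gbps to roughly 3 Gbps.
with ThreadPoolExecutor(max_workers=len(TARGETS)) as pool:
    for output in pool.map(run_test, TARGETS):
        print(output)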

SDN09 Rates

As a reminder, here is an overview of the server topology:

SDN Topology Image

Here is a shorter time window that shows the variation between blocking and non-blocking perfSONAR tests (note that this is for SDN10 connections to SDN01 and SDN05).

SDN10 Rates

perfSONAR Tests – 9/03/13 – Bandwidth Testing Between Blade Chassis

perfSONAR test from SDN-09 to SDN-01, SDN-02, and SDN-12

Jianan Wang ran a series of perfSONAR tests from approximately 12:00 AM on 9/3/13 to 4:00 AM on 9/4/13.

These tests were run from SDN-09 (in NB154) to SDN-01 and SDN-02 in NB-145 (DSCR), as well as to SDN-12 in the same chassis as SDN-09.

The results are shown below:

perfSONAR test from SDN-09 to SDN-01, SDN-02, and SDN-12

As expected, the performance that stayed ‘in chassis’, from SDN-09 to SDN-12, was constant at about 9 Gbps. The performance achieved for inter-chassis communication, which must traverse the core network on the OpenFlow VRF, showed significant variability. After the tests it was determined that testing from machines in the DSCR to the OIT NAS heads in NB154 was responsible for the performance degradation during the day. Given that SDN-01 and SDN-02 are in the same chassis, we are not certain of the cause of the difference in performance between them during the period from midnight to 10 AM, but they appeared to converge to between 8 and 8.5 Gbps after midnight on the 3rd.

Testing Processes

A series of bandwidth tests has been performed using the existing production network in order to understand baseline performance capabilities and to confirm the impact that large file transfers have on the network.

It was assumed that a 10G-connected server would not be able to impact the 20G Duke core network. However, we have seen multiple impacts on the Duke network due to baseline testing for this project.

perfSONAR tests have been shown to be non-intrusive, but large, dedicated ‘load cannons’ that perform sustained file transfers have highlighted capacity issues in the core Duke academic campus network.
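For the sustained ‘load cannon’ style of transfer, one way to keep baseline testing from pressuring the production core is to cap the sender. The sketch below is an assumption-laden illustration only: iperf3 stands in for the actual file-transfer tooling, and the target host, 2 Gbps cap, and four-hour duration are made-up values, not those used in these tests.

#!/usr/bin/env python3
"""Sketch of a rate-limited sustained transfer (all values are placeholders)."""
import subprocess

TARGET = "sdn11.example.edu"   # placeholder destination blade
CAP = "2G"                     # per-stream bitrate cap (assumed value)
HOURS = 4                      # sustained-run length (assumed value)

# Cap the sending rate so a long-running transfer does not saturate shared core links.
subprocess.run(
    ["iperf3", "-c", TARGET, "-b", CAP, "-t", str(HOURS * 3600)],
    check=True,
)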

perfSONAR Tests – 9/16/13 – Bandwidth Testing Between Blade Chassis

Below is a summary of three different perfSONAR tests that were run.

Tests were run from blades in each of the three chassis to one target blade per chassis: SDN-11, SDN-06, and SDN-01.
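A sketch of that test matrix is shown below. It is illustrative only: the source blade names and the use of ssh plus iperf3 are assumptions made for the example; the actual tests were run through perfSONAR on the blades themselves.

#!/usr/bin/env python3
"""Sketch of the chassis-to-chassis test matrix (names and tooling are assumptions)."""
import itertools
import subprocess

# One source blade per chassis (placeholder names) and the three target
# blades named in the result captions below.
SOURCES = ["sdn09", "sdn05", "sdn01"]
TARGETS = ["sdn11", "sdn06", "sdn01"]

for src, dst in itertools.product(SOURCES, TARGETS):
    if src == dst:
        continue  # skip the self-test
    # Assumes ssh access to each source blade and an iperf3 server
    # already listening on each target blade.
    print(f"{src} -> {dst}")
    subprocess.run(["ssh", src, "iperf3", "-c", dst, "-t", "30"], check=False)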


First set of tests, from the server chassis to blade SDN-11 – for a diagram of the network layout see:

http://sites.duke.edu/dukesdn/files/2013/09/SDN-Network-Layout-Pre-Open-Flow.pdf

perfSONAR bandwidth test to SDN-11

perfSONAR test to SDN-06

Note the significantly reduced bandwidth on the connection from SDN-11 to SDN-06 – an investigation is underway to determine the root cause.


perfSONAR test to SDN-01

SDN Network Layout

Below is a depiction of the network prior to the implementation of the OpenFlow switches. Connectivity between NB154 (IGSP gear in the North Building data center space) and NB-145 (DSCR) flows through the core network on the OpenFlow VRF.

SDN Network Layout – Pre Open Flow (PDF)

After the base testing is completed, the network will be updated as shown in the attached:

SDN Network Layout – Post Open Flow