We used the following equipment to perform these tests:
The test hosts were set up so that we could manage them via one interface while running the tests over the other interface (note that the two interfaces were on different Ethernet segments/networks). The test used only two of the eight ports on each switch. We sent data full duplex across the switch and compared the throughput with that of a direct crossover-cable connection. We used netperf version 2.4.1, found at the netperf home page, for our testing.
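The two-interface layout on each host can be sketched as follows. This is a minimal configuration fragment, not the exact commands used: the interface names (eth0/eth1) and the management subnet (10.0.0.0/24) are assumptions; only the test-traffic peer address 192.168.8.1 appears in the text.

```shell
# eth0: management interface, on its own segment (addressing is assumed)
ifconfig eth0 10.0.0.1 netmask 255.255.255.0

# eth1: test-traffic interface, on a separate segment; the peer host
# is 192.168.8.1, so this host takes an illustrative .2 address
ifconfig eth1 192.168.8.2 netmask 255.255.255.0
```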
Our test tried the following MTU values: 1500 1750 2000 2250 2500 3000 3250 3500 3750 4000 4100 4200 4300 4400 4500 4600 4700 4800 4900 5000 5100 5200 5300 5400 5500 5600 5700 5800 5900 6000 6100 6200 6300 6400 6500 6600 6700 6800 6900 7000 7250 7500 7750 8000 8250 8500 8750 9000. We performed three tests per MTU, using the following command line: netperf -c -C -f K -l 30 -H 192.168.8.1 ("-c" reports local CPU utilization, "-C" reports remote CPU utilization, "-f K" reports throughput in KB/s, "-l 30" runs each test for 30 seconds). netperf only tests in one direction, so we ran it simultaneously on both machines, each pointed at the other, stressing transmit and receive on both machines at once and thus testing full duplex.
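The per-MTU procedure above can be sketched as a small driver script. This is an illustration, not the script actually used: it assumes the test-traffic interface is named eth1, and it echoes the commands rather than executing them, since changing the MTU requires root and a live netperf peer.

```shell
#!/bin/sh
# Sketch of the per-MTU test loop. PEER is the test-network address
# from the text; IFACE=eth1 is an assumption about the interface name.
PEER=192.168.8.1
IFACE=eth1

# The 48 MTU values from the text: 1500-2500 in steps of 250 (minus 2750),
# 3000-3750 in steps of 250, 4000-7000 in steps of 100, 7250-9000 in steps of 250.
MTUS="1500 1750 2000 2250 2500 $(seq 3000 250 3750) $(seq 4000 100 7000) $(seq 7250 250 9000)"

for mtu in $MTUS; do
    # Would set the MTU on the test interface, then run three 30 s tests.
    echo "ifconfig $IFACE mtu $mtu"
    for run in 1 2 3; do
        echo "netperf -c -C -f K -l 30 -H $PEER"
    done
done
```

The same script would be started on both hosts at once (each with the other's address as PEER) to drive traffic in both directions simultaneously.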
Our results below show the average of the three tests at each MTU, in MB/s.
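Since netperf was invoked with -f K (KB/s) and the plotted results are in MB/s, each point is an average of three runs followed by a unit conversion. A minimal sketch with hypothetical sample values (the real per-run figures are not reproduced in the text):

```shell
#!/bin/sh
# Average three netperf throughput figures (KB/s, as reported with -f K)
# and convert to MB/s. The three values below are placeholders, not
# measurements from the report.
awk 'BEGIN {
    split("114000 115500 114900", r, " ")   # three 30 s runs at one MTU, KB/s
    avg_kb = (r[1] + r[2] + r[3]) / 3       # average in KB/s
    printf "%.1f MB/s\n", avg_kb / 1024     # convert KB/s -> MB/s
}'
```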
As a baseline, we first attached the two machines via a crossover cable; this produced the black curve below and shows the maximum expected throughput.
We then attached the two machines through the SMC 8508T model 721.0154 and obtained the red curve. Note that the red and black (crossover-attached) curves are nearly identical, which shows that the 721.0154 can provide wire-speed throughput.
We then ran the same test on the SMC 8508T model 721.8129 and obtained the green curve, which shows a throughput loss of about 50% at MTUs above 4500.
The conclusion is that the SMC 8508T model 721.0154 provides the advertised wire-speed, non-blocking performance with jumbo frames up to 9 kB. The SMC 8508T model 721.8129, however, does NOT provide such performance: it begins to block once the MTU exceeds about 4500 bytes, resulting in roughly a factor-of-two drop in throughput compared with the advertised speed. One possible explanation is insufficient buffer memory for the switch's store-and-forward architecture, but this is only speculation.
parmor AT gravity DOT phys DOT uwm DOT edu -- +1 414-229-2677 (US Central time)
ballen AT gravity DOT phys DOT uwm DOT edu -- +1 414-229-6439 (US Central time)