
NFS benchmarks:

Sequential reads and writes

xfs and reiserfs tests using bonnie++
This was done by running bonnie++ from one nemo node, reading from and writing to one nfs node partitioned with reiserfs and one partitioned with xfs.
bash prompt: ./bonnie++ -s 8192:64K -d /mnt/[uwe_xfs|uwe_reiser]/nfstesting/ -fu root
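The bracketed path just means the same run was repeated once per mount. A minimal sketch of how the two runs can be driven (mount names taken from the command above, the loop itself is assumed):

for mnt in uwe_xfs uwe_reiser; do
    # 8 GB of data in 64K chunks, skip the per-character tests (-f), run as root (-u)
    ./bonnie++ -s 8192:64K -d /mnt/$mnt/nfstesting/ -fu root
done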

xfs sequential writes:

1. 77008 K/sec
2. 75630 K/sec
3. 77540 K/sec
4. 74904 K/sec

xfs sequential reads:

1. 114778 K/sec
2. 108914 K/sec
3. 110942 K/sec
4. 111454 K/sec

reiserfs sequential writes:

1. 73035 K/sec
2. 83889 K/sec
3. 87644 K/sec
4. 85732 K/sec

reiserfs sequential reads:

1. 88706 K/sec
2. 91051 K/sec
3. 90952 K/sec
4. 89047 K/sec

The numbers 1-4 above correspond to the following NFS configurations:

1. nemo node (/etc/auto.master):
/mnt /etc/auto.mnt -t 5 -udp -rsize=32768 -wsize=32768
nfs node (/etc/exports):
/export1 192.168.0.0/255.255.248.0(async,rw,no_root_squash)

2. nemo node (/etc/auto.master):
/mnt /etc/auto.mnt -t 5 -udp -rsize=65536 -wsize=65536
nfs node (/etc/exports):
/export1 192.168.0.0/255.255.248.0(async,rw,no_root_squash,no_wdelay)

3. nemo node (/etc/auto.master):
/mnt /etc/auto.mnt -t 5 -udp -rsize=32768 -wsize=32768
nfs node (/etc/exports):
/export1 192.168.0.0/255.255.248.0(async,rw,no_root_squash,no_wdelay)

4. nemo node (/etc/auto.master):
/mnt /etc/auto.mnt -t 5 -udp -rsize=32768 -wsize=32768
nfs node (/etc/exports):
/export1 192.168.0.0/255.255.248.0(sync,rw,no_root_squash,no_wdelay)
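For reference, a rough sketch of how a change between the configurations above can be pushed out between runs (the autofs init script path is an assumption about the distribution in use):

# on the nfs node, after editing /etc/exports:
exportfs -ra                 # re-export everything with the new options

# on the nemo node, after editing the autofs mount options:
/etc/init.d/autofs restart   # pick up the new rsize/wsize and protocol settings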

From this I have found that xfs is faster for reads and reiserfs is faster for writes. Since the data being used is mostly read and rarely edited, read performance is the more important feature.
Also, the reiserfs box took much longer to boot and format, and it is a resource hog.


Simultaneous reads and writes

xfs and reiserfs tests using dd and md5sum
This was done by running dd and md5sum from many nemo nodes at once (20 or 100 at a time), reading from and writing to one nfs node partitioned with reiserfs and one partitioned with xfs. The final numbers reflect the speed of this process, including nfs locking and waiting for available connections.
The bash scripts that I used are tarred in the current directory. Each machine reads and writes both a common file shared between all running nemo nodes and an individual file for each nemo node running.
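The real scripts are in that tarball; the following is only a rough sketch of the shape of the per-node test, with file names and paths made up for illustration:

# run on every nemo node at the same time
MNT=/mnt/uwe_xfs/nfstesting          # or /mnt/uwe_reiser/nfstesting
SIZE=16                              # MB of data: 16 or 32
UNIQUE=$MNT/$(hostname).dat          # one file per nemo node
COMMON=$MNT/common.dat               # one file shared by all nodes

time dd if=/dev/zero of=$UNIQUE bs=1M count=$SIZE   # timed write of the unique file
time md5sum $UNIQUE                                 # timed read back over nfs
time dd if=/dev/zero of=$COMMON bs=1M count=$SIZE   # contended write to the shared file
time md5sum $COMMON                                 # timed read of the shared file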

20 nodes:

filesystem   file size   file type   read        write
xfs          16M         unique      0.13 sec    0.97 sec
xfs          32M         unique      0.29 sec    3.36 sec
reiser       16M         unique      0.24 sec    7.66 sec
reiser       32M         unique      0.29 sec    6.63 sec
xfs          16M         same        0.37 sec    9.66 sec
xfs          32M         same        0.764 sec   3.35 sec
reiser       16M         same        0.44 sec    2.40 sec
reiser       32M         same        0.29 sec    3.05 sec

100 nodes:

filesystem   file size   file type   read        write
xfs          16M         unique      2.33 sec    6.64 sec
xfs          32M         unique      1.11 sec    9.68 sec
reiser       16M         unique      1.12 sec    6.38 sec
reiser       32M         unique      1.36 sec    9.31 sec
xfs          16M         same        0.11 sec    1.08 sec
xfs          32M         same        0.33 sec    3.71 sec
reiser       16M         same        15.07 sec   157.70 sec
reiser       32M         same        10.16 sec   33149 sec

This test was initially meant to find which file system is faster, but it also shows how flaky the servers are. xfs performed ok: logging into the system was easy, and the load due to network and raid handling was very small, at most 30% of one processor at a time. The reiserfs machine was close to 100% cpu load the whole time the test was running; in the case of the 100 node same file tests the box stalled and needed to be restarted.
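For reference, one simple way to watch the server load figures quoted above while a run is in progress (not necessarily how it was measured here):

# on the nfs node while a test is running: cpu user/system/iowait every 5 seconds
vmstat 5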

These numbers do not reflect the highest these systems can perform; the test was run in parallel with the bonnie++ testing from above.


Besides these tests, I also found that setting /etc/sysconfig/nfs = [number] would not work until we upgraded to 2.6.17.
This number reflects how well the nemo nodes can lock files on the nfs node.
On the reiserfs box the best setting was 512, but the nemo nodes still returned "no such file or directory" errors.
On the xfs box the best setting is 256, and even with 300 nodes asking to see one file at the same time no errors occurred.
If we add more ram this number will go up and may make the production machines more stable.
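A rough sketch of the kind of check behind that last observation, with hostnames and the file path assumed, is to hit one file from every node at once and watch for errors:

# read the same nfs file from 300 nemo nodes at the same time
for n in $(seq 1 300); do
    ssh nemo$n "md5sum /mnt/uwe_xfs/nfstesting/common.dat" &
done
wait     # any "no such file or directory" errors show up in the output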