Implementing ZFS on Sunfire X4500

The blue text in the following instructions marks command-line input and paths, and the green text marks machine names. These instructions are not intended to replace the man pages; they extend that information into a procedure to be followed on the Nemo Cluster.
  1. Gather together all the required information: logins and passwords, and the logical ZFS arrangement (e.g. 3*(13+2) + 1)
  2. Login to nemo.phys.uwm.edu
  3. There are two different ways to gain root access to the x4500's:
    • Through the BMC:
          ssh root@bx4500-X
          -> start /SP/console
          Are you sure you want to start /SP/console (y/n)? y
          Serial console started. To stop, type ESC (
          "Hit enter to continue"
          Login to the x4500 using the cluster password and username scheme

    • Through the OS
          ssh root@x4500-X
          Login to the x4500 using the cluster password and username scheme

Working on the x4500's

  1. Find the two (2) physical disks that the OS is living on; this can be done with the metastat command.
    Sample output:
          Device Relocation Information:
          Device   Reloc  Device ID
          c5t4d0   Yes    id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBGWT6WH
          c5t0d0   Yes    id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBGX647H
          
    In this example c5t4d0 and c5t0d0 are being used as the OS disks; DO NOT include these disks in the zpool.
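    If you only want the device names, a small sketch of how to pull them out of the metastat output is shown below; the exact SVM layout can vary from host to host, so adapt as needed.
          # Show the SVM (Solaris Volume Manager) configuration in concise form.
          metastat -p

          # Pull the underlying cXtYdZ disk names out of the relocation table
          # (matches lines like "c5t4d0  Yes  id1,sd@SATA_..." above).
          metastat | grep 'id1,' | awk '{print $1}' | sort -u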
  2. Use the cfgadm command to get a list of all the disks, remembering to exclude the two (2) that the OS is living on.
    Here is an example listing:
          sata0/0::dsk/c0t0d0            disk         connected    configured   ok
          sata0/1::dsk/c0t1d0            disk         connected    configured   ok
          sata0/2::dsk/c0t2d0            disk         connected    configured   ok
          sata0/3::dsk/c0t3d0            disk         connected    configured   ok
          sata0/4::dsk/c0t4d0            disk         connected    configured   ok
          sata0/5::dsk/c0t5d0            disk         connected    configured   ok
          sata0/6::dsk/c0t6d0            disk         connected    configured   ok
          sata0/7::dsk/c0t7d0            disk         connected    configured   ok
          sata1/0::dsk/c1t0d0            disk         connected    configured   ok
          sata1/1::dsk/c1t1d0            disk         connected    configured   ok
          sata1/2::dsk/c1t2d0            disk         connected    configured   ok
          sata1/3::dsk/c1t3d0            disk         connected    configured   ok
          sata1/4::dsk/c1t4d0            disk         connected    configured   ok
          sata1/5::dsk/c1t5d0            disk         connected    configured   ok
          sata1/6::dsk/c1t6d0            disk         connected    configured   ok
          sata1/7::dsk/c1t7d0            disk         connected    configured   ok
          sata2/0::dsk/c4t0d0            disk         connected    configured   ok
          sata2/1::dsk/c4t1d0            disk         connected    configured   ok
          sata2/2::dsk/c4t2d0            disk         connected    configured   ok
          sata2/3::dsk/c4t3d0            disk         connected    configured   ok
          sata2/4::dsk/c4t4d0            disk         connected    configured   ok
          sata2/5::dsk/c4t5d0            disk         connected    configured   ok
          sata2/6::dsk/c4t6d0            disk         connected    configured   ok
          sata2/7::dsk/c4t7d0            disk         connected    configured   ok
          *sata3/0::dsk/c5t0d0            disk         connected    configured   ok -- OS Disk
          sata3/1::dsk/c5t1d0            disk         connected    configured   ok
          sata3/2::dsk/c5t2d0            disk         connected    configured   ok
          sata3/3::dsk/c5t3d0            disk         connected    configured   ok
          *sata3/4::dsk/c5t4d0            disk         connected    configured   ok -- OS Disk
          sata3/5::dsk/c5t5d0            disk         connected    configured   ok
          sata3/6::dsk/c5t6d0            disk         connected    configured   ok
          sata3/7::dsk/c5t7d0            disk         connected    configured   ok
          sata4/0::dsk/c6t0d0            disk         connected    configured   ok
          sata4/1::dsk/c6t1d0            disk         connected    configured   ok
          sata4/2::dsk/c6t2d0            disk         connected    configured   ok
          sata4/3::dsk/c6t3d0            disk         connected    configured   ok
          sata4/4::dsk/c6t4d0            disk         connected    configured   ok
          sata4/5::dsk/c6t5d0            disk         connected    configured   ok
          sata4/6::dsk/c6t6d0            disk         connected    configured   ok
          sata4/7::dsk/c6t7d0            disk         connected    configured   ok
          sata5/0::dsk/c7t0d0            disk         connected    configured   ok
          sata5/1::dsk/c7t1d0            disk         connected    configured   ok
          sata5/2::dsk/c7t2d0            disk         connected    configured   ok
          sata5/3::dsk/c7t3d0            disk         connected    configured   ok
          sata5/4::dsk/c7t4d0            disk         connected    configured   ok
          sata5/5::dsk/c7t5d0            disk         connected    configured   ok
          sata5/6::dsk/c7t6d0            disk         connected    configured   ok
          sata5/7::dsk/c7t7d0            disk         connected    configured   ok
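    The listing above can be produced with something along these lines (a sketch; the grep patterns are only an example):
          # List all attachment points and keep only the SATA disk entries.
          cfgadm -al | grep sata

          # The same list with the two OS disks from the previous step filtered out
          # (c5t0d0 and c5t4d0 in this example).
          cfgadm -al | grep sata | egrep -v 'c5t0d0|c5t4d0'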
  3. The next step is to create the zpool; this is done with zpool create -f [name of zpool] raidz2 [disks]
    Here is a sample command set:
       zpool create -f export raidz2 c0t0d0 c1t0d0 c4t0d0 c5t1d0 c6t0d0 c7t0d0 c0t1d0 c1t1d0 c4t1d0 c6t1d0 c7t1d0 c0t2d0 c1t2d0 c4t2d0 c5t2d0
       zpool add export raidz2 c0t3d0 c1t3d0 c4t3d0 c5t3d0 c6t3d0 c7t3d0 c0t4d0 c1t4d0 c4t4d0 c5t5d0 c6t2d0 c7t2d0 c0t5d0 c1t5d0 c4t5d0
       zpool add export raidz2 c0t6d0 c1t6d0 c4t6d0 c5t6d0 c6t6d0 c7t6d0 c0t7d0 c1t7d0 c4t7d0 c5t7d0 c6t7d0 c6t4d0 c7t4d0 c6t5d0 c7t5d0
       zpool add -f export spare c7t7d0
          
      The notation "3*(13+2) + 1" breaks down like this:
        the zpool contains three "smaller pools" (raidz2 vdevs) of 15 disks each; under raidz2, two disks
        in each group hold parity, which gives the "13 + 2". The "+1" at the end is the hot spare.
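      After the pool is built, it is worth checking that the layout really matches the intended
      3*(13+2) + 1 arrangement; a couple of read-only commands that should do that:
         # Show the vdev layout: three raidz2 groups of 15 disks each, plus the spare.
         zpool status export

         # One-line summary of size, capacity and health for the pool.
         zpool list export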

  4. Congratulations, you have just set up a zpool using ZFS. Now we'll continue configuring ZFS by creating "directories", which in ZFS are individual file systems (datasets).
    First, we may want to unmount the zpool: zfs umount [name of zpool]
    The next step is to populate the zpool with the following command: zfs create export/gskelton, where "gskelton" is the ZFS file system being created.
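    To confirm the new file system, and optionally to cap how much of the pool it may use, something like the following should work (the quota value is only an illustration):
          # List the pool and the file systems created under it.
          zfs list

          # Optionally set a quota on the new file system (example value only).
          zfs set quota=500g export/gskelton

          # Check the mount point (by default it inherits /export/gskelton) and the quota.
          zfs get mountpoint,quota export/gskelton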
  5. To export this file system over NFS, something like zfs set sharenfs=rw=@192.168.0.0/21,anon=0 export/gskelton will export it with the equivalent of no_root_squash.
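     To check that the share took effect, and to mount it from a cluster node (the node and mount point below are only placeholders):
          # On the x4500: confirm the NFS share options.
          zfs get sharenfs export/gskelton
          share

          # On a client inside 192.168.0.0/21: mount the exported file system.
          mount -F nfs x4500-X:/export/gskelton /mnt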

     **With a 5*(7+2) + 1 zpool configuration:
        zpool list
        NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
       test                   20.3T    394K   20.3T     0%  ONLINE     -
    
       df -k
       test/gskelton        16647321600      57 16647321301     1%    /test/gskelton
    
       df -h
       test/gskelton           16T    57K    16T     1%    /test/gskelton
     **With a 3*(13+2) + 1 zpool configuration:
        zpool list
        NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
       export                 20.4T    374K   20.4T     0%  ONLINE     -
    
       df -k 
       export/gskelton      18606163968      63 18606163654     1%    /export/gskelton
    
       df -h
       export/gskelton         17T    63K    17T     1%    /export/gskelton
     **With a 3*(9+2) + 1*(10+2) + 1 zpool configuration:
        zpool list
        NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
       export                 20.4T    226K   20.4T     0%  ONLINE     -
    
       df -k
       export/gskelton      17641193472      60 17641193157     1%    /export/gskelton
         
       df -h
       export/gskelton         16T    60K    16T     1%    /export/gskelton
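    As a rough sanity check on these numbers: assuming the drives are the 500 GB Hitachi models implied by the device IDs in the metastat output above (the per-disk size here is an assumption), the usable space of the 3*(13+2) + 1 layout comes out close to what df -h reports:
          # 3 vdevs * 13 data disks = 39 data disks; parity disks and the spare add no usable space.
          # 39 * 500*10^9 bytes, expressed in TiB:
          echo '39 * 500 * 10^9 / 1024^4' | bc
          # => 17, i.e. roughly the 17T that df -h shows for export/gskelton.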