

# c2t0d0 is an existing disk that is not mirrored; by attaching c3t0d0 both disks will become a mirror pair
# Note: zpool only supports the removal of hot spares and cache disks, for mirrors see attach and detach below
# When replacing a disk with a larger one you must enable the "autoexpand" feature to be able to use the extended space; you must do this before replacing the first disk
# Note: make sure that you get this right, as zpool only supports the removal of hot spares and cache disks, for mirrors see attach and detach below
# In the event of a disaster you can re-import a destroyed pool

# You can also create raid pools (raidz/raidz1 - single parity, raidz2 - double parity, raidz3 - triple parity)
zpool create data01 raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0

# Adding cache devices to a pool
zpool create data01 mirror c1t0d0 c2t0d0 cache c3t0d0 c3t1d0

# Setting up a log device and mirroring it
zpool create data01 mirror c1t0d0 c2t0d0 log mirror c3t0d0 c4t0d0

# Mirror and hot spare disk examples; hot spares are not used by default, turn on the "autoreplace" feature for each pool
zpool create data01 mirror c1t0d0 c2t0d0 spare c3t0d0
zpool create data01 mirror c1t0d0 c2t0d0 mirror c1t0d1 c2t0d1

# Using a different mountpoint than the default /
zpool create data01 /zfs1/disk01 /zfs1/disk02
# You can presume that I created the two files /zfs1/disk01 and /zfs1/disk02 using mkfile

# Performing a dry run without actually performing the creation (notice the -n)
# Note: once a pool has been removed the history is gone
# Note: use this command like you would iostat
# Show only errored pools, with more verbosity
# Note: there are a number of properties that you can select, the default is: name, size, used, available, capacity, health, altroot
# zdb can view the inner workings of ZFS (zdb has a number of options)
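The notes above mention several operations (attach/detach, remove, autoexpand, re-importing a destroyed pool, dry runs, alternate mountpoints, history, iostat, status, list, zdb) whose command lines were lost when this page was copied. The sketch below shows typical invocations, assuming the same pool name data01 and cXtYdZ device naming as above; names such as c5t0d0 and the /export/zfs mountpoint are placeholders, so substitute your own.

zpool attach data01 c2t0d0 c3t0d0              # attach c3t0d0 to the un-mirrored disk c2t0d0, forming a mirror pair
zpool detach data01 c3t0d0                     # detach a disk from its mirror
zpool remove data01 c3t1d0                     # remove only works for hot spare and cache devices; use detach for mirrors
zpool set autoexpand=on data01                 # enable autoexpand before replacing disks with larger ones
zpool replace data01 c1t0d0 c5t0d0             # replace an existing disk with a new (larger) one
zpool destroy data01                           # destroy a pool
zpool import -D                                # list destroyed pools that can still be recovered
zpool import -D data01                         # re-import the destroyed pool data01
zpool create -n data01 mirror c1t0d0 c2t0d0    # dry run only, nothing is actually created
zpool create -m /export/zfs data01 c1t0d0      # use a mountpoint other than the default
zpool history data01                           # show the command history of the pool (gone once the pool is removed)
zpool iostat -v 5                              # pool I/O statistics every 5 seconds, used like iostat
zpool status -xv                               # show only errored pools, with more verbosity
zpool list -o name,size,capacity,health        # select which properties to display
zdb -C data01                                  # zdb can view the inner workings of ZFS, here the cached pool configuration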

# raidz is more like raid3 than raid5 but does use parity to protect from disk failures
# the more parity bits, the longer it takes to resilver an array; standard mirroring does not have the problem of creating the parity
Raidz/raidz1 - minimum of 3 devices (one parity disk), you can suffer a one disk loss
Raidz2 - minimum of 4 devices (two parity disks), you can suffer a two disk loss
Raidz3 - minimum of 5 devices (three parity disks), you can suffer a three disk loss
Hot spare - hard drives marked as "hot spare" for a ZFS raid; by default hot spares are not used on a disk failure, you must turn on the "autoreplace" feature
Cache device (L2ARC) - Linux caching mechanisms use least recently used (LRU) algorithms, basically first in first out (FIFO): blocks are moved in and out of the cache. The ZFS cache is different in that it caches both least recently used (LRU) and least frequently used (LFU) block requests; the cache device uses the level 2 adaptive replacement cache (L2ARC).
ZFS intent log (ZIL) - a logging mechanism where all the data to be written is stored and later flushed as a transactional write; this is similar to a journaling filesystem (ext3 or ext4)
Separate intent log (SLOG) - a separate logging device that caches the synchronous parts of the ZIL before flushing them to the slower disks; it does not cache asynchronous data (asynchronous data is flushed directly to disk)
Basically the SLOG is the device and the ZIL is the data on the device. If a SLOG exists, the ZIL is moved to it rather than residing on platter disk, and everything in the SLOG will always also be in system memory.
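As a quick sketch of how these device types are added to an existing pool (assuming a pool named data01 and spare disks c3t0d0 through c6t0d0; substitute your own device names):

zpool add data01 log mirror c3t0d0 c4t0d0    # add a mirrored SLOG device, the ZIL moves onto it
zpool add data01 cache c5t0d0                # add an L2ARC cache device
zpool add data01 spare c6t0d0                # add a hot spare
zpool set autoreplace=on data01              # let the hot spare be used automatically when a disk fails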
# zfs uses non-standard distributed parity-based software raid levels; one common problem, called the "write-hole", is eliminated because in
# raidz the data and the parity stripe are written simultaneously: if a power failure occurs in the middle of a write you either have the
# data plus the parity or you don't. ZFS also supports self-healing; if it cannot read a bad block it will reconstruct it using the
# parity, and repair or indicate that this block should not be used.
# You should keep the raidz array at a low power of two (data disks) plus parity.
# A vdev can also be a plain file: supply the absolute path of a pre-allocated file/image (as in the mkfile example above).
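Note that self-healing only repairs a bad block when it happens to be read; to have ZFS walk and verify every block in a pool you run a scrub. A minimal sketch, again assuming the pool is called data01:

zpool scrub data01         # verify every block against its checksum and repair what it can
zpool status -v data01     # watch scrub progress and see any errors that were found
zpool scrub -s data01      # stop a scrub that is in progress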
