20.2. Quick Start Guide

There is a startup mechanism that allows FreeBSD to mount ZFS pools during system initialization. To enable it, add this line to /etc/rc.conf:

zfs_enable="YES"

Then start the service:

# service zfs start
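
On recent versions of FreeBSD, sysrc(8) can add the rc.conf line without editing the file by hand:

# sysrc zfs_enable="YES"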

The examples in this section assume three SCSI disks with the device names da0, da1, and da2. Users of SATA hardware should instead use ada device names.

20.2.1. Single Disk Pool

To create a simple, non-redundant pool using a single disk device, use zpool create:

# zpool create example /dev/da0

To view the new pool, review the output of df:

# df
Filesystem  1K-blocks    Used    Avail Capacity  Mounted on
/dev/ad0s1a   2026030  235230  1628718    13%    /
devfs               1       1        0   100%    /dev
/dev/ad0s1d  54098308 1032846 48737598     2%    /usr
example      17547136       0 17547136     0%    /example

This output shows that the example pool has been created and mounted. It is now accessible as a file system. Files may be created on it and users can browse it, as seen in the following example:

# cd /example
# ls
# touch testfile
# ls -al
total 4
drwxr-xr-x   2 root  wheel    3 Aug 29 23:15 .
drwxr-xr-x  21 root  wheel  512 Aug 29 23:12 ..
-rw-r--r--   1 root  wheel    0 Aug 29 23:15 testfile

However, this pool is not taking advantage of any ZFS features. To create a dataset on this pool with compression enabled:

# zfs create example/compressed
# zfs set compression=gzip example/compressed

The example/compressed dataset is now a ZFS compressed file system. Try copying some large files to /example/compressed.
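
One way to see how well the copied data compresses is to check the compression and compressratio properties with zfs get (see zfs(8) for details):

# zfs get compression,compressratio example/compressed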

Compression can be disabled with:

# zfs set compression=off example/compressed
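
If the property should instead revert to the value inherited from the parent dataset, zfs inherit may be used:

# zfs inherit compression example/compressed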

To unmount a file system, use zfs umount and then verify by using df:

# zfs umount example/compressed
# df
Filesystem  1K-blocks    Used    Avail Capacity  Mounted on
/dev/ad0s1a   2026030  235232  1628716    13%    /
devfs               1       1        0   100%    /dev
/dev/ad0s1d  54098308 1032864 48737580     2%    /usr
example      17547008       0 17547008     0%    /example

To re-mount the file system to make it accessible again, use zfs mount and verify with df:

# zfs mount example/compressed
# df
Filesystem         1K-blocks    Used    Avail Capacity  Mounted on
/dev/ad0s1a          2026030  235234  1628714    13%    /
devfs                      1       1        0   100%    /dev
/dev/ad0s1d         54098308 1032864 48737580     2%    /usr
example             17547008       0 17547008     0%    /example
example/compressed  17547008       0 17547008     0%    /example/compressed

The pool and file system may also be observed by viewing the output from mount:

# mount
/dev/ad0s1a on / (ufs, local)
devfs on /dev (devfs, local)
/dev/ad0s1d on /usr (ufs, local, soft-updates)
example on /example (zfs, local)
example/data on /example/data (zfs, local)
example/compressed on /example/compressed (zfs, local)

After creation, ZFS datasets may be used like any other file system, and many features can be set on a per-dataset basis. In the following example, a new file system called data is created. Because important files will be stored here, the file system is set to keep two copies of each data block:

# zfs create example/data
# zfs set copies=2 example/data
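
One way to confirm the setting is with zfs get:

# zfs get copies example/data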

It is now possible to see the data and space utilization by issuing df:

# df
Filesystem         1K-blocks    Used    Avail Capacity  Mounted on
/dev/ad0s1a          2026030  235234  1628714    13%    /
devfs                      1       1        0   100%    /dev
/dev/ad0s1d         54098308 1032864 48737580     2%    /usr
example             17547008       0 17547008     0%    /example
example/compressed  17547008       0 17547008     0%    /example/compressed
example/data        17547008       0 17547008     0%    /example/data

Notice that each file system on the pool has the same amount of available space. This is the reason for using df in these examples, to show that the file systems use only the amount of space they need and all draw from the same pool. The ZFS file system does away with concepts such as volumes and partitions, and allows for several file systems to occupy the same pool.
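
The same relationship can be seen with zfs list, which reports the space used and available for every dataset in the pool:

# zfs list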

When the file systems and the pool are no longer needed, destroy them:

# zfs destroy example/compressed
# zfs destroy example/data
# zpool destroy example

20.2.2. RAID-Z

Disks fail. One method of avoiding data loss from disk failure is to implement RAID. ZFS supports this feature in its pool design. RAID-Z pools require three or more disks but yield more usable space than mirrored pools.

To create a RAID-Z pool, use this command, specifying the disks to add to the pool:

# zpool create storage raidz da0 da1 da2

Note:

Sun™ recommends that the number of devices used in a RAID-Z configuration is between three and nine. For environments requiring a single pool consisting of 10 disks or more, consider breaking it up into smaller RAID-Z groups. If only two disks are available and redundancy is a requirement, consider using a ZFS mirror. Refer to zpool(8) for more details.
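
For example, if only two disks are available, a redundant pool can instead be created as a mirror:

# zpool create storage mirror da0 da1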

This command creates the storage zpool, which may be verified using mount(8) and df(1). The next command creates a new file system in the pool called home:

# zfs create storage/home

Now compression and keeping extra copies of directories and files can be enabled with these commands:

# zfs set copies=2 storage/home
# zfs set compression=gzip storage/home

To make this the new home directory for users, copy the user data to this directory, and create the appropriate symbolic links:

# cp -rp /home/* /storage/home
# rm -rf /home /usr/home
# ln -s /storage/home /home
# ln -s /storage/home /usr/home

Users now have their data stored on the freshly created /storage/home. Test by adding a new user and logging in as that user.
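
For example, a test account can be created with pw(8); the user name shown here is only an illustration:

# pw useradd -n testuser -m -s /bin/sh
# passwd testuser

Because /home is now a symbolic link to /storage/home, the new home directory is created on the ZFS pool.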

Try creating a snapshot which can be rolled back later:

# zfs snapshot storage/home@08-30-08

Note that a snapshot captures a whole file system or volume, not an individual directory or file. The @ character separates the file system or volume name from the snapshot name. When a user's home directory is accidentally deleted, restore it with:

# zfs rollback storage/home@08-30-08

To list all available snapshots, run ls in the file system's .zfs/snapshot directory. For example, to see the previously taken snapshot:

# ls /storage/home/.zfs/snapshot
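
Snapshots can also be listed with zfs list by restricting the output to the snapshot type:

# zfs list -t snapshot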

It is possible to write a script to perform regular snapshots on user data; a minimal example is sketched below. However, over time, snapshots can consume a great deal of disk space. The previous snapshot can be removed using the following command:

# zfs destroy storage/home@08-30-08
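
As mentioned above, a script can automate snapshot creation. The following is only a minimal sketch; the dataset name and naming scheme are examples and should be adjusted as needed:

#!/bin/sh
# Minimal example: create a snapshot of storage/home named after today's date
zfs snapshot storage/home@`date +%Y-%m-%d`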

After testing, /storage/home can be made the real /home using this command:

# zfs set mountpoint=/home storage/home

Run df and mount to confirm that the system now treats the file system as the real /home:

# mount
/dev/ad0s1a on / (ufs, local)
devfs on /dev (devfs, local)
/dev/ad0s1d on /usr (ufs, local, soft-updates)
storage on /storage (zfs, local)
storage/home on /home (zfs, local)
# df
Filesystem   1K-blocks    Used    Avail Capacity  Mounted on
/dev/ad0s1a    2026030  235240  1628708    13%    /
devfs                1       1        0   100%    /dev
/dev/ad0s1d   54098308 1032826 48737618     2%    /usr
storage       26320512       0 26320512     0%    /storage
storage/home  26320512       0 26320512     0%    /home

This completes the RAID-Z configuration. Daily status updates about the file systems created can be generated as part of the nightly periodic(8) runs:

# echo 'daily_status_zfs_enable="YES"' >> /etc/periodic.conf

20.2.3. Recovering RAID-Z

Every software RAID has a method of monitoring its state. The status of RAID-Z devices may be viewed with this command:

# zpool status -x

If all pools are Online and everything is normal, the message indicates that:

all pools are healthy

If there is an issue, perhaps because a disk has been taken offline, the pool status will look similar to this:

  pool: storage
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
	Sufficient replicas exist for the pool to continue functioning in a
	degraded state.
action: Online the device using 'zpool online' or replace the device with
	'zpool replace'.
 scrub: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	storage     DEGRADED     0     0     0
	  raidz1    DEGRADED     0     0     0
	    da0     ONLINE       0     0     0
	    da1     OFFLINE      0     0     0
	    da2     ONLINE       0     0     0

errors: No known data errors

This indicates that the device was previously taken offline by the administrator with this command:

# zpool offline storage da1
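
If the disk was only taken offline temporarily, it can be brought back into service with:

# zpool online storage da1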

Now the system can be powered down to replace da1. When the system is back online, the failed disk can be replaced in the pool:

# zpool replace storage da1
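
If the replacement disk appears under a different device name, pass both the old and the new name to zpool replace (da3 is only an example here):

# zpool replace storage da1 da3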

From here, the status may be checked again, this time without -x so that the pool is displayed even when it is healthy:

# zpool status storage
 pool: storage
 state: ONLINE
 scrub: resilver completed with 0 errors on Sat Aug 30 19:44:11 2008
config:

	NAME        STATE     READ WRITE CKSUM
	storage     ONLINE       0     0     0
	  raidz1    ONLINE       0     0     0
	    da0     ONLINE       0     0     0
	    da1     ONLINE       0     0     0
	    da2     ONLINE       0     0     0

errors: No known data errors

In this example, everything is normal.

20.2.4. Data Verification

ZFS uses checksums to verify the integrity of stored data. These are enabled automatically upon creation of file systems and may be disabled using the following command:

# zfs set checksum=off storage/home

Warning:

Doing so is not recommended! Checksums take very little storage space and provide data integrity. Many ZFS features will not work properly with checksums disabled. There is also no noticeable performance gain from disabling these checksums.
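
Should checksums ever be turned off, they can be re-enabled with:

# zfs set checksum=on storage/home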

Checksum verification is known as scrubbing. Verify the data integrity of the storage pool with this command:

# zpool scrub storage

The duration of a scrub depends on the amount of data stored. Larger amounts of data will take considerably longer to verify. Scrubs are very I/O intensive, so much so that only one scrub may be run at a time. After the scrub has completed, the status is updated and may be viewed with zpool status:

# zpool status storage
 pool: storage
 state: ONLINE
 scrub: scrub completed with 0 errors on Sat Jan 26 19:57:37 2013
config:

	NAME        STATE     READ WRITE CKSUM
	storage     ONLINE       0     0     0
	  raidz1    ONLINE       0     0     0
	    da0     ONLINE       0     0     0
	    da1     ONLINE       0     0     0
	    da2     ONLINE       0     0     0

errors: No known data errors

The completion date of the last scrub operation is displayed to help track when another scrub is required. Routine pool scrubs help protect data from silent corruption and ensure the integrity of the pool.
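
If a running scrub has to be interrupted, for example because it interferes with other I/O, it can be stopped with the -s flag:

# zpool scrub -s storage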

Refer to zfs(8) and zpool(8) for other ZFS options.
