AVAILABILITY
Since Solaris 10 Update 2 (6/06).

OVERVIEW
The best software error checking around: checksums every file, every directory, and the pointers to them. 128-bit, so capacity limits are effectively a non-issue. With a mirror (or triple mirror) it can error-correct itself.

Some useful sample commands: http://blogs.oracle.com/migi/entry/playing_with_zfs_filesystem

TIP!!! Don't use the top-level zpool data set directly. Use it only as a container, and create data sets to use with:
  zfs create -o mountpoint=/mnt/pt -o quota=XG POOL/SUBPOOL
like:
  zfs create -o mountpoint=/mnt/z2 -o quota=10M blainesPool/sub1Pool

Big problems if hardware RAID sits underneath. Non-RAID controller = HBA (Host Bus Adapter).

"scrub" instead of "fsck". Scrub can check live FSs and checks ALL data. Recommended once/week for cheap disks, once/month for Enterprise disks.
  zpool scrub [-s] pool...

Copy-on-write: only changed data is ever written to disk, allowing rollback and snapshots. Data is never changed in place; the new copy is written, then the old is freed.

zpool = virtual storage pool made up of vdevs. Just like a VG.
vdev = virtual device. Constructed of block devices: files, hard-drive partitions, or entire drives (entire drives recommended). These components may and should be configured RAID-like (RAID-Z*). Loss of a single vdev kills the whole zpool, so vdev redundancy is critical. You can replace a drive in a vdev, but the vdev disk count is permanent. vdev capacity is the minimum member capacity. Hot spares supported. Supports a few schemes of hardware caching.

SNAPSHOTS
A snapshot can not be accessed independently. Named like dataSet@somethingUnique:
  zfs snapshot nameofthepool/somename@SOMENAMELIKEDATE
Clone = writable snapshot. All clones (non-first copies) store only differences from the source:
  zfs clone nameofthesnapshotfromZFSlist nameofthepool/something

Deduplication is like zip compression, redirecting for redundancies, but at a much larger scale, like whole files. Troublesome.
Encryption supported (lately). Supports transparent FS compression using LZJB and gzip.
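A minimal end-to-end sketch of the dataset/snapshot/clone workflow above. The pool name demoPool, the dataset names, and the backing-file path are hypothetical; a file-backed vdev is for experimentation only, as noted later in these notes:

```shell
# Create a pool backed by a sparse file (experimentation only; path is an assumption).
truncate -s 256M /tmp/zfs-demo.img
zpool create demoPool /tmp/zfs-demo.img

# Per the TIP: leave the top-level dataset as a container and create
# child datasets with an explicit mount point and quota.
zfs create -o mountpoint=/mnt/z2 -o quota=10M demoPool/sub1

# Snapshots are named dataset@somethingUnique (a date works well).
zfs snapshot demoPool/sub1@2011-01-15

# A clone is a writable snapshot: it stores only differences from its source.
zfs clone demoPool/sub1@2011-01-15 demoPool/sub1clone

# Inspect, then clean up.
zfs list
zpool destroy demoPool
```

These commands need root and a running ZFS implementation (kernel module or zfs-fuse), so treat this as a sketch rather than a tested transcript.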
Supports quotas. Gains full benefit from disk hardware write cache if whole disks are used.
UPCOMING: "Block Pointer rewrite", needed for dynamic resizing/defrag, etc.
Single-disk ZFS can get redundancy with 'copies=2' or 'copies=3', but of course this does not protect from disk failure.
Long resilver times for ZFS RAID.

Solaris 11 has made parts of ZFS proprietary, which creates compatibility problems. A work-around is to use Solaris 11 Express from OpenSolaris. The generally available user version on Linux is through "FUSE", which runs filesystems in user space. Other ports are having licensing and other difficulties. Re. FUSE: "has limitations and will never be as good – in terms of features, scalability, performance – as a kernel based one." Linux HOWTO docs are very limited; the first half of this one (07/2010) covers building it: http://www.andrewmkane.com/blog/2010/07/10/how-to-run-zfs-on-linux-via-fuse/

RAIDZ-X means X chunks of parity data are stored per stripe, spread across the disks. RAIDZ = RAIDZ1 (the default). The number is the number of disks that can fail; i.e. RAIDZ1 is protected from the failure of 1 disk.

Data Set == readable component: { FS, volume (raw or block device), snapshot }. Could be the top-level zpool, or a component in a pool "container".

Pool names may not contain "/", but default mount points are "/" + name, so you must use the -m switch to create a pool with a different mount point. Hm. Slashes may be allowed on Solaris, but not on OpenSUSE/FUSE. By default a data set FS named "x/y" will be mounted at "/x/y".

Nice cheat sheet, but has at least 1 mistake: http://lildude.co.uk/zfs-cheatsheet

USAGE
On Linux, /etc/init.d/zfs-fuse must be running.
  zfs list
  zpool list
  zpool status [-v]
  zpool scrub [-s] pool...
  zpool replace pool disk_or_file [disk_or_file]
  zfs mount -a
  zfs set mountpoint=legacy   # Mounting managed by UNIX utilities
  zfs set mountpoint=none     # Not mounted
  zpool create [-m /abs/mount/point] NEW_POOL_NAME vdev...
Where each vdev is one of:
  disk  (like a /dev/dsk/X node)
  /absolute/file/path  (only for experimentation)
  mirror disk_or_file disk_or_file...
  raidz* disk_or_file disk_or_file...
  (spare, log, cache?)

  zpool destroy pool

I'm not certain, but I think that concatenation without striping is not supported by ZFS.
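Putting the vdev forms above together, a hedged sketch of creating, checking, and repairing a redundant pool. The pool name tank and device names /dev/sdb etc. are placeholders (on Solaris they would be /dev/dsk/cXtYdZ nodes):

```shell
# A mirrored pool with an explicit mount point. -m is needed because
# pool names may not contain "/", so the default mount would be /tank.
zpool create -m /mnt/tank tank mirror /dev/sdb /dev/sdc

# Alternative: a raidz1 pool, which survives the failure of any 1 member.
#   zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd

# Routine check: scrub verifies ALL data on the live pool,
# then status reports any checksum errors it found.
zpool scrub tank
zpool status -v tank

# Replace a failed member: old device first, then its replacement.
zpool replace tank /dev/sdb /dev/sde

zpool destroy tank
```

As with the earlier sketch, these commands require root and real (or file-backed) devices, so verify against your platform's zpool man page before running them.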