SolarisZfs: Difference between revisions

From MDWiki
Revision as of 23:51, 2 September 2009

ZFS is an intelligent filesystem with in-built virtual disk management and data protection.

ZFS can manage a variety of storage resources (whole disks, partitions, files). Unlike traditional approaches where each filesystem is a static allocation of physical storage, ZFS filesystem storage is allocated as it is used. ZFS manages usage quotas as filesystem size limits. Thus ZFS favours the creation of many individual filesystems.

While an empty filesystem does not use any storage, the collective filesystems may exhaust the storage pool. It is important to plan for actual usage by allocating filesystems to appropriate storage pools and by setting filesystem size limits and possibly reservations.

For ZFS commands to make snapshots, see File:Zfs snapshots.pdf.
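As a minimal sketch of typical snapshot operations (the filesystem name melon1/proj1 follows the examples later in this page; the snapshot label is arbitrary):

```shell
# Create a named snapshot of a filesystem (names are illustrative)
zfs snapshot melon1/proj1@2009-09-02

# List all snapshots
zfs list -t snapshot

# Roll the filesystem back to the snapshot (discards later changes)
zfs rollback melon1/proj1@2009-09-02

# Remove the snapshot when it is no longer needed
zfs destroy melon1/proj1@2009-09-02
```

Snapshots are read-only and initially consume no extra space; they grow only as the live filesystem diverges from them.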

Configuration

The configuration employed here limits the number of disks in each storage pool so that, in the event of x4500 hardware failure, it remains feasible to install each storage pool into alternate hardware.

The disks are grouped into raid sets for protection from disk failure. For maximum performance, each raid set is made of one disk from each of the 6 disk controllers. Raidz2 provides protection from two simultaneous disk failures. Each raid set has access to a hot spare to automatically rebuild after a disk failure.

The storage pools are configured using two raid sets. This is a balance between the total number of disks in the pool (for alternate installation) and fragmentation of storage allocation. Hot spare drives are attached to the storage pool and accessible to the raid sets within that pool.
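The layout above can be sketched as a single zpool create; the c*t*d* device names are hypothetical placeholders, not the actual melon cabling, but the shape (two raidz2 sets of six disks, one per controller, plus shared spares) matches the description:

```shell
# Two raidz2 sets, each built from one disk on each of the six
# controllers (c0..c5), plus two pool-wide hot spares.
# Device names are illustrative only.
zpool create melon1 \
    raidz2 c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 \
    raidz2 c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0 \
    spare  c0t2d0 c1t2d0
```

With raidz2, each set survives two simultaneous disk failures, and a spare is pulled in automatically to rebuild after a failure.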

Individual filesystems are created within a particular storage pool.

Current Setup

The three pools melon1, melon2 and melon3 each provide 8.1 TB of usable space, and each has two hot spare disks. In addition, three of the 48 disks are unallocated.

Commands

To create, configure and examine storage pools, use the command

  zpool
  typical
      zpool status
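A few other common zpool invocations (the pool name melon1 is from the setup above):

```shell
# Summarise capacity and health of all pools
zpool list

# Detailed health of one pool, including raidz2 members and spares
zpool status melon1

# Per-vdev I/O statistics, refreshed every 5 seconds
zpool iostat -v melon1 5
```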

To create, configure and examine zfs filesystems, use the command

  zfs
  typical
      zfs create melon1/proj1
      zfs set sharenfs=on     melon1/proj1
      zfs set sharesmb=on     melon1/proj1
      #  Set max (recommended) and min (not recommended) sizes
      zfs set quota=100G      melon1/proj1
      #zfs set reservation=5G melon1/proj1
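To verify the settings afterwards, the properties can be read back (melon1/proj1 as above):

```shell
# Show the quota and reservation just set
zfs get quota,reservation melon1/proj1

# Overview of space usage against the quota
zfs list -o name,used,avail,quota melon1/proj1
```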

Additional configuration information

See melon:/root/zfs_**.sh