Zpool Recovery

Ensure all the LUNs are online, then run the following commands:

# zpool list
NAME             SIZE  ALLOC   FREE  CAP  HEALTH   ALTROOT
n_zpool_site_a  99.4G  1.31G  98.1G   1%  OFFLINE  -
n_zpool_site_b   124G  2.28G   122G   1%  OFFLINE  -

Check the status of an individual pool:

# zpool status n_zpool_site_b
  pool: n_zpool_site_b

Creating a two-disk stripe pool:

# zpool create tank scsi-35000cca2735cbc38 scsi-35000cca266cc4b3c

Creating a single-disk stripe pool:

# zpool create tank scsi-35000cca26c108480

Advanced Configuration

If you are building a pool with a larger number of disks, you are encouraged to configure them into more than one group and then construct a striped pool from these vdevs (a sketch follows below). Bear in mind that in a plain stripe, if any disk fails, all data is lost!

# zpool create tank /dev/md0
# zpool add tank /dev/md1
# zpool list
NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH
tank  1.98T   233K  1.98T   0%  1.00x  ONLINE
# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME  STATE   READ WRITE CKSUM
        tank  ONLINE     0     0     0
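A minimal sketch of the grouped layout described above, assuming six hypothetical disks sda through sdf (substitute your own devices; by-id names are preferable, as discussed later). The pool stripes across two raidz1 vdevs, and each vdev tolerates a single disk failure:

# zpool create tank raidz1 sda sdb sdc raidz1 sdd sde sdf

zpool status should then list two top-level vdevs, raidz1-0 and raidz1-1, under tank, with writes striped across both groups.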
Zpool add disk to pool
You can dynamically add disk space to a pool by adding a new top-level virtual device. This disk space is immediately available to all datasets in the pool. The virtual device that you add should have the same level of redundancy as the existing virtual devices; however, you can add a device with a different level of redundancy by using the -f option (see the sketch below).

Howto: Create a ZFS Striped Vdev Zpool

A ZFS striped-vdev pool is very similar to RAID0: you keep all of the available storage that your drives offer, but you have no resiliency to hard drive failure. If one drive in a striped-vdev zpool fails, you lose all of your data.
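A minimal sketch of adding a matching top-level vdev, assuming an existing mirrored pool named tank and two spare disks with hypothetical names ada2 and ada3:

# zpool add tank mirror ada2 ada3

The new mirror becomes a second top-level vdev, and ZFS starts striping new writes across both mirrors; the added capacity is visible to every dataset at once. If the vdev you add does not match the pool's existing redundancy level, zpool refuses the operation unless you pass -f.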
You have to turn off the bootfs property of the pool before adding the disk and turn it back on after the operation. Assuming the existing disk is ada0 and the new disk is ada1:

# zpool set bootfs="" tank
# zpool add tank /dev/ada1
# zpool set bootfs="whatitwasbefore" tank

Add device to zpool (creating the mirror)

Since the zpool uses gptids (probably not a bad idea in a setup where device renumbering is a constant threat), use gpart list and zpool status to figure out the correct gptids for the zpool. Since we know the disk layout, in our example the third and the sixth lines of that output are the device ids to use.
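A hedged sketch of the attach step; the gptid values here are placeholders, not the real ids from the output referenced above:

# zpool attach tank gptid/EXISTING-DISK-GPTID gptid/NEW-DISK-GPTID

Note the difference from zpool add: attach pairs the new device with the existing one to form a mirror, while add would stripe it in as a new, non-redundant top-level vdev.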
If the n partitions are not the same size, with the smallest being x, only the first x bytes of each partition are used. You cannot remove a raidz vdev from an active pool without a backup and restore, and you cannot add a disk to an existing raidz vdev without a backup and restore either (this may change in the future). The only in-place expansion is to replace the partitions with larger ones; once the smallest partition has been enlarged, the available space increases. Unfortunately, if you have accidentally added a single drive to your raidz pool as a top-level vdev, there is no way to simply remove the non-redundant disk: your pool is now dependent on it. If you want your pool to consist of raidz vdevs only, you will need to back up your data, destroy the pool, create a new pool, and restore your data (see the sketch below).
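One common way to carry out that backup/destroy/restore cycle is ZFS replication. A minimal sketch, assuming the pool is named tank, a scratch pool named backup exists with enough free space, and the disk names are hypothetical:

# zfs snapshot -r tank@migrate
# zfs send -R tank@migrate | zfs receive -F backup/tank
# zpool destroy tank
# zpool create tank raidz1 disk1 disk2 disk3
# zfs send -R backup/tank@migrate | zfs receive -F tank

The -R flag replicates the whole dataset tree along with its properties, and -F on the final receive lets the stream overwrite the freshly created pool's root dataset.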
Identify the drives you want to use for the ZFS pool. The ZFS on Linux project advises against using plain /dev/sdx devices (/dev/sda, etc.) in favor of /dev/disk/by-id/ or /dev/disk/by-path device names. Device names for storage devices are not fixed, so a /dev/sdx name may not always point to the same physical disk.
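A minimal sketch of the by-id convention; the ata-... names below are hypothetical placeholders for whatever the listing shows on your system:

# ls -l /dev/disk/by-id/
# zpool create tank mirror /dev/disk/by-id/ata-MODEL_SERIAL1 /dev/disk/by-id/ata-MODEL_SERIAL2

Because by-id names are derived from the drive model and serial number, they stay stable across reboots and controller re-enumeration, unlike /dev/sdx names.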