27. ZFS Primer

ZFS is an advanced, modern filesystem that was specifically designed to provide features not available in traditional UNIX filesystems. It was originally developed at Sun with the intent to open source the filesystem so that it could be ported to other operating systems. After the Oracle acquisition of Sun, some of the original ZFS developers founded OpenZFS to provide continued, collaborative development of the open source version.

The following is an overview of the features of ZFS:

ZFS is a transactional, Copy-On-Write (COW) filesystem. For each write request, a copy is made of the associated disk blocks and all changes are made to the copy rather than to the original blocks. When the write is complete, all block pointers are changed to point to the new copy. This means that ZFS always writes to free space, most writes are sequential, and old versions of files are not unlinked until a complete new version has been written successfully. ZFS has direct access to the disks and bundles multiple read and write requests into transactions. Most filesystems cannot do this, as they only have access to disk blocks. A transaction either completes or fails, meaning there will never be a write-hole and a filesystem checker utility is not necessary. Because of the transactional design, as additional storage capacity is added it becomes immediately available for writes. To rebalance the data, existing data can be copied so that it is re-written across all of the available disks. As a 128-bit filesystem, the maximum filesystem and file size is 16 exabytes.

ZFS was designed to be a self-healing filesystem. Every time ZFS writes data, it creates and records a checksum for each disk block it writes. Every time ZFS reads data, it reads and validates the checksum for each disk block it reads. Media errors or "bit rot" can cause data to change so that the checksum no longer matches. When ZFS identifies a disk block checksum error on a pool that is mirrored or uses RAIDZ, ZFS replaces the corrupted data with the correct data. Since some disk blocks are rarely read, regular scrubs should be scheduled so that ZFS can read all of the data blocks, validate their checksums, and correct any corrupted blocks. While multiple disks are required to provide redundancy and data correction, ZFS still provides data corruption detection on a system with a single disk. FreeNAS® automatically schedules a monthly scrub for each ZFS pool, and the results are displayed by selecting Pools and clicking (Settings) ➞ Status. Checking scrub results can provide an early indication of potential disk problems.
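
A scrub can also be started and monitored manually from the Shell. This is a minimal sketch; tank is a hypothetical pool name:

  # Start a scrub of the pool "tank" (hypothetical name); ZFS re-reads
  # every data block and verifies its checksum.
  zpool scrub tank

  # Show scrub progress and any checksum errors that were repaired.
  zpool status -v tank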

Unlike traditional UNIX filesystems, it is not necessary to define partition sizes when filesystems are created. Instead, a group of disks, known as a vdev, is built into a ZFS pool. Filesystems are created from the pool as needed. As more capacity is needed, identical vdevs can be striped into the pool. In FreeNAS®, Pools is used to create or extend pools. After a pool is created, it can be divided into dynamically-sized datasets or fixed-size zvols as needed. Datasets can be used to optimize storage for the type of data being stored, as permissions and properties such as quotas and compression can be set on a per-dataset level. A zvol is essentially a raw, virtual block device which can be used for applications that need raw-device semantics, such as iSCSI device extents.
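
As a rough Shell illustration of the relationship between vdevs, pools, datasets, and zvols (the web interface is the supported way to do this in FreeNAS®; the pool, dataset, and disk names are hypothetical):

  # Build a pool named "tank" from a single mirror vdev of two disks.
  zpool create tank mirror ada1 ada2

  # Create a dynamically-sized dataset and give it its own quota.
  zfs create tank/documents
  zfs set quota=50G tank/documents

  # Create a fixed-size 100 GiB zvol, e.g. for an iSCSI device extent.
  zfs create -V 100G tank/vol1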

ZFS supports real-time data compression. Compression happens when a block is written to disk, but only if the written data will benefit from compression. When a compressed block is accessed, it is automatically decompressed. Since compression happens at the block level, not the file level, it is transparent to any applications accessing the compressed data. ZFS pools created on FreeNAS® version 9.2.1 or later use the recommended LZ4 compression algorithm.
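
For example, compression is controlled per dataset and its effect can be checked from the Shell; the dataset name below is hypothetical:

  # Enable the LZ4 algorithm on a dataset; only newly written blocks are compressed.
  zfs set compression=lz4 tank/documents

  # Report the algorithm in use and the achieved compression ratio.
  zfs get compression,compressratio tank/documents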

ZFS provides low-cost, instantaneous snapshots of the specified pool, dataset, or zvol. Due to COW, snapshots initially take no additional space. The size of a snapshot increases over time as changes to the files in the snapshot are written to disk. Snapshots can be used to provide a copy of data at the point in time the snapshot was created. When a file is deleted, its disk blocks are added to the free list; however, the blocks for that file in any existing snapshots are not added to the free list until all referencing snapshots are removed. This makes snapshots a clever way to keep a history of files, useful for recovering an older copy of a file or a deleted file. For this reason, many administrators take snapshots often, store them for a period of time, and store them on another system. Such a strategy allows the administrator to roll the system back to a specific time. If there is a catastrophic loss, an off-site snapshot can restore the system up to the time of the last snapshot: for example, to within 15 minutes of the data loss if snapshots are taken every 15 minutes. Snapshots are stored locally but can also be replicated to a remote ZFS pool. During replication, ZFS does not do a byte-for-byte copy but instead converts a snapshot into a stream of data. This design means that the ZFS pool on the receiving end does not need to be identical and can use a different RAIDZ level, pool size, or compression settings.
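
A minimal Shell sketch of these operations, using hypothetical dataset, snapshot, pool, and host names:

  # Take a snapshot of a dataset at a point in time.
  zfs snapshot tank/documents@2020-12-01

  # Roll the dataset back to that snapshot, discarding later changes.
  zfs rollback tank/documents@2020-12-01

  # Replicate the snapshot to another pool as a stream of data.
  zfs send tank/documents@2020-12-01 | ssh backuphost zfs receive backuppool/documents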

ZFS boot environments provide a method for recovering from a failed upgrade. In FreeNAS®, a snapshot of the dataset the operating system resides on is automatically taken before an upgrade or a system update. This saved boot environment is automatically added to the GRUB boot loader. Should the upgrade or configuration change fail, simply reboot and select the previous boot environment from the boot menu. Users can also create their own boot environments in System ➞ Boot as needed, for example before making configuration changes. This way, the system can be rebooted into a snapshot of the system that did not include the new configuration changes.
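
Boot environments can usually also be listed and created from the Shell with the beadm utility, assuming it is available on the installed version; the boot environment name below is hypothetical:

  # List the existing boot environments and show which one is active.
  beadm list

  # Create a new boot environment before making configuration changes.
  beadm create pre-upgrade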

ZFS provides a write cache in RAM as well as a ZFS Intent Log (ZIL). The ZIL is a storage area that temporarily holds *synchronous* writes until they are written to the ZFS pool. Adding a fast (low-latency), power-protected SSD as a SLOG (Separate Log) device permits much higher performance. This is a necessity for NFS over ESXi, and highly recommended for database servers or other applications that depend on synchronous writes. More detail on SLOG benefits and usage is available in these blog and forum posts:

  The ZFS ZIL and SLOG Demystified
  Some insights into SLOG/ZIL with ZFS on FreeNAS®
  ZFS Intent Log

Synchronous writes are relatively rare with SMB, AFP, and iSCSI, and adding a SLOG to improve performance of these protocols only makes sense in special cases. The zilstat utility can be run from Shell to determine if the system will benefit from a SLOG. See this website for usage information.

ZFS currently uses 16 GiB of space for SLOG. Larger SSDs can be installed, but the extra space will not be used. SLOG devices cannot be shared between pools. Each pool requires a separate SLOG device. Bandwidth and throughput limitations require that a SLOG device must only be used for this single purpose. Do not attempt to add other caching functions on the same SSD, or performance will suffer.

In mission-critical systems, a mirrored SLOG device is highly recommended. Mirrored SLOG devices are required for ZFS pools at ZFS version 19 or earlier. The ZFS pool version is checked from the Shell with zpool get version poolname. A version value of - means the ZFS pool is version 5000 (also known as Feature Flags) or later.
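
As an illustration, a mirrored SLOG can be attached to an existing pool from the Shell roughly as follows; the pool and SSD device names are hypothetical:

  # Add a mirrored SLOG built from two SSDs to the pool "tank".
  zpool add tank log mirror ada3 ada4

  # Check the pool version; a value of "-" indicates version 5000 (Feature Flags).
  zpool get version tank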

ZFS provides a read cache in RAM, known as the ARC, which reduces read latency. FreeNAS® adds ARC stats to top(1) and includes the arc_summary.py and arcstat.py tools for monitoring the efficiency of the ARC. If an SSD is dedicated as a cache device, it is known as an L2ARC. Additional read data is cached here, which can increase random read performance. L2ARC does not reduce the need for sufficient RAM. In fact, L2ARC needs RAM to function. If there is not enough RAM for an adequately-sized ARC, adding an L2ARC will not increase performance. Performance actually decreases in most cases, potentially causing system instability. RAM is always faster than disks, so always add as much RAM as possible before considering whether the system can benefit from an L2ARC device.

When applications perform large amounts of random reads on a dataset small enough to fit into L2ARC, read performance can be increased by adding a dedicated cache device. SSD cache devices only help if the active data is larger than system RAM but small enough that a significant percentage fits on the SSD. As a general rule, L2ARC should not be added to a system with less than 32 GiB of RAM, and the size of an L2ARC should not exceed ten times the amount of RAM. In some cases, it may be more efficient to have two separate pools: one on SSDs for active data, and another on hard drives for rarely used content. After adding an L2ARC device, monitor its effectiveness using tools such as arcstat. To increase the size of an existing L2ARC, stripe another cache device with it. The web interface will always stripe L2ARC, not mirror it, as the contents of L2ARC are recreated at boot. Failure of an individual SSD from an L2ARC pool will not affect the integrity of the pool, but may have an impact on read performance, depending on the workload and the ratio of dataset size to cache size. Note that dedicated L2ARC devices cannot be shared between ZFS pools.
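
A rough Shell sketch of adding and monitoring an L2ARC, using hypothetical pool and device names:

  # Dedicate an SSD as an L2ARC (cache) device for the pool "tank".
  zpool add tank cache ada5

  # Stripe a second cache device with it to grow the L2ARC.
  zpool add tank cache ada6

  # Sample ARC and L2ARC hit statistics every 5 seconds.
  arcstat.py 5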

ZFS was designed to provide redundancy while addressing some of the inherent limitations of hardware RAID such as the write-hole and corrupt data written over time before the hardware controller provides an alert. ZFS provides three levels of redundancy, known as RAIDZ, where the number after the RAIDZ indicates how many disks per vdev can be lost without losing data. ZFS also supports mirrors, with no restrictions on the number of disks in the mirror. ZFS was designed for commodity disks so no RAID controller is needed. While ZFS can also be used with a RAID controller, it is recommended that the controller be put into JBOD mode so that ZFS has full control of the disks.
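
For illustration, the following Shell sketch creates pools with different redundancy layouts; the pool and disk names are hypothetical, and the web interface is the supported method in FreeNAS®:

  # One RAIDZ2 vdev of six disks: any two disks can fail without losing data.
  zpool create tank raidz2 ada1 ada2 ada3 ada4 ada5 ada6

  # A pool of striped two-way mirrors, favoring small random reads.
  zpool create fast mirror ada1 ada2 mirror ada3 ada4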

When determining the type of ZFS redundancy to use, consider whether the goal is to maximize disk space or performance:

  RAIDZ1 maximizes disk space and generally performs well when data is written and read in large chunks (128K or more).
  RAIDZ2 offers better data availability and significantly better mean time to data loss (MTTDL) than RAIDZ1.
  A mirror consumes more disk space but generally performs better with small random reads. For better performance, a mirror is strongly favored over any RAIDZ, particularly for large, uncacheable, random read loads.
  Using more than 12 disks per vdev is not recommended. The recommended number of disks per vdev is between 3 and 9. With more disks, use multiple vdevs.
  Some older ZFS documentation recommends that a certain number of disks is needed for each type of RAIDZ in order to achieve optimal performance. On systems using LZ4 compression, which is the default for FreeNAS® 9.2.1 and higher, this is no longer true. See ZFS RAIDZ stripe width, or: How I Learned to Stop Worrying and Love RAIDZ for details.

These resources can also help determine the RAID configuration best suited to the specific storage requirements:

  Getting the Most out of ZFS Pools
  A Closer Look at ZFS, Vdevs and Performance

Warning

RAID AND DISK REDUNDANCY ARE NOT A SUBSTITUTE FOR A RELIABLE BACKUP STRATEGY. BAD THINGS HAPPEN AND A GOOD BACKUP STRATEGY IS STILL REQUIRED TO PROTECT VALUABLE DATA. See Periodic Snapshot Tasks and Replication Tasks to use replicated ZFS snapshots as part of a backup strategy.

ZFS manages devices. When an individual drive in a mirror or RAIDZ fails and is replaced by the user, ZFS adds the replacement device to the vdev and copies redundant data to it in a process called resilvering. Hardware RAID controllers usually have no way of knowing which blocks were in use and must copy every block to the new device. ZFS only copies blocks that are in use, reducing the time it takes to rebuild the vdev. Resilvering is also interruptible. After an interruption, resilvering resumes where it left off rather than starting from the beginning.
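
For example, replacing a failed disk and watching the resilver from the Shell looks roughly like this, with hypothetical pool and device names:

  # Replace the failed disk ada2 with the new disk ada7 in the pool "tank".
  zpool replace tank ada2 ada7

  # Monitor resilver progress; it resumes where it left off after an interruption.
  zpool status tank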

While ZFS provides many benefits, there are some caveats:

  At 90% capacity, ZFS switches from performance- to space-based optimization, which has massive performance implications. For maximum write performance and to prevent problems with drive replacement, add more capacity before a pool reaches 80%. Current utilization can be checked from the Shell, as shown in the sketch after this list.
  When considering the number of disks to use per vdev, consider the size of the disks and the amount of time required for resilvering, which is the process of rebuilding the vdev. The larger the size of the vdev, the longer the resilvering time. When replacing a disk in a RAIDZ, it is possible that another disk will fail before the resilvering process completes. If the number of failed disks exceeds the number allowed per vdev for the type of RAIDZ, the data in the pool will be lost. For this reason, RAIDZ1 is not recommended for drives over 1 TiB in size.
  Using drives of equal sizes is recommended when creating a vdev. While ZFS can create a vdev using disks of differing sizes, its capacity will be limited by the size of the smallest disk.
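
Pool utilization can be checked from the Shell; a minimal sketch with a hypothetical pool name:

  # Show the pool size, allocated and free space, and the percentage of capacity used.
  zpool list tank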

For those new to ZFS, the Wikipedia entry on ZFS provides an excellent starting point to learn more about its features. These resources are also useful for reference:

  FreeBSD ZFS Tuning Guide
  ZFS Administration Guide
  Becoming a ZFS Ninja Part 1 (video) and Becoming a ZFS Ninja Part 2 (video)
  The Z File System (ZFS)
  ZFS: The Last Word in File Systems - Part 1 (video)
  The Zettabyte Filesystem

27.1. ZFS Feature Flags

To differentiate itself from Oracle ZFS version numbers, OpenZFS uses feature flags. Feature flags are used to tag features with unique names to provide portability between OpenZFS implementations running on different platforms, as long as all of the feature flags enabled on the ZFS pool are supported by both platforms. FreeNAS® uses OpenZFS and each new version of FreeNAS® keeps up-to-date with the latest feature flags and OpenZFS bug fixes.

See zpool-features(7) for a complete listing of all OpenZFS feature flags available on FreeBSD.
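
For example, the feature flags enabled on a pool and those supported by the running OpenZFS version can be inspected from the Shell; the pool name is hypothetical:

  # List the state (enabled, active, or disabled) of each feature flag on the pool.
  zpool get all tank | grep feature@

  # Show every feature flag supported by this OpenZFS implementation.
  zpool upgrade -v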
