Sunday, 26 July 2015

VNX Systems Storage Pools and RAID Groups and Their LUNs (Thick and Thin)

VNX systems allow the creation of two types of block storage grouping: RAID Groups and Pools.

A RAID Group (RG) is the traditional way to group disks into sets. Rules regarding the number of disks allowed in an RG, and the minimum/maximum number for a given RAID type, are enforced by the system. Supported RAID types are RAID 1, 1/0, 3, 5, and 6.
Note: Only Traditional LUNs can be created on an RG.

Pools are required for FAST VP (auto-tiering) and may have mixed disk types (Flash, SAS, and NL-SAS). The number of disks in a single pool depends on the VNX model, and is the maximum number of disks in the storage system less 4. As an example, the VNX5700 has a maximum capacity of 500 disks and a maximum pool size of 496 disks. The remaining 4 disks are the system disks and cannot be part of a pool. Only RAID 5, 6, and 1/0 are supported in pools, and the entire pool (all tiers) will be one RAID type.
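To make that sizing rule concrete, here is a minimal Python sketch of the calculation. The function name is my own, and the only capacity figure taken from this post is the VNX5700's 500 disks; values for other models would need to be checked against the spec sheets.

```python
# Maximum pool size = maximum disks supported by the array minus the 4 system disks.
SYSTEM_DISKS = 4

def max_pool_disks(max_system_disks):
    """Return the largest number of disks a single pool can contain."""
    return max_system_disks - SYSTEM_DISKS

# Example from this post: the VNX5700 supports 500 disks in total.
print(max_pool_disks(500))   # -> 496
```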

Traditional LUNs are created on RGs. They exhibit the highest level of performance of any LUN type, and are recommended where predictable performance is required. All LUNs in an RG will be of the same RAID type.

Two different types of LUN can be created on Pools: Thick LUNs and Thin LUNs.
Thick LUN: When a thick LUN is created, the entire space that will be used for the LUN is reserved; if there is insufficient space in the pool, the thick LUN will not be created. An initial allocation of 3 GiB of slices is made, and further slices are allocated as needed. These slices contain 1 GiB of contiguous Logical Block Addresses (LBAs), so when a slice is written to for the first time, it is allocated to the thick LUN. Because tracking happens at a granularity of 1 GiB, the amount of metadata is relatively low, and the lookups required to find the location of a slice in the pool are fast. Because lookups are still required, thick LUN access will be slower than access to traditional LUNs.
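To picture what slice-level tracking means, here is a rough, simplified Python sketch of the idea. The Pool and ThickLUN classes are purely illustrative, not how the array implements pools internally: the point is simply that the LUN only needs one map entry per 1 GiB slice, and a slice is allocated the first time it is written.

```python
GiB = 1024 ** 3
SLICE_SIZE = 1 * GiB          # thick LUNs are tracked at 1 GiB granularity

class Pool:
    """Toy pool that hands out numbered 1 GiB slices."""
    def __init__(self):
        self.next_slice = 0
    def allocate_slice(self):
        self.next_slice += 1
        return self.next_slice

class ThickLUN:
    """Toy model: maps each 1 GiB slice of the LUN to a slice in the pool."""
    def __init__(self, size_bytes):
        self.size = size_bytes
        self.slice_map = {}        # slice index -> pool slice id

    def write(self, lba_offset, pool):
        idx = lba_offset // SLICE_SIZE
        if idx not in self.slice_map:          # first write to this slice
            self.slice_map[idx] = pool.allocate_slice()
        return self.slice_map[idx]

pool = Pool()
lun = ThickLUN(10 * GiB)
lun.write(0, pool)                 # first write to slice 0 triggers allocation
lun.write(5 * GiB + 4096, pool)    # write into slice 5
print(len(lun.slice_map))          # -> 2 slices allocated, so only 2 map entries
```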

Thin LUN: Thin LUNs also allocate 1 GiB slices when space is needed, but the granularity inside those slices is at the 8 KiB block level. Any 1 GiB slice will be allocated to only one thin LUN, but the 8 KiB blocks will not necessarily be from contiguous LBAs. Oversubscription is allowed, so the total size of the thin LUNs in a pool can exceed the size of the available physical data space; monitoring is required to ensure that out-of-space conditions do not occur. There is appreciably more overhead associated with thin LUNs than with thick LUNs and traditional LUNs, and performance is substantially reduced as a result.
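The difference in tracking granularity, and the effect of oversubscription, can be sketched with a little arithmetic in Python. This is an illustration only; the pool and LUN sizes below are made up.

```python
GiB = 1024 ** 3
KiB = 1024
BLOCK = 8 * KiB   # thin LUNs track space at 8 KiB granularity

# Rough comparison of how many map entries each LUN type needs for the same
# amount of written data - this is where the extra thin-LUN metadata and
# lookup overhead comes from.
written = 500 * GiB
thick_entries = written // (1 * GiB)   # one entry per 1 GiB slice
thin_entries  = written // BLOCK       # one entry per 8 KiB block
print(thick_entries, thin_entries)     # -> 500 vs 65,536,000 entries

# Oversubscription: the sum of the thin LUN sizes may exceed the pool's
# physical capacity, so consumed space has to be monitored.
pool_capacity  = 10 * 1024 * GiB           # 10 TiB of physical space
thin_lun_sizes = [4 * 1024 * GiB] * 4      # four 4 TiB thin LUNs
print(sum(thin_lun_sizes) > pool_capacity) # -> True: pool is oversubscribed
```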

Metadata
Pool LUNs also have metadata, and metadata is associated with the use of both thick LUNs and thin LUNs. The metadata is used to locate the data on the private LUNs used in the pool structure. The amount of metadata depends on the type and size of the LUN.

For example: Thin LUN metadata (GB) = LUN capacity × 0.02 + 3 GB
e.g. a 500 GB thin LUN (when fully allocated) = 13 GB of metadata.
Thick LUN metadata (GB) = LUN capacity × 0.001 + 3 GB
e.g. a 500 GB thick LUN = 3.5 GB of metadata.
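Those two formulas are easy to turn into a small helper when sizing pools. This is just the formulas above wrapped in Python, with function names of my own choosing.

```python
def thin_lun_metadata_gb(capacity_gb):
    """Thin LUN metadata (GB) = LUN capacity * 0.02 + 3 GB (fully allocated)."""
    return capacity_gb * 0.02 + 3

def thick_lun_metadata_gb(capacity_gb):
    """Thick LUN metadata (GB) = LUN capacity * 0.001 + 3 GB."""
    return capacity_gb * 0.001 + 3

print(thin_lun_metadata_gb(500))    # -> 13.0 GB
print(thick_lun_metadata_gb(500))   # -> 3.5 GB
```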

So from the above example we can see that the thin LUN requires around 9.5 GB more metadata space than the thick LUN.

Positioning Thin LUNs
Thin LUNs should be positioned in block environments where space saving and storage efficiency outweigh performance as the main goals. Areas where storage space is traditionally over-allocated, and where the thin LUN "allocate space on demand" functionality would be an advantage, include user home directories and shared data space.

If FAST VP is a requirement, and Pool LUNs are being proposed for that reason, it is important to remember that thick LUNs achieve better performance than thin LUNs.

Be aware that thin LUNs are not recommended in certain environments, such as Exchange 2010 and file systems on VNX.

Thin LUN Performance Implications
Space is assigned to thin LUNs at a granularity of 8 KiB (inside a 1 GiB slice). The implication here is that tracking is required for each 8 KiB piece of data saved on a thin LUN, and that tracking involves capacity overhead in the form of metadata. In addition, since the location of any 8 KiB piece of data cannot be predicted, each data access to a thin LUN requires a lookup to determine the data location. If the metadata is not currently memory-resident, a disk access will be required, and an extended response time will result. This makes thin LUNs appreciably slower than traditional LUNs, and slower than thick LUNs.

Because thin LUNs make use of this additional metadata, recovery of thin LUNs after certain types of failure (e.g. cache dirty faults) will take appreciably longer than recovery for thick LUNs or traditional LUNs. A strong recommendation, therefore, is to place mission-critical applications on thick LUNs or traditional LUNs.

In some environments, those with a high locality of data reference, FAST Cache may help to reduce the performance impact of the metadata lookup.

Thin LUNs should not be used for VNX file systems. Thin LUNs should never be used where high performance is an important goal.
Pool space should be monitored carefully (thin LUNs allow pool oversubscription, whereas thick LUNs do not). The system issues an alert when the consumption of any pool reaches a user-selectable limit. By default, this limit is 70%, which allows ample time for the user to take any corrective action required.
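A trivial sketch of that kind of monitoring check, assuming you already have consumed and total capacity figures for the pool; the function name and the example numbers are mine, not from any VNX tool.

```python
def pool_alert(consumed_gb, pool_capacity_gb, threshold_pct=70):
    """Return True if pool consumption has crossed the alert threshold
    (70% by default, matching the system's default setting)."""
    used_pct = consumed_gb / pool_capacity_gb * 100
    return used_pct >= threshold_pct

# Hypothetical pool: 10,000 GB usable, 7,500 GB consumed -> alert fires.
print(pool_alert(7500, 10000))   # -> True
```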

Hope you all enjoyed reading it.