TYPES OF VMWARE DATASTORES
An introduction to storage in virtual infrastructure
VMware ESX supports three types of storage configuration when connecting to a shared storage array:
• VMFS: Virtual Machine File System datastore.
• NAS: network-attached storage datastore.
• RDM: raw device mapping datastore.
Shared storage is required for HA (High Availability), DRS (Distributed Resource Scheduler), vMotion, and Fault Tolerance.
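For orientation, here is a minimal sketch using pyVmomi (VMware's Python SDK for the vSphere API) that lists every datastore known to vCenter along with its type, which surfaces as VMFS or NFS in the API. The vCenter hostname and credentials are placeholders for illustration.

```python
# List each datastore visible to vCenter with its type and capacity.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        print(f"{s.name}: type={s.type}, capacity={s.capacity / 2**30:.0f} GiB")
    view.Destroy()
finally:
    Disconnect(si)
```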
THE 80/20 RULE
This is a well-known rule in virtual data center design. It means that roughly 80% of all virtualized systems are the product of consolidation efforts. The remaining 20% of systems are classified as business-critical applications. Although these applications can be virtualized successfully, they tend not to be deployed on shared storage pools, but rather in what we refer to as isolated datasets.
THE CHARACTERISTICS OF CONSOLIDATION DATASETS
Consolidation datasets have the following characteristics:
• The VMs do not require application-specific backup and restore agents.
• The dataset is the largest in terms of the number of VMs and potentially the total amount of storage addressed.
• Individually, each VM might not address a large dataset or have demanding IOP requirements; however, the collective whole might be considerable.
• These datasets are ideally served by large, shared, policy-driven storage pools (or datastores).
THE CHARACTERISTICS OF ISOLATED DATASETS (FOR BUSINESS-CRITICAL APPLICATIONS)
Isolated datasets have the following characteristics:
• The VMs require application-specific backup and restore agents.
• Each individual VM might address a large amount of storage and/or have high I/O requirements.
• Storage design and planning apply in the same way as with physical servers.
• These datasets are ideally served by individual, high-performing, nonshared datastores.
Consolidation datasets work well with Network File System (NFS) datastores because this design provides greater flexibility in terms of capacity than SAN datastores when managing hundreds or thousands of VMs. Isolated datasets run well on all storage protocols; however, some tools or applications might have restrictions around compatibility with NFS and/or VMFS.
Unless your
data center is globally unique, the evolution of your data center from physical
to virtual will follow the 80/20 rule. In addition, the native multiprotocol
capabilities of NetApp and VMware will allow you to virtualize more systems
more quickly and easily than you could with a traditional storage array platform.
VMFS DATASTORES
VMware VMFS is a high-performance clustered file system that provides datastores, which are shared storage pools. A VMFS datastore can be configured with logical unit numbers (LUNs) accessed over FC, iSCSI, or FCoE. VMFS allows a traditional LUN to be accessed simultaneously by every ESX server in the cluster.
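The shared-access model is visible in the API: each datastore reports the hosts that mount it and whether it is configured for multiple-host access. The following sketch (again pyVmomi, reusing the connection `si` from the earlier example) prints that information for each VMFS datastore.

```python
# Show which hosts mount each VMFS datastore, illustrating shared LUN access.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    if ds.summary.type != "VMFS":
        continue
    hosts = [mount.key.name for mount in ds.host]  # DatastoreHostMount entries
    print(f"{ds.summary.name}: multipleHostAccess={ds.summary.multipleHostAccess}, "
          f"mounted on {len(hosts)} host(s): {', '.join(hosts)}")
view.Destroy()
```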
Applications that traditionally require careful storage design to meet their performance requirements can also be virtualized and served by VMFS. With these types of deployments, NetApp recommends deploying the virtual disks on a datastore that is connected to all nodes in a cluster but is accessed by only a single VM.
This storage
design can be challenging in the area of performance monitoring and scaling.
Because shared datastores serve the aggregated I/O demands of multiple VMs,
this architecture doesn’t natively allow a storage array to identify the I/O
load generated by an individual VM.
SPANNED VMFS DATASTORES
Vmware
provides the ability of VMFS extents to concatenate multiple LUN into a single logical
Datastore, which is referred as a spanned datastore. Although the spanned
Datastore can overcome the 2TB lun size limit, but it will affect the
performance of the lun, because each size lun have the capacity to handle the I/Ops.
NetApp does
not recommend the spanned datstores.
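A spanned datastore is easy to spot programmatically: its VMFS volume reports more than one extent. This sketch (pyVmomi, connection `si` assumed as before) flags any spanned VMFS datastores in the inventory.

```python
# Flag spanned VMFS datastores by counting extents (backing LUN partitions).
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    info = ds.info
    if not isinstance(info, vim.host.VmfsDatastoreInfo):
        continue  # skip NFS and other non-VMFS datastores
    extents = info.vmfs.extent  # one HostScsiDiskPartition per backing LUN
    label = "SPANNED" if len(extents) > 1 else "single-extent"
    print(f"{ds.summary.name}: {label}, LUNs: {[e.diskName for e in extents]}")
view.Destroy()
```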
NFS DATASTORES
vSphere allows customers to leverage enterprise-class NFS arrays to provide datastores with concurrent access by all of the nodes in an ESX cluster. This access method is very similar to the one used with VMFS.
Deploying VMware with NetApp's advanced NFS results in a high-performance, easy-to-manage implementation that provides VM-to-datastore ratios that cannot be accomplished with other storage protocols such as FC. This architecture can result in a 10x increase in datastore density with a correlating reduction in the number of datastores. With NFS, the virtual infrastructure receives operational savings because there are fewer storage pools to provision, manage, back up, replicate, and so on.
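Mounting an NFS export as a datastore is a single API call per host. In the sketch below (pyVmomi), the NFS server address, export path, and datastore name are assumptions to replace with your own; `host` is a vim.HostSystem object, and each host in the cluster must mount the same export for shared access.

```python
# Mount an NFS export from the array as a datastore on one ESX host.
from pyVmomi import vim

spec = vim.host.NasVolume.Specification()
spec.remoteHost = "192.168.1.50"       # NFS server (e.g., a NetApp interface)
spec.remotePath = "/vol/vmware_ds1"    # exported path on the array
spec.localPath = "netapp_nfs_ds1"      # datastore name as seen by vSphere
spec.accessMode = "readWrite"

ds = host.configManager.datastoreSystem.CreateNasDatastore(spec)
print(f"Mounted NFS datastore: {ds.summary.name}")
```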
SAN RAW DEVICE MAPPINGS
ESX gives VMs direct access to LUNs for specific use cases such as P2V clustering or storage vendor management tools. This type of access is called a raw device mapping (RDM), and it supports the FC, iSCSI, and FCoE protocols. In this design, ESX acts as a connection proxy between the VM and the storage array. RDMs provide direct LUN access to the host, so VMs can achieve high individual disk I/O performance, and disk performance can be easily monitored.
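As one illustration, attaching a LUN to a VM as an RDM can be done through a VM reconfiguration. The sketch below (pyVmomi) is hedged: the device path, controller key, and the VM object `vm` are placeholders, and exact field requirements can vary by vSphere version.

```python
# Attach an existing LUN to a VM as a raw device mapping (RDM).
from pyVmomi import vim

backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
backing.deviceName = "/vmfs/devices/disks/naa.60a98000486e5334524a6c4f63624f59"  # placeholder LUN
backing.compatibilityMode = "physicalMode"   # or "virtualMode"
backing.diskMode = "independent_persistent"
backing.fileName = ""  # let vSphere place the RDM pointer file

disk = vim.vm.device.VirtualDisk()
disk.backing = backing
disk.controllerKey = 1000  # key of an existing SCSI controller on the VM
disk.unitNumber = 1
disk.key = -1              # negative key = new device

change = vim.vm.device.VirtualDeviceSpec()
change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
change.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
change.device = disk

task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```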
RDM LUNS ON NETAPP
RDMs are available in two modes: physical and virtual. Both modes support key VMware features such as vMotion and can be used in both HA and DRS clusters.
NetApp
enhances the use of RDMs by providing array-based LUN-level thin provisioning,
production-use data deduplication, advanced integration components such as
SnapDrive, application-specific Snapshot backups with the SnapManager for
applications suite, and FlexClone zero-cost cloning of RDM-based datasets.
Note: VMs running Microsoft Cluster Service (MSCS) must use the Most Recently Used (MRU) path selection policy. Round Robin is documented by VMware as unsupported for RDM LUNs used with MSCS.
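The MRU policy can be applied per LUN through the host's storage system. The sketch below (pyVmomi) assumes a vim.HostSystem object `host` and a placeholder device identifier; the equivalent CLI is `esxcli storage nmp device set --device <naa.id> --psp VMW_PSP_MRU`.

```python
# Set the path selection policy of a specific LUN to Most Recently Used (MRU).
from pyVmomi import vim

storage = host.configManager.storageSystem
policy = vim.host.MultipathInfo.LogicalUnitPolicy()
policy.policy = "VMW_PSP_MRU"

target = "naa.60a98000486e5334524a6c4f63624f59"  # placeholder RDM LUN
# Map the canonical name to the ScsiLun key, then apply the policy.
keys = [l.key for l in storage.storageDeviceInfo.scsiLun
        if l.canonicalName == target]
for mp_lun in storage.storageDeviceInfo.multipathInfo.lun:
    if mp_lun.lun in keys:
        storage.SetMultipathLunPolicy(lunId=mp_lun.id, policy=policy)
```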
DATASTORE SUPPORTED FEATURES

| Capability/Feature                      | FC/FCoE                | iSCSI               | NFS            |
|-----------------------------------------|------------------------|---------------------|----------------|
| Format                                  | VMFS or RDM            | VMFS or RDM         | NetApp WAFL    |
| Maximum number of datastores or LUNs    | 256                    | 256                 | 64             |
| Maximum datastore size                  | 64TB                   | 64TB                | 16TB or 100TB* |
| Maximum LUN/NAS file system size        | 2TB minus 512 bytes    | 2TB minus 512 bytes | 16TB or 100TB* |
| Optimal queue depth per LUN/file system | 64                     | 64                  | N/A            |
| Available link speeds                   | 4 and 8Gb FC and 10GbE | 1 and 10GbE         | 1 and 10GbE    |

*100TB requires 64-bit aggregates.