Saturday, 11 August 2012

How to get per-client CIFS information by using the "cifs top" command


The cifs top command
The cifs top command displays CIFS client activity based on a number of different criteria. It can show which clients are generating large amounts of load, and it can help identify clients that may be behaving suspiciously.
This command relies on data collected when the cifs.per_client_stats.enable option is "on", so it must be used in conjunction with that option. Administrators should be aware that there is overhead associated with collecting the per-client stats, and this overhead may noticeably affect storage system performance.
Options
-s <sort> specifies how the client stats are to be sorted. Possible values of <sort> are ops, reads, writes, iops, and suspicious. These values may be abbreviated to the first character, and the default is ops. They are interpreted as follows:
ops: sort by the number of operations per second of any type.
suspicious: sort by the number of "suspicious" events sent per second by each client. "Suspicious" events are events that are typical of the patterns seen when viruses, other badly behaved software, or users are attacking a system.
For example:
cifs top -n 3 -s w
If vfiler volumes are licensed, the per-user statistics are only available within a vfiler context. This means the cifs top command must be invoked in a vfiler context.
For example:
System> vfiler run vfiler0 cifs top
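
A minimal end-to-end sketch of the workflow described above (the sort key and client count are just illustrative values):

System> options cifs.per_client_stats.enable on
System> cifs top -n 3 -s w
System> options cifs.per_client_stats.enable off

Turning the option back off once you are done avoids the per-client collection overhead mentioned earlier.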

NFS: Network File System

NFS is a widely used protocol for sharing files across networks.  It is designed to be stateless to allow for easy recovery in the event of server failure.
As a file server, the storage system provides services that include the mount daemon (mountd), the Network Lock Manager (nlm_main), the NFS daemon (nfsd), the status monitor (sm_l_main), the quota daemon (rquot_l_main), and portmap or rpcbind. Each of these services is required for successful operation of NFS.
File systems can be mounted persistently across reboots by updating the client's /etc/fstab file, or on demand by running the automounter service, which mounts a file system when it is accessed and unmounts it if it is not accessed within a few minutes.
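A minimal /etc/fstab entry for a persistent NFS mount (the filer host name, export path, and mount point are hypothetical):

filer1:/vol/vol1   /mnt/vol1   nfs   rw,hard,intr   0 0

With this entry in place, the client mounts the export automatically at boot, or on demand with "mount /mnt/vol1".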
What does it mean for a protocol to be stateful or stateless?
If a protocol is stateless, the server is not required to maintain any session state between messages; instead, all session state is maintained by the client. With a stateless protocol, each request from client to server must contain all of the information necessary to understand the request and cannot take advantage of any stored context on the server. Although NFS is considered a stateless protocol in theory, it is not stateless in practice.
NIS: Network Information Service: provides a simple network lookup service consisting of databases and processes. Its purpose is to distribute information that has to be known throughout the network to all machines on the network. Information likely to be distributed by NIS includes:
1. Login names/passwords/home directories (/etc/passwd)
2. Group information (/etc/group)
3. Host names and IP addresses (/etc/hosts)
Some commands on NetApp storage for troubleshooting NIS (a short usage sketch follows this list):
1. ypcat mapname: prints all of the values in the specified NIS map.
2. ypgroup: displays the group file entries that have been locally cached from the NIS server.
3. ypmatch key mapname: prints every value in the NIS map mapname whose key matches one of the keys given.
4. ypwhich: prints the name of the current NIS server if NIS is enabled.
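
A minimal sketch of these commands on the storage console (the map names and key are illustrative; the maps available depend on your NIS configuration):

System> ypwhich
System> ypmatch root passwd.byname
System> ypcat hosts.byname
System> ypgroup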
Some commands for troubleshooting NFS (a combined exports workflow sketch follows the note below):
1. Keep the option nfs.export.auto-update on so that the /etc/exports file is automatically updated when a volume is created, renamed, or destroyed.
2. exportfs: displays all current exports in memory.
3. exportfs -p [options] path: adds an export to the /etc/exports file and to memory.
4. exportfs -r: reloads only the exports in the /etc/exports file.
5. exportfs -uav: unexports all exports.
6. exportfs -u [path]: unexports a specific export.
7. exportfs -z [path]: unexports an export and removes it from /etc/exports.
8. exportfs -s pathname: verifies the actual path to which a volume is exported.
9. exportfs -q pathname: displays the export options for a file system path.
NOTE: Be careful not to export resources with the -anon option. If NFS is licensed on the storage system and you specify exports with the -anon option, everyone is able to mount the resource, which can cause a security risk.
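
A minimal exports workflow that combines the commands above (the volume path and host name are hypothetical):

System> exportfs -p rw=host1,root=host1 /vol/vol1
System> exportfs
System> exportfs -q /vol/vol1
System> exportfs -z /vol/vol1

The -p step persists the export in /etc/exports, the bare exportfs and the -q step verify it, and -z removes it again.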
WAFL credential cache
The WAFL credential cache (WCC) contains the cached user mappings from UNIX user identities (UID and GID) to Windows identities (SIDs for users and groups). After a UNIX-to-Windows user mapping is performed (including group membership), the results are stored in the WCC.
The wcc command does not look in the WCC, but performs a current user-mapping operation and displays the result. This command is useful for troubleshooting user-mapping issues (see the sketch below).
NOTE: the cifs.trace_login option must be enabled.
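
A minimal sketch of testing a mapping with wcc (the flags follow the 7-Mode wcc usage of -u for a UNIX name and -s for a Windows name; the user and domain names are hypothetical):

System> options cifs.trace_login on
System> wcc -u jdoe
System> wcc -s DOMAIN\jdoe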
To troubleshoot NFS performance issues, the following data collection commands are useful.
1. nfsstat: displays statistical information about NFS and remote procedure calls (RPC) for the storage system (a collection sketch follows below).
Syntax: nfsstat <interval> [ip_address | name] {-h, -l, -z, -c, -t, -d, -C}
It can display a one-time or continuous summary of statistics.
Per-client stats can be collected and displayed via the nfs.per_client_stats.enable option.
If an optional IP address or host name is specified with the -h option, that client's statistics are displayed.
nfsstat output with -d: The nfsstat -d command displays reply cache statistics as well as incoming messages, including allocated mbufs. This diagnostic option allows for debugging of all NFS-related traffic on the network.
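
A minimal per-client collection sketch (the client IP address is hypothetical):

System> options nfs.per_client_stats.enable on
System> nfsstat -z
System> nfsstat -h 10.0.0.5
System> nfsstat -d

Here -z zeroes the counters before the measurement window, -h shows the statistics for one client, and -d adds the reply cache and mbuf diagnostics.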
2. NFS mount monitoring: nfs.mountd.trace enables tracing of denied mount requests against the storage system.
Enable this option only during a debug session, as there is a possibility of numerous syslog hits during DoS attacks.
Enter the following command:
options nfs.mountd.trace on
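
A minimal debug session would then look like this:

System> options nfs.mountd.trace on
(reproduce the failing mount from the client and check the syslog)
System> options nfs.mountd.trace off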


Thursday, 2 August 2012

TYPES OF VMWARE DATASTORES

An introduction to storage in virtual infrastructure
VMware ESX supports three types of storage configuration when connecting to a shared storage array:
VMFS: Virtual Machine File System datastore.
NAS: Network-attached storage datastore.
RDM: Raw device mapping datastore.
Shared storage is required for HA (high availability), DRS (Distributed Resource Scheduler), vMotion, and Fault Tolerance.

The 80/20 rule:
This is a well-known rule in virtual data center design. The 80/20 rule means that 80% of all systems virtualized are part of consolidation efforts. The remaining 20% of the systems are classified as business-critical applications. Although these applications can be virtualized successfully, they tend to be deployed on shared storage pools but in what we refer to as isolated datasets.

THE CHARACTERISTICS OF CONSOLIDATION DATASETS

Consolidation datasets have the following characteristics:
• The VMs do not require application-specific backup and restore agents.
• The dataset is the largest in terms of the number of VMs and potentially the total amount of storage addressed.
• Individually, each VM might not address a large dataset or have demanding IOP requirements; however, the collective whole might be considerable.
• These datasets are ideally served by large, shared, policy-driven storage pools (or datastores).

THE CHARACTERISTICS OF ISOLATED DATASETS (FOR BUSINESS-CRITICAL APPLICATIONS)

Isolated datasets have the following characteristics:
• The VMs require application-specific backup and restore agents.
• Each individual VM might address a large amount of storage and/or have high I/O requirements.
• Storage design and planning apply in the same way as with physical servers.
• These datasets are ideally served by individual, high-performing, nonshared datastores.

Consolidated datasets work well with Network File System (NFS) datastores because this design provides greater flexibility in terms of capacity than SAN datastores when managing hundreds or thousands of VMs. Isolated datasets run well on all storage protocols; however, some tools or applications might have restrictions around compatibility with NFS and/or VMFS.
Unless your data center is globally unique, the evolution of your data center from physical to virtual will follow the 80/20 rule. In addition, the native multiprotocol capabilities of NetApp and VMware will allow you to virtualize more systems more quickly and easily than you could with a traditional storage array platform.


VMFS DATASTORES
The VMware VMFS is a high-performance clustered file system that provides datastores, which are shared storage pools. A VMFS datastore can be configured with logical unit numbers (LUNs) accessed by FC, iSCSI, or FCoE. VMFS allows a traditional LUN to be accessed simultaneously by every ESX server in the cluster.
Applications that traditionally require storage design considerations to ensure their performance can be virtualized and served by VMFS. With these types of deployments, NetApp recommends deploying the virtual disks on a datastore that is connected to all nodes in a cluster but is only accessed by a single VM.

This storage design can be challenging in the area of performance monitoring and scaling. Because shared datastores serve the aggregated I/O demands of multiple VMs, this architecture doesn’t natively allow a storage array to identify the I/O load generated by an individual VM.

SPANNED VMFS DATASTORES
VMware provides the ability to use VMFS extents to concatenate multiple LUNs into a single logical datastore, which is referred to as a spanned datastore. Although a spanned datastore can overcome the 2TB LUN size limit, it can hurt performance, because each underlying LUN has only a limited capacity to handle IOPs.
NetApp does not recommend spanned datastores.

NFS DATASTORES
vSphere allows customers to leverage enterprise-class NFS arrays to provide datastores with concurrent access by all of the nodes in an ESX cluster. The access method is very similar to the one used with VMFS.

Deploying VMware with NetApp's advanced NFS results in a high-performance, easy-to-manage implementation that provides VM-to-datastore ratios that cannot be accomplished with other storage protocols such as FC. This architecture can result in a 10x increase in datastore density with a correlating reduction in the number of datastores. With NFS, the virtual infrastructure receives operational savings because there are fewer storage pools to provision, manage, back up, replicate, and so on.
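
A minimal sketch of attaching such an NFS datastore from the ESX service console, using the classic esxcfg-nas command (the filer host, export path, and datastore name are hypothetical):

# esxcfg-nas -a -o filer1 -s /vol/datastore1 nfs_ds1
# esxcfg-nas -l

The -a form adds the NFS export as a datastore, and -l lists the NAS datastores currently configured on the host.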

SAN RAW DEVICE MAPPING
ESX gives VMs direct access to LUNs for specific use cases such as P2V clustering or storage vendor management tools. This type of access is called raw device mapping (RDM), and it is supported with the FC, iSCSI, and FCoE protocols. In this design, ESX acts as a connection proxy between the VM and the storage array. RDM provides direct LUN access to the host, so high individual disk I/O performance can be achieved and disk performance can be easily monitored.

RDM LUNS ON NETAPP
RDM is available in two modes: physical and virtual. Both modes support key VMware features such as vMotion and can be used in both HA and DRS clusters.

NetApp enhances the use of RDMs by providing array-based LUN-level thin provisioning, production-use data deduplication, advanced integration components such as SnapDrive, application-specific Snapshot backups with the SnapManager for applications suite, and FlexClone zero-cost cloning of RDM-based datasets.
Note: VMs running MSCS must use the Most Recently Used (MRU) path selection policy. Round Robin is documented by VMware as unsupported for RDM LUNs used with MSCS.
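
A minimal sketch of pinning an RDM LUN to the MRU policy, assuming an ESXi 5.x host where esxcli is available (the naa device identifier is hypothetical):

# esxcli storage nmp device set --device naa.60a98000486e2f66 --psp VMW_PSP_MRU
# esxcli storage nmp device list --device naa.60a98000486e2f66

The second command confirms which path selection policy the device is now using.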

Datastore supported features

Capability/Feature                      | FC/FCoE                | iSCSI                | NFS
Format                                  | VMFS or RDM            | VMFS or RDM          | NetApp WAFL
Maximum number of datastores or LUNs    | 256                    | 256                  | 64
Maximum datastore size                  | 64TB                   | 64TB                 | 16TB or 100TB*
Maximum LUN/NAS file system size        | 2TB minus 512 bytes    | 2TB minus 512 bytes  | 16TB or 100TB*
Optimal queue depth per LUN/file system | 64                     | 64                   | N/A
Available link speeds                   | 4 and 8Gb FC and 10GbE | 1 and 10GbE          | 1 and 10GbE

*100TB requires 64-bit aggregates.