Wednesday, 11 April 2012

Some interesting cmd of netapp


mkfile: this command creates a file on the NetApp; you can create a file of a given size in any volume. For example:
vipul1*> mkfile 10kb  /vol/vipul/vipultree/uk
vipul1*> ls /vol/vipul/vipultree
uk
coverletter.txt
Data Migration Clariion to VNX using SAN Copy.pdf
Data Migration Clariion to VNX using SAN Copy1.pdf
Data Migration Clariion to VNX using SAN Copy2.pdf
rdfile: this command reads the content of a file, so you can view the contents of any file in any volume. For example:
vipul1*> rdfile /etc/uk
Vipul
dd: this command copies data from one file to another; it can be used when ndmpcopy is not working. For example:
vipul1*> dd if=/vol/vol0/etc/uk of=/vol/vol0/etc/vk
vipul1*> rdfile /etc/vk
Vipul
ls: this command lists the contents of a directory. For example:
vipul1*> ls /vol/vipul/vipultree
uk
coverletter.txt
Data Migration Clariion to VNX using SAN Copy.pdf
Data Migration Clariion to VNX using SAN Copy1.pdf
Data Migration Clariion to VNX using SAN Copy2.pdf
mv: this command moves or renames a file. It works within the same volume, but not between different volumes. For example:
vipul1*> mv /vol/vol0/etc/vk /vol/vol0/etc/uk
vm_stat: displays WAFL space allocation statistics. For example:
vipul1*> vm_stat
System
        Total Pages in System: 130816
        Total Pages Allocated: 130567
        Total Free Pages: 249
        Non-WAFL Free Pages: 0
        WAFL Free Pages: 249
WAFL
        Pages From WAFL: 8867
        Pages Returned To WAFL: 2668
        Failures while stealing from WAFL: 0
        Times Pages stolen immediately: 8867
        Free Pages in WAFL: 7427
        Free buffers in WAFL: 74278
        WAFL recycled bufs: 3661
Sleep/Wakes
        Times thread slept for pages: 60
        Times woken up for pages: 60
        Times PTE is alloced while sleeping: 0
Hseg
             <8k   <16k   <64k  <512k   <2MB   <8MB  <16MB    big chunks        bytes
alloc          0    237    167    107      2      0      0      1    514     43069440
active         0      3      0      0      0      0      1      0      4      9359360
backup         0      0      0      0      0      0      0      0      0            0

Buffers MemoryPool
      1 portmap
      1 portmap
rm: this command deletes a file from a qtree. For example:
vipul1*> ls /vol/vipul/vipultree
uk
coverletter.txt
vipul1*> rm /vol/vipul/vipultree/uk
vipul1*> ls /vol/vipul/vipultree
coverletter.txt
filersio: this command is used for testing; you can run it to exercise the filer, measure its performance, and check whether there is any issue. For example:
vipul1*> filersio asyncio_active 50 -r 50 4 0 10m 60 5 /vol/vol0/filersio.test -create -print_stats 5
filersio: workload initiated asynchronously. Results will be displayed on the
console after completion
vipul1*> filersio: starting workload asyncio_active, instance 0

Read I/Os       Avg. read       Max. read       Write I/Os      Avg. write      Max write
                latency(ms)     latency(ms)                     latency(ms)     latency(ms)
16898           0               149             16926           1               821
8610            0               22              8571            3               2641
5910            0               966             5715            5               2760
11449           0               17              11431           2               2500
11368           0               65              11426           2               2321
Wed Apr 11 18:00:00 IST [kern.uptime.filer:info]:   6:00pm up  2:41 0 NFS ops, 0 CIFS ops, 0 HTTP ops, 0 FCP ops, 98 iSCSI ops
14116           0               18              13952           1               1151
8363            0               11              8580            2               2699
18068           0               31              17934           1               2780
10279           0               24              10292           2               1180
5690            0               15              5653            1               1399
Statistics for active_active model, instance 0
Running for 61s
Total read latency(ms)  31531
Read I/Os               113608
Avg. read IOPS          1862
Avg. read latency(ms)   0
Max read latency(ms)    966
Total write latency(ms) 275993
Write I/Os              113392
Avg. write IOPS         1858
Avg. write latency(ms)  2
Max write latency(ms)   3450
filersio: instance 0: workload completed successfully
hammer: this command is also good for testing the performance of your filer, but it consumes a lot of CPU. It literally hammers the filer and records its performance. Run it only under the guidance of a NetApp expert; don't run it casually, because it is a dangerous command and can panic your filer.
For example:
vipul1*> hammer
usage: hammer [abort|pause|restart|status|
       [-f]<# Runs><fileName><# BlocksInFile> (<# Runs> == -1 runs hammer forever)|
       fill <writeSize> (use all available disk space)]
vipul1*> hammer -f 5 /vol/vol0/hammer.txt 400
vipul1*> Wed Apr 11 18:08:18 IST [blacksmith:warning]: blacksmith #0: Starting work.
Wed Apr 11 18:08:25 IST [blacksmith:info]: blacksmith #0: No errors detected. Stopping work
getXXbyYY: this is a very useful command for looking up information about users, hosts, etc. from the filer. Just look at its sub-commands and you will see how useful it is.
vipul1*> getXXbyYY help
usage: getXXbyYY <sub-command> <name>
Where sub-command is one of
gethostbyname_r - Resolves host name to IP address from configured DNS server, same as nslookup
gethostbyaddr_r - Retrieves IP address for host name from configured DNS server, same as reverse lookup
netgrp - Checks group membership for given host from LDAP/Files/NIS
getspwbyname_r - Displays user information using shadow file
getpwbyname_r - Displays user information including encrypted password from LDAP/Files/NIS
getpwbyuid_r - Same as above however you provide uid in this command rather than user name
getgrbyname - Displays group name and gid from LDAP/Files/NIS
getgrbygid - Same as above however you provide gid in this command rather than group name
getgrlist - Shows given user's gid from LDAP/Files/NIS
For more information, try 'man na_getXXbyYY'
vipul1*> getXXbyYY gethostbyname_r root
host entry for root not found: Host not found (authoritative)
vipul1*> getXXbyYY gethostbyname_r vipul
name: vipul
aliases:
IPv4 addresses: 192.168.1.14

All the above commands run only in diag or advanced mode; they will not run in normal mode.
Hope this blog helps you play with some of NetApp's hidden commands.




Monday, 9 April 2012

Fractional reserve & space reserve


Fractional reserve:
Fractional reserve is a volume option that reserves space inside the volume for snapshot overwrites. By default it is 100%, but with the autodelete functionality NetApp recommends setting fractional reserve to 0. As soon as the first snapshot copy is created, the fractional reserve is set aside automatically, and it is used only when the volume is 100% full.
The amount of reserved space can be seen with the -r option of the df command.

Snap reserve:
Snap reserve is space reserved for storing snapshot copies; since snapshot copies also require space, they consume the snap reserve. By default it is 20% of the volume. Once the snap reserve fills up, snapshots automatically start consuming space from the volume itself, so snap reserve is really just a logical separation of space within the volume.
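As a rough sketch of that accounting (the function name and numbers are illustrative, not a NetApp API):

```python
# Toy sketch of snap reserve accounting, assuming the default 20% reserve.
# Names and numbers are illustrative only, not an ONTAP interface.

def usable_space(volume_size_gb, snap_reserve_pct=20):
    """Split a volume into active-file-system space and snap reserve."""
    reserve = volume_size_gb * snap_reserve_pct / 100
    return volume_size_gb - reserve, reserve

active, reserve = usable_space(100)  # 100 GB volume, default 20% snap reserve
print(active, reserve)               # 80.0 GB for data, 20.0 GB for snapshots
```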

Space reclamation:
Space reclamation is the process of reclaiming free space in a LUN on the storage side. For example, suppose we fill a 100 GB LUN with 50 GB of data: the LUN shows 50% utilized on the host side and 50% utilized on the storage side. But if we then delete all 50 GB of data from the LUN, the host side shows 0% utilized while the storage side still shows 50% utilized, because the array is never told that the blocks were freed. Space reclamation reclaims that free space on the storage side; SnapDrive does a good job of reclaiming space from the storage end.
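The host-versus-array bookkeeping described above can be sketched as a toy model (the Lun class and its methods are hypothetical, not a real API):

```python
# Toy model: the array only sees writes, not file deletions, until blocks
# are explicitly reclaimed (e.g. via SnapDrive). Hypothetical class, not an API.

class Lun:
    def __init__(self, size_gb):
        self.size_gb = size_gb
        self.host_used = 0    # what the host file system reports
        self.array_used = 0   # blocks the array has seen written

    def write(self, gb):
        self.host_used += gb
        self.array_used += gb  # writing dirties blocks on the array too

    def delete(self, gb):
        self.host_used -= gb   # the array is NOT told about the deletion

    def reclaim(self):
        self.array_used = self.host_used  # reclamation re-syncs the two views

lun = Lun(100)
lun.write(50)
lun.delete(50)
print(lun.host_used, lun.array_used)  # 0 on the host, still 50 on the array
lun.reclaim()
print(lun.array_used)                 # 0 after reclamation
```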

Lun reservation:
LUN reservation (not to be confused with SCSI2 or 3 logical unit locking reservations) determines when space for the LUN is reserved or allocated from the volume. With reservations enabled (default) the space is subtracted from the volume total when the LUN is created. For example, if a 20GB LUN is created in a volume having 80GB of free space, the free space will go to 60GB free space at the time the LUN is created even though no writes have been performed to the LUN. If reservations are disabled, space is first taken out of the volume as writes to the LUN are performed. If the 20GB LUN was created without LUN space reservation enabled, the free space in the volume would remain at 80GB and would only go down as data was written to the LUN.  
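The 80 GB / 20 GB arithmetic above can be sketched as follows (hypothetical helper, not a NetApp API):

```python
# Sketch of the volume free-space arithmetic from the example above.
# Hypothetical function, for illustration only.

def free_after_lun_create(vol_free_gb, lun_size_gb, reservation=True):
    """With reservation on, the whole LUN is subtracted at creation time;
    with it off, nothing is subtracted until data is actually written."""
    return vol_free_gb - lun_size_gb if reservation else vol_free_gb

print(free_after_lun_create(80, 20, reservation=True))   # 60
print(free_after_lun_create(80, 20, reservation=False))  # 80
```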

Tuesday, 3 April 2012

iSCSI


iSCSI
iSCSI is a protocol that runs on top of standard TCP/IP networks. It uses Ethernet cabling to communicate with hosts, so it is cheaper than the FC protocol, because FC cables cost more than Ethernet cables.
In iSCSI you should be clear about the terms initiator and target: you should know what an initiator is and what a target is.
Initiators and targets
The initiator is the side that initiates the conversation between your host computer and the storage device: the Ethernet port on the host is the initiator. The target is the side that accepts the connection: the storage's Ethernet ports are the target ports.
IQN
One more thing to understand is the IQN. Each iSCSI node has its own IQN: the iSCSI initiator service on the host creates one automatically, and the iSCSI target on the storage has its own. If you change the hostname of the storage, its IQN may change. The point is that iSCSI nodes have their own IQNs and they are unique.
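IQNs follow the standard format from RFC 3720: iqn.yyyy-mm.reverse-domain, optionally followed by a colon and an identifier. A minimal sketch that checks whether a string has that shape (the sample IQN below is made up):

```python
import re

# Loose check for the standard IQN shape: iqn.yyyy-mm.reversed.domain[:identifier]
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.\-]+(:.+)?$")

def looks_like_iqn(name):
    return bool(IQN_RE.match(name.lower()))

print(looks_like_iqn("iqn.1992-08.com.netapp:sn.12345678"))  # True
print(looks_like_iqn("not-an-iqn"))                          # False
```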
DataDomain or Domain
In a basic iSCSI SAN, a storage array advertises its SCSI LUNs to the network (the targets), and clients run an iSCSI driver (the initiators) that looks for those LUNs. In a larger setup with, say, fifty or more clients or storage devices or both, you probably don't want every client to see every storage device. It makes sense to block off what each host can see and which storage devices it has the potential of using. This is accomplished by registering the names of the initiators and targets in a central location, and then pairing them into groups. A logical grouping, called a data domain, partitions the registered initiators and targets into more manageable groups.

Monday, 2 April 2012

Setting up a Brocade Switch.



Step 1: Log in with the default credentials.
The default username is admin and the default password of the Brocade switch is password.
Step 2: Name the switch with the "switchName" command.
Step 3: Set the Domain ID.
To set the domain ID you first need to disable the switch; when you disable it, all the ports showing a green light will turn amber.
The command to disable the switch is "switchDisable".
Then enter the "configure" command to configure the switch.
The configure command asks a number of questions: answer "yes" to "Fabric parameters", leave the rest at their defaults, and enter the desired domain ID (for example, 1) in the domain ID field.
Step 4: Enable the switch with its new domain ID.
Enter the "switchEnable" command after you finish configuring; the switch will then reboot.
After the reboot, enter the "logout" command to log out of the switch.

How to create Zone in Brocade switch.




Terminology
HBA - Host Bus Adapter, which in this case, refers to the Fibre Channel Card. In LAN networking, it’s analogous to an Ethernet card.
WWN - World Wide Name, a unique 8-byte number identifying the HBA. In Ethernet networking, it’s analogous to the MAC address.
FC Zone - Fibre Channel Zone, a partitioned subset of the fabric. Members of a zone are allowed to communicate with each other, but devices are not allowed to communicate across zones. An FC Zone is loosely analogous to a VLAN.
Steps to Zone Brocade Switch
  1. Plug in the FC Connector into an open port on the switch.
  2. Login to the server and verify the HBA connection. It should see the switch but not the storage device.
  3. Login to the Brocade Switch GUI interface. You’ll need Java enabled on your browser.
  4. Check the Brocade Switch Port.
    1. On the visual depiction of the switch, click on the port where you plugged in the FC connector.
    2. The Port Administration Services screen should pop up. You’ll need to enable the pop-up.
    3. Verify that the Port Status is “Online”. Note the port number.
    4. Close the Port Administration Services screen.
  5. Find the WWN of your new device
    1. Navigate back to the original GUI page.
    2. Select Zone Admin, an icon on the bottom left of the screen. It looks like two squares and a rectangle.
    3. Expand the Ports & Attaching Devices under the Member Selection List.
    4. Expand the appropriate port number. Note the attached WWN.
  6. Create a new alias for this device
    1. Click New Alias button
    2. Follow menu instructions
  7. Add the appropriate WWN to the alias
    1. Select your new device name from the Name drop down menu
    2. Expand the WWNs under Member Selection List
    3. Highlight the appropriate WWN
    4. Select Add Member
  8. Add the alias to the appropriate zone
    1. Select the Zone tab
    2. Select the appropriate zone from the Name drop down menu
    3. Select the appropriate alias from the Member Selection List
    4. Click Add Member
  9. Ensure that the zone is in Zone Config in the Zone Config tab
  10. Save your changes by selecting Zoning Actions -> Enable Config
  11. Login back in to the server to verify. It should now see the storage devices.
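For reference, the same zoning can also be done from the Brocade FOS command line with alicreate, zoneadd, cfgadd, and cfgenable. A small sketch that builds those commands from an alias, WWN, zone, and config name (all the names and the WWN below are hypothetical examples):

```python
# Sketch: FOS CLI equivalents of the GUI zoning steps above.
# The alias, zone, config names and the WWN are hypothetical examples.

def zoning_commands(alias, wwn, zone, config):
    return [
        f'alicreate "{alias}", "{wwn}"',   # steps 6-7: alias for the WWN
        f'zoneadd "{zone}", "{alias}"',    # step 8: add the alias to the zone
        f'cfgadd "{config}", "{zone}"',    # step 9: zone into the zone config
        'cfgsave',                         # persist the changes
        f'cfgenable "{config}"',           # step 10: activate the config
    ]

for cmd in zoning_commands("host1_hba0", "10:00:00:00:c9:12:34:56",
                           "host1_storage_zone", "prod_cfg"):
    print(cmd)
```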

NetApp SnapVault


SnapVault is a heterogeneous disk-to-disk backup solution for NetApp filers and systems running other OSes (Solaris, HP-UX, AIX, Windows, and Linux). In the event of data loss or corruption on a filer, backed-up data can be restored from the SnapVault secondary storage system with less downtime and less of the uncertainty associated with conventional tape backup and restore operations. Snapshot technology is used for SnapVault operations.
SnapVault can run between two systems, either NetApp filer to filer or from Unix/Windows servers to a NetApp filer. When snapvaulting from a server to a filer we need to install the SnapVault agent on the server; the agent is called OSSV (Open Systems SnapVault).
SnapVault works on a client/server model: the SnapVault client is the system whose data is to be backed up, and the SnapVault server is where the client's data gets backed up.
SnapVault is built on snapshot technology: first a baseline transfer happens, then incremental backups. Snapshots of the data are backed up, and restoring the backed-up data is simple: mount the backup volume via NFS or CIFS and copy the data.
SnapVault requires two licenses, one for the primary site and one for the secondary site.

Steps to configure the snapvault between the two netapp storage.
Step 1. Add the license on primary filer and secondary filer.
Filer1> license add xxxxxxx
Filer2> license add xxxxxxx
Step 2. Enable SnapVault on the primary filer and grant the secondary filer access to it.
Filer1> options snapvault.enable on
Filer1> options snapvault.access host=filer2
Step 3. Enable SnapVault on the secondary filer and grant the primary filer access to it.
Filer2> options snapvault.enable on
Filer2> options snapvault.access host=filer1
Now let the destination volume on filer2, where all the backups land, be vipuldest, and let the source volume on filer1, whose backup is to be taken, be vipulsource. So the destination path is filer2:/vol/vipuldest and the source path is filer1:/vol/vipulsource/qtree1.
Step 4.  We need to disable the snapshot schedule on the destination volume. Snap vault will manage the destination snapshot schedule.
Filer2> snap sched vipuldest 0 0 0
Step 5.  Do the initial baseline backup
Filer2> snapvault start -S filer1:/vol/vipulsource/qtree1 filer2:/vol/vipuldest/qtree1
Step 6. Create the snapshot schedules for the SnapVault backup on the source and destination filers.
On the source we create fewer retention schedules; on the destination we can create more.
On the source we will keep 2 hourly, 2 daily, and 2 weekly copies; on the destination, 6 hourly, 14 daily, and 6 weekly.
Note: the snapshot names should be prefixed with "sv_"
Filer1> snapvault snap sched vipulsource sv_hourly 2@0-22
Filer1> snapvault snap sched vipulsource sv_daily 2@23
Filer1> snapvault snap sched vipulsource sv_weekly 2@21@sun
Step 7. Create the schedules on the destination.
Filer2> snapvault snap sched vipuldest sv_hourly 6@0-22
Filer2> snapvault snap sched vipuldest sv_daily 14@23@sun-fri
Filer2> snapvault snap sched vipuldest sv_weekly 6@23@sun
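The schedule argument in these commands has the shape count@list[@list]. A minimal parser sketch, assuming numeric fields are hour lists and named fields are day lists (e.g. "14@23@sun-fri" keeps 14 copies, taken at 23:00 on Sunday through Friday):

```python
# Sketch of the snapvault "count@list[@list]" schedule argument, assuming
# numeric fields are hours and named fields are days. Illustrative only.

def parse_sched(spec):
    fields = spec.split("@")
    parsed = {"retain": int(fields[0])}   # how many copies to keep
    for field in fields[1:]:
        if field[0].isdigit():
            parsed["hours"] = field       # e.g. "0-22" or "23"
        else:
            parsed["days"] = field        # e.g. "sun-fri"
    return parsed

print(parse_sched("2@0-22"))         # {'retain': 2, 'hours': '0-22'}
print(parse_sched("14@23@sun-fri"))  # {'retain': 14, 'hours': '23', 'days': 'sun-fri'}
```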

To check the status, use the snapvault status command on either the source or the destination.