Monday, 23 January 2012

Initial configuration of NetApp filer


Hello friends, as I promised, here is the separate blog on how to install and configure a new NetApp storage system.
Once you power on the storage system it starts the boot process. At one point it prompts you to press Ctrl-C for the maintenance mode; do not press it. The system then continues booting and asks for the initial setup information, which I describe step by step below.

Steps
1. Please enter the new hostname.
You can name this host whatever you wish (for example, host1).
2. Do you want to configure interface groups?
You can type either y or n at this prompt.
If you type y, you are prompted to enter additional configuration information for each interface group.
These prompts are:
• Number of interface groups to configure.
• Name of interface group.
• Is interface_group_name a single [s], multi [m], or a lacp [l] interface group?
• Number of links for interface_group_name.
• Name of link for interface_group_name.
If you have additional links, you should also enter their names here.
• IP address for interface_group_name.
• Netmask for interface_group_name.
• Should interface group interface_group_name take over a partner interface group during failover?
• Media type for interface_group_name.
If you type n, you are directed to the next prompt.
3. Please enter the IP address for Network Interface e0a
Enter the correct IP address for the network interface that connects the storage system to your
network (for example, 192.168.1.1).
4. Please enter the netmask for Network Interface e0a.
After entering the IP address, you need to enter the netmask for your network (for example,
255.255.255.0).
5. Should interface e0a take over a partner IP address during failover?
Type either “y” or “n” at this prompt.
If you type y, you are prompted to enter the address or interface name to be taken over by e0a.
Note: If you type y, you must already have purchased a license for controller failover to
enable this function.
If you type n, you are directed to the next prompt.
6. Please enter media type for e0a (100tx-fd, tp-fd, 100tx, tp, auto
(10/100/1000))
Enter the media type that this interface should use.
7. Please enter flow control for e0a {none, receive, send, full} [full]
Enter the flow control option that this interface should use.
8. Do you want e0a to support jumbo frames? [n]
Specify whether you want this interface to support jumbo frames.
9. Continue to enter network parameter values for each network interface when prompted.
10. Would you like to continue setup through the Web interface?
If you type y, continue setup with the Setup Wizard in a Web browser.
If you type n, continue with the command-line interface and proceed to the next step.
11. Please enter the name or IP address of the default gateway.
Enter the primary gateway that is used to route outbound network traffic.
13. Please enter the name or IP address for administrative host.
The administration host is given root access to the storage system's /etc files for system administration.
To allow /etc root access to all NFS clients enter RETURN below.
Attention: If you change the name or IP address of an administration host on a storage system
that has already been set up and configured, the /etc/exports files will be overwritten on system
reboot.
14. Please enter the IP address for (name of admin host).
Enter the IP address of the administration host you specified earlier (for example, 192.175.4.1).
Note: The name listed here is the name of the host entered in the previous step.
15. Please enter timezone
GMT is the default setting. Select a valid value for your time zone and enter it here.
16. Where is the filer located?
This is the actual physical location where the storage system resides (for example, Bldg. 4, Floor
2, Room 216) .
17. What language will be used for multiprotocol files?
Enter the language.
18. Enter the root directory for HTTP files
This is the root directory for the files that the storage system will serve through HTTP or HTTPS.
19. Do you want to run DNS resolver?
If you type y at this prompt, you need the DNS domain name and associated IP address.
20. Do you want to run NIS client?
If you type y at this prompt, you will be prompted to enter the name of the NIS domain and the
NIS servers.
When you have finished with the NIS prompts, you see an advisory message regarding
AutoSupport and you are prompted to continue.
21. Would you like to configure the BMC LAN interface ?
If you have a BMC installed in your system and you want to use it, type y at the prompt and enter
the BMC values you collected.
22. Would you like to configure the RLM LAN interface ?
If you have an RLM installed in your system and you want to use it, type y at the prompt and
enter the RLM values you collected.
23. Do you want to configure the Shelf Alternate Control Path Management
interface for SAS shelves ?
If you are planning to attach DS4243 disk shelves to your system, type y at the prompt and enter
the ACP values you collected.
24. Setting the administrative (root) password for new_system_name...
New password:
Retype new password:
Enter the new root password.
25. When setup is complete, to transfer the information you've entered to the storage system, enter
the following command, as directed by the prompt on the screen.
reboot
Attention: If you do not enter reboot, the information you entered does not take effect and is
lost.
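For reference, most of these values end up as commands in /etc/rc (and entries in /etc/hosts) on the root volume, and they can also be changed later from the command line. A minimal sketch of what the equivalent commands look like, with example values only (replace the hostname, IPs, and domain with your own):

    hostname host1
    ifconfig e0a 192.168.1.1 netmask 255.255.255.0 mediatype auto flowcontrol full
    route add default 192.168.1.254 1
    options dns.domainname example.com
    options dns.enable on

You can also simply re-run the setup command later to go through the same prompts again; as noted above, the new values still need a reboot to take full effect.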


Tech refresh from FAS2020 to FAS2040

Hello friends, today I want to write about a project I got: a NetApp tech refresh from a FAS2020 to a FAS2040 with one DS4243 disk shelf, the mistakes I made along the way, and the things I learned from those mistakes. I want to share it so you can all learn from them.

The task: replace a FAS2020 (single controller, fully populated with internal disks) with a FAS2040 dual-controller system and add one new half-populated DS4243 disk shelf. All the internal disks from the FAS2020 had to move to the FAS2040, the volumes had to be migrated from one controller to the other, and three of the volumes contained LUNs that needed to be remapped.
There was one aggregate, aggr0, with 4 volumes, 3 of which contained LUNs, and all the volumes were about 97% full. I mention the 97% because it caused me a lot of problems while migrating the data from one filer to the other.
I was told to assign all the FAS2020 internal disks to one FAS2040 controller and the new disk shelf disks to the other controller.
I hope the task is clear (my English is not so good, so I am sorry if anything is hard to follow).
Solution: I will explain everything phase by phase.
Phase 1: First I brought the FAS2020 up to the same Data ONTAP version the FAS2040 would run, upgrading from 7.3.3 to 7.3.6.
After the upgrade I checked that everything was fine and then halted the system.
Phase 2: I unplugged and unracked the FAS2020, then racked the new FAS2040 and the DS4243 disk shelf.
I removed all the disks from the FAS2020 and inserted them into the FAS2040.
Then I cabled the FAS2040 and the disk shelf: powered on the shelf first, set its shelf ID, and power-cycled the shelf.
Next I booted the FAS2040 into maintenance mode and assigned all the old FAS2020 internal disks to one of the FAS2040 controllers.
I hope you know how to assign disks to a new controller; here is the command anyway:
disk reassign -s old_system_id -d new_system_id
With that, all the old FAS2020 disks got assigned to the new FAS2040.
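For completeness, the surrounding maintenance-mode steps looked roughly like this (a sketch only; the system IDs below are placeholders that you read from the disk show output):

    *> disk show -v                                  (note the old system ID that still owns the disks)
    *> disk reassign -s 0151234567 -d 0159876543     (old FAS2020 system ID, then the new FAS2040 controller's system ID)
    *> disk show -v                                  (confirm the disks now list the new owner)
    *> halt                                          (leave maintenance mode)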
Then I halted the system to leave maintenance mode.
When I booted it again, all the configuration came across as-is to the new controller: the volumes and the aggregate were detected by the new system, and even the hostname came back unchanged.
Then we attached the new disk shelf to the FAS2040 and assigned all the new disks to the other controller.
We configured that controller the way you do any initial configuration. I will not repeat the steps here, as I will be writing a separate blog on the initial configuration of a new filer covering each and every option.

So coming back: I did the initial configuration of the new filer and installed the licenses.

Phase 3: Data migration.
We created all 4 volumes, of the same sizes, on the new filer and set up SnapMirror relationships between the two filers, because we were doing the data migration with SnapMirror.
But SnapMirror was not working; it kept giving the error "snapmirror is misconfigured or the source volume may be busy".
We could not understand this error at first.
What we did was change "options snapmirror.access legacy" to "options snapmirror.access host=<IP of the source>" on the destination filer and "options snapmirror.access host=<IP of the destination>" on the source filer (a sketch of the commands is below).
After doing this we ran SnapMirror again, and the transfer of the first volume, which was only 2 GB, completed almost instantly.
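For reference, a minimal sketch of the SnapMirror setup we used, with made-up filer names, IPs, and volume names:

    source_filer> options snapmirror.enable on
    source_filer> options snapmirror.access host=192.168.1.20     (allow the destination filer)
    dest_filer> options snapmirror.enable on
    dest_filer> options snapmirror.access host=192.168.1.10       (allow the source filer)
    dest_filer> vol restrict vol1                                  (the destination volume must be restricted)
    dest_filer> snapmirror initialize -S source_filer:vol1 dest_filer:vol1
    dest_filer> snapmirror status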
Then we tried the SnapMirror of the 2nd volume, which was 343 GB, and we got the same error again.
Here we struggled a lot to figure out the problem, but we eventually found and resolved it. As I mentioned before, the volumes were 97% full and there was almost no space left in the aggregate either. Because of that, the volume was not able to create its initial SnapMirror snapshot (most likely that was the problem).
So we ran one command for that volume: "vol options vol_name fractional_reserve 0".
After running that command we tried again to take a snapshot of the volume manually, and this time the snapshot got created.
So we recreated the SnapMirror relationship and this time the transfer started.
We did the same thing for the other volume, the SnapMirror of the 3rd volume started as well, and so all 4 volumes got transferred from one filer to the other (a sketch of the commands we used for the stuck volumes follows).
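A rough sketch of what we ran for the volumes that failed on the initial snapshot, again with placeholder names:

    source_filer> vol options vol2 fractional_reserve 0       (stop reserving overwrite space for the space-reserved LUN)
    source_filer> snap create vol2 test_snap                  (verify a snapshot can now be created)
    source_filer> snap delete vol2 test_snap
    dest_filer> snapmirror initialize -S source_filer:vol2 dest_filer:vol2
    dest_filer> snapmirror status -l vol2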
Phase 4: mapping the transferred volumes and LUNs.
As I told you, there were LUNs in three of the volumes, so we needed to map those LUNs back.
These LUNs were presented to VMware; they were datastores in VMware.
We broke the SnapMirror relationships, took the source volumes offline, and brought the destination volumes online.
Then we created the igroups and mapped the LUNs to them (a sketch is below).
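For reference, a minimal sketch of the igroup creation and LUN mapping, with placeholder igroup names, WWPNs, and LUN paths:

    filer> igroup create -f -t vmware esx_hosts 50:01:43:80:01:23:45:67
    filer> igroup add esx_hosts 50:01:43:80:01:23:45:69        (second ESX HBA, if any)
    filer> lun online /vol/vol1/lun1                           (bring the LUN online if it shows offline after the break)
    filer> lun map /vol/vol1/lun1 esx_hosts 0
    filer> lun show -m                                         (verify the mapping)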
But when we rescanned from the VMware side we could not find those disks (again a new problem arose).
We checked the FC connectivity with the command "fcadmin config"; it showed that both ports were down, so we checked whether the FC cabling was done properly and tried to bring the ports up manually.
That gave an error telling us to start the FCP service, so we tried to start it, but the service would not start. We then compared the licenses on the two filers; they were different, so we installed the same licenses on both (the cluster was enabled).
We tried to start the service again and this time got the error that FCP is misconfigured and that the fcp cfmode is misconfigured on its partner.
We checked with "fcp show cfmode" and found that one filer was set to "single_image" and the other to "standby".
So we went into advanced mode and changed the setting with "fcp set cfmode single_image" on the filer that was set to "standby" mode.
After setting that we started the FCP service again; it started, the ports came online, and the LUNs became visible to VMware. The commands are sketched below.
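A sketch of the checks and the cfmode fix as I remember them (run on the filer that was in standby mode; the exact prompts may differ):

    filer> fcadmin config                        (check whether the onboard FC ports are configured as target and online)
    filer> fcp show cfmode
    filer> priv set advanced
    filer*> fcp set cfmode single_image          (must match the cfmode on the partner)
    filer*> priv set
    filer> fcp start
    filer> fcp status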
With that, the problem was resolved and the project was completed successfully.




Tuesday, 17 January 2012

Procedure to download & upgrade the NetApp disk firmware


Please find below the procedure to download the latest disk firmware from the NetApp NOW site.

1. Log in to the NetApp NOW site (now.netapp.com).
2. Go to the Download tab.
3. Under the Download tab you will find the Firmware tab.
4. Under the Firmware tab, click the Disk Drive & Firmware Matrix tab.

Procedure to upgrade the Disk firmware.

Note 1: Schedule the disk firmware update during times of minimal usage of the filer as this activity is intrusive to the normal processing of disk I/O.

Note 2: Since updating the disk firmware involves spinning the drive down and back up again, a volume can go offline if 2 or more drives of the same type in the same RAID group are upgraded at once. This is because a disk firmware upgrade runs on all drives of the same type in parallel. In a situation like this, it is best to schedule a maintenance window to upgrade the disk firmware on all disks.

Option #1 - Upgrade the Disk Firmware in the Background

1. Check or set the raid.background_disk_fw_update.enable option to on
To check:   options raid.background_disk_fw_update.enable
To set:     options raid.background_disk_fw_update.enable on
2. Place the new disk firmware file into the /etc/disk_fw directory

The system will recognize the new available version and will non-disruptively upgrade all the disks requiring a firmware update to that new version in the background.
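As an illustration, assuming the downloaded firmware is a .LOD file and the admin host has the filer's root volume mounted (the file name and mount path below are examples only):

    admin_host# cp <disk_model>.<new_rev>.LOD /mnt/filer_root/etc/disk_fw/
    filer> options raid.background_disk_fw_update.enable       (confirm it is on)
    filer> sysconfig -a                                         (disk model and firmware revision appear in the disk lines, so you can watch them change)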

Option #2 – Upgrade the disk firmware during a system reboot
1. Check or set the raid.background_disk_fw_update.enable option to off
To check:   options raid.background_disk_fw_update.enable
To set:     options raid.background_disk_fw_update.enable off
2. Place the new disk firmware file into the /etc/disk_fw directory

Schedule a time to perform a system reboot. Note that the reboot will take longer as the reboot process will be suspended while the disk firmware is upgraded. Once that is complete the system will continue booting into Data ONTAP.

Option #3 – Upgrade the disk firmware manually
1. Check or set the raid.background_disk_fw_update.enable option to off
To check:   options raid.background_disk_fw_update.enable
To set:     options raid.background_disk_fw_update.enable off
2. Place the new disk firmware file into the /etc/disk_fw directory
3. Issue the disk_fw_update command
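Putting Option #3 together, a sketch of the manual flow (again with a placeholder firmware file name):

    filer> options raid.background_disk_fw_update.enable off
    (copy <disk_model>.<new_rev>.LOD into /etc/disk_fw on the root volume, as above)
    filer> disk_fw_update                                         (updates all disks that need the new firmware; disk I/O is disrupted while drives spin down and up)
    filer> options raid.background_disk_fw_update.enable on       (re-enable background updates afterwards if you want them)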


The above are the three procedures for upgrading the disk firmware.


Manual failover activity in a NetApp MetroCluster environment


Today I want to write about manually performing the takeover and giveback activity in a NetApp MetroCluster environment.
In a MetroCluster environment a site takeover does not happen just by issuing the cf takeover command.
Takeover process:
We need to manually fail the ISL links.
Then we need to issue the "cf forcetakeover -d" command.
A quick sketch is below; the detailed step-by-step procedure with console output follows later in this post.
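At a high level it looks like this (the switch names, port number, and node name are the ones used in the detailed example below):

    SITEA02:admin> portdisable 8           (block the ISL port on the first fabric switch at the surviving site)
    SITEA03:admin> portdisable 8           (and on the second fabric switch)
    NetAppSiteA> cf forcetakeover -d       (force takeover of the disaster-site node)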
Giveback process:
aggr status -r : Validate that you can access the remote storage. If the remote shelves do not show up, check connectivity.
partner : Go into partner mode on the surviving node.
aggr status -r : Determine which aggregates are at the surviving site and which are at the disaster site. Aggregates at the disaster site show plexes in a failed state with an out-of-date status; aggregates at the surviving site show their plex online.
Note: If aggregates at the disaster site are online, take them offline by entering the following command for each online aggregate: aggr offline disaster_aggr (disaster_aggr is the name of the aggregate at the disaster site). An error message appears if the aggregate is already offline.
aggr mirror aggr_name -v disaster_aggr : Recreate the mirrored aggregates by entering this command for each aggregate that was split. aggr_name is the aggregate on the surviving site's node and disaster_aggr is the aggregate on the disaster site's node; aggr_name rejoins disaster_aggr to reestablish the MetroCluster configuration. Caution: make sure that resynchronization is complete on each aggregate before attempting the next step.
partner : Return to the command prompt of the remote node.
cf giveback : The node at the disaster site reboots.

Step by step Procedure



Description
To test Disaster Recovery, you must restrict access to the disaster site node to prevent the node from resuming service.  If you do not, you risk the possibility of data corruption.
Procedure
Access to the disaster site node can be restricted in the following ways:
  • Turn off the power to the disaster site node.

    Or
  • Use "manual fencing" (disconnect the VI interconnects and Fibre Channel cables).

However, both of these solutions require physical access to the disaster site node, which is not always possible (or practical) for testing purposes.
The steps below "fence" the fabric MetroCluster without pulling power, so you can test Disaster Recovery without physical access.

Note: Site A is the takeover site. Site B is the disaster site.

Takeover procedure
  1. Stop the ISL connections between the sites.
  • Connect to both fabric MetroCluster switches at site A and block all ISL ports. First retrieve the ISL port number with switchshow.

    SITEA02:admin> switchshow
    switchName:     SITEA02
    switchType:     34.0
    switchState:    Online
    switchMode:     Native
    switchRole:     Principal
    switchDomain:   2
    switchId:       fffc02
    switchWwn:      10:00:00:05:1e:05:ca:b1
    zoning:         OFF
    switchBeacon:   OFF

    Area Port Media Speed State     Proto
    =====================================
      0   0    id    N4   Online    F-Port  21:00:00:1b:32:1f:ff:66
      1   1    id    N4   Online    F-Port  50:0a:09:82:00:01:d7:40
      2   2    id    N4   Online    F-Port  50:0a:09:80:00:01:d7:40
      3   3    id    N4   No_Light
      4   4    id    N4   No_Light
      5   5    id    N2   Online    L-Port  28 public
      6   6    id    N2   Online    L-Port  28 public
      7   7    id    N2   Online    L-Port  28 public
      8   8    id    N4   Online    LE E-Port  10:00:00:05:1e:05:d0:39 "SITEB02" (downstream)
      9   9    id    N4   No_Light
     10  10    id    N4   No_Light
     11  11    id    N4   No_Light
     12  12    id    N4   No_Light
     13  13    id    N2   Online    L-Port  28 public
     14  14    id    N2   Online    L-Port  28 public
     15  15    id    N4   No_Light
  • Check fabric before blocking the ISL port.   
     
SITEA02:admin> fabricshow
Switch ID   Worldwide Name         Enet IP Addr   FC IP Addr  Name
-------------------------------------------------------------------------
1: fffc01 10:00:00:05:1e:05:d0:39  44.55.104.20   0.0.0.0     "SITEB02"
2: fffc02 10:00:00:05:1e:05:ca:b1  44.55.104.10   0.0.0.0     >"SITEA02"
 
The Fabric has 2 switches
  • Disable the ISL port.

    SITEA02:admin> portdisable 8
  • Check split of the fabric.

    SITEA02:admin> fabricshow
    Switch ID   Worldwide Name      Enet IP Addr    FC IP Addr   Name
    -----------------------------------------------------------------------
    2: fffc02 10:00:00:05:1e:05:ca:b1 44.55.104.10    0.0.0.0    >"SITEA02"
  • Do the same thing on the second switch.

    SITEA03:admin> switchshow
    switchName:     SITEA03
    switchType:     34.0
    switchState:    Online
    switchMode:     Native
    switchRole:     Principal
    switchDomain:   4
    switchId:       fffc04
    switchWwn:      10:00:00:05:1e:05:d2:90
    zoning:         OFF
    switchBeacon:   OFF

    Area Port Media Speed State     Proto
    =====================================
      0   0   id    N4   Online     F-Port  21:01:00:1b:32:3f:ff:66
      1   1   id    N4   Online     F-Port  50:0a:09:83:00:01:d7:40
      2   2   id    N4   Online     F-Port  50:0a:09:81:00:01:d7:40
      3   3   id    N4   No_Light
      4   4   id    N4   No_Light
      5   5   id    N2   Online     L-Port  28 public
      6   6   id    N2   Online     L-Port  28 public
      7   7   id    N2   Online     L-Port  28 public
      8   8   id    N4   Online     LE E-Port  10:00:00:05:1e:05:d1:c3 "SITEB03" (downstream)
      9   9   id    N4   No_Light
     10  10   id    N4   No_Light
     11  11   id    N4   No_Light
     12  12   id    N4   No_Light
     13  13   id    N2   Online     L-Port  28 public
     14  14   id    N2   Online     L-Port  28 public
     15  15   id    N4   No_Light
SITEA03:admin> fabricshow
Switch ID   Worldwide Name          Enet IP Addr  FC IP Addr Name
-----------------------------------------------------------------------
  3: fffc03 10:00:00:05:1e:05:d1:c3 44.55.104.21  0.0.0.0    "SITEB03"
  4: fffc04 10:00:00:05:1e:05:d2:90 44.55.104.11  0.0.0.0    >"SITEA03"

The Fabric has 2 switches

SITEA03:admin> portdisable 8
SITEA03:admin> fabricshow
Switch ID   Worldwide Name          Enet IP Addr  FC IP Addr Name
-----------------------------------------------------------------------
  4: fffc04 10:00:00:05:1e:05:d2:90 44.55.104.11  0.0.0.0    >"SITEA03"
  • Check the NetApp controller console for disks missing.

    Tue Feb  5 16:21:37 CET [NetAppSiteA: raid.config.spare.disk.missing:info]: Spare Disk SITEB03:6.23 Shelf 1 Bay 7 [NETAPP   X276_FAL9E288F10 NA02] S/N [DH07P7803V7L] is missing. 
  2. Check that all aggregates are split.

    NetAppSiteA> aggr status -r
    Aggregate aggr0 (online, raid_dp, mirror degraded) (block checksums)
    Plex /aggr0/plex0 (online, normal, active, pool0)
    RAID group /aggr0/plex0/rg0 (normal)

    RAID Disk Device     HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)   Phys (MB/blks)
    ---------------------------------------------------------------------------------------
    dparity SITEA03:5.16 0b  1     0   FC:B  0  FCAL  10000 272000/557056000 280104/573653840
    parity  SITEA02:5.32 0c  2     0   FC:A  0  FCAL  10000 272000/557056000 280104/573653840
    data    SITEA03:6.16 0d  1     0   FC:B  0  FCAL  10000 272000/557056000 280104/573653840

    Plex /aggr0/plex1 (offline, failed, inactive, pool1)
    RAID group /aggr0/plex1/rg0 (partial)

    RAID Disk Device HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks) Phys (MB/blks)
    ---------------------------------------------------------------------------------
    dparity   FAILED                     N/A            272000/557056000
    parity    FAILED                     N/A            272000/557056000
    data      FAILED                     N/A            272000/557056000
    Raid group is missing 3 disks.
    NetAppSiteB> aggr status -r
    Aggregate aggr0 (online, raid_dp, mirror degraded) (block checksums)
    Plex /aggr0/plex0 (online, normal, active, pool0)
    RAID group /aggr0/plex0/rg0 (normal)

    RAID Disk Device        HA SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)   Phys (MB/blks)
    ------------------------------------------------------------------------------------------
    dparity   SITEB03:13.17 0d   1   1   FC:B  0  FCAL  10000 272000/557056000 280104/573653840
    parity    SITEB03:13.32 0b   2   0   FC:B  0  FCAL  10000 272000/557056000 280104/573653840
    data      SITEB02:14.16 0a   1   0   FC:A  0  FCAL  10000 272000/557056000 280104/573653840

    Plex /aggr0/plex1 (offline, failed, inactive, pool1)
    RAID group /aggr0/plex1/rg0 (partial)

    RAID Disk Device        HA SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
    --------------------------------------------------------------------------------------
    dparity   FAILED                          N/A             72000/557056000
    parity    FAILED                          N/A             72000/557056000
    data      FAILED                          N/A             72000/557056000
    Raid group is missing 3 disks.
  3. Connect to the Remote LAN Management (RLM) console on site B. Stop and power off the NetApp controller.

    NetAppSiteB> halt
    Boot Loader version 1.2.3
    Copyright (C) 2000,2001,2002,2003 Broadcom Corporation.
    Portions Copyright (C) 2002-2006 NetApp Inc.

    CPU Type: Dual Core AMD Opteron(tm) Processor 265
    LOADER>
  • Power off the NetApp controller.

    LOADER>
    Ctrl-d
    RLM NetAppSiteB> system power off
    This will cause a dirty shutdown of your appliance.  Continue? [y/n]

    RLM NetAppSiteB> system power status
    Power supply 1 status:
       Present: yes
       Turned on by Agent: no
       Output power: no
       Input power: yes
       Fault: no
    Power supply 2 status:
       Present: yes
       Turned on by Agent: no
       Output power: no
       Input power: yes
       Fault: no
       4.   Now you can test Disaster Recovery.
NetAppSiteA> cf forcetakeover -d
----
NetAppSiteA(takeover)>

NetAppSiteA(takeover)> aggr status -v
Aggr State     Status           Options
aggr0 online   raid_dp, aggr    root, diskroot, nosnap=off,
               mirror degraded  raidtype=raid_dp, raidsize=16,
                                ignore_inconsistent=off,
                                snapmirrored=off,
                                resyncsnaptime=60,
                                fs_size_fixed=off,
                                snapshot_autodelete=on,
                                lost_write_protect=on
                Volumes: vol0

                Plex /aggr0/plex0: online, normal, active
                    RAID group /aggr0/plex0/rg0: normal

                Plex /aggr0/plex1: offline, failed, inactive

NetAppSiteB/NetAppSiteA> aggr status -v
Aggr State      Status            Options
aggr0 online    raid_dp, aggr     root, diskroot, nosnap=off,
                                  raidtype=raid_dp, raidsize=16,
                                  ignore_inconsistent=off,
                                  snapmirrored=off,
                                  resyncsnaptime=60,
                                  fs_size_fixed=off,
                                  snapshot_autodelete=on,
                                  lost_write_protect=on
                Volumes: vol0

                Plex /aggr0/plex1: online, normal, active
                    RAID group /aggr0/plex1/rg0: normal 
Giveback procedure

       5.   After testing Disaster Recovery, unblock all ISL ports.
SITEA03:admin> portenable 8 
  • Wait awhile (Fabric initialization)

    SITEA03:admin> fabricshow
    Switch ID   Worldwide Name           Enet IP Addr  FC IP Addr    Name
    -----------------------------------------------------------------------
    3: fffc03 10:00:00:05:1e:05:d1:c3  44.55.104.21  0.0.0.0      "SITEB03"
    4: fffc04 10:00:00:05:1e:05:d2:90  44.55.104.11  0.0.0.0     >"SITEA03"
    The Fabric has 2 switches

    SITEA02:admin> portenable 8
  • Wait awhile (Fabric initialization)

    SITEA02:admin> fabricshow
    Switch ID   Worldwide Name           Enet IP Addr    FC IP Addr      Name
    -------------------------------------------------------------------------
    1: fffc01 10:00:00:05:1e:05:d0:39  44.55.104.20    0.0.0.0      "SITEB02"
    2: fffc02 10:00:00:05:1e:05:ca:b1  44.55.104.10    0.0.0.0     >"SITEA02"
    The Fabric has 2 switches
      6.    Synchronize all aggregates.
NetAppSiteB/NetAppSiteA> aggr status -v
      Aggr State      Status            Options
  aggr0(1) failed     raid_dp, aggr     diskroot, raidtype=raid_dp,
                      out-of-date       raidsize=16, resyncsnaptime=60,
                                        lost_write_protect=off
           Volumes:
                            Plex /aggr0(1)/plex0: offline, normal, out-of-date
           RAID group /aggr0(1)/plex0/rg0: normal
               Plex /aggr0(1)/plex1: offline, failed, out-of-date

          aggr0 online    raid_dp, aggr     root, diskroot, nosnap=off,
                                            raidtype=raid_dp, raidsize=16,
                                            ignore_inconsistent=off,
                                            snapmirrored=off,
                                            resyncsnaptime=60,
                                            fs_size_fixed=off,
                                            snapshot_autodelete=on,
                                            lost_write_protect=on
                Volumes: vol0

                Plex /aggr0/plex1: online, normal, active
                    RAID group /aggr0/plex1/rg0: normal 
  • Launch aggregate mirror for each one.

    NetAppSiteB/NetAppSiteA> aggr mirror aggr0 -v aggr0(1)
  • Wait awhile for all aggregates to synchronize.

    [NetAppSiteB/NetAppSiteA: raid.mirror.resync.done:notice]: /aggr0: resynchronization completed in 0:03.36

    NetAppSiteB/NetAppSiteA> aggr status -v
        Aggr State     Status           Options
        aggr0 online   raid_dp, aggr    root, diskroot, nosnap=off,
                       mirrored         raidtype=raid_dp, raidsize=16,
                                        ignore_inconsistent=off,
                                        snapmirrored=off,
                                        resyncsnaptime=60,
                                        fs_size_fixed=off,
                                        snapshot_autodelete=on,
                                        lost_write_protect=on
             Volumes: vol0

             Plex /aggr0/plex1: online, normal, active
             RAID group /aggr0/plex1/rg0: normal

             Plex /aggr0/plex3: online, normal, active
             RAID group /aggr0/plex3/rg0: normal
       7.   After re-synchronization is done, power on and boot the NetApp controller on site B.
RLM NetAppSiteB> system power on
RLM NetAppSiteB> system console
Type Ctrl-D to exit.

Boot Loader version 1.2.3
Copyright (C) 2000,2001,2002,2003 Broadcom Corporation.
Portions Copyright (C) 2002-2006 NetApp Inc.

NetApp Release 7.2.3: Sat Oct 20 17:27:02 PDT 2007
Copyright (c) 1992-2007 NetApp, Inc.
Starting boot on Tue Feb  5 15:37:40 GMT 2008
Tue Feb  5 15:38:31 GMT [ses.giveback.wait:info]: Enclosure Services will be unavailable while waiting for giveback.
Press Ctrl-C for Maintenance menu to release disks.
Waiting for giveback
  8. On site A, execute cf giveback.

    NetAppSiteA(takeover)> cf status
    NetAppSiteA has taken over NetAppSiteB.
    NetAppSiteB is ready for giveback.

    NetAppSiteA(takeover)> cf giveback
    please make sure you have rejoined your aggr before giveback.
    Do you wish to continue [y/n] ?? y

    NetAppSiteA> cf status
    Tue Feb  5 16:41:00 CET [NetAppSiteA: monitor.globalStatus.ok:info]: The system's global status is normal.
    Cluster enabled, NetAppSiteB is up.