Monday, 23 January 2012

Tech Refresh from FAS2020 to FAS2040

Hello friends, today I want to write about a project I got: a NetApp tech refresh from FAS2020 to FAS2040 with one DS4243 disk shelf.
I also want to share the mistakes I made and the things I learned from them,
so you all can learn from those mistakes too.

The Task: Replace a FAS2020 (single controller, fully populated with internal disks) with a FAS2040 (dual controller) and add one new, half-populated DS4243 disk shelf. All the internal disks from the FAS2020 had to be moved to the FAS2040, then all the volumes had to be migrated from one controller to the other, and there were three volumes with LUNs that needed to be remapped.
There was one aggregate named aggr0 with 4 volumes, and 3 of those volumes contained LUNs. All the volumes were about 97% full. I mention the 97% because it caused me a lot of problems while migrating the data from one filer to the other.
I was told to give all the FAS2020 internal disks to one FAS2040 controller and the new disk shelf's disks to the other controller.
I hope you all understood the task I got. (Friends, my English is not so good, so I am really sorry if any of you are not able to understand what I wrote.)
Solution: I will explain everything phase by phase.
Phase 1: First I thought of upgrading the FAS2020 to a Data ONTAP version equivalent to what the FAS2040 would run.
So I upgraded Data ONTAP from 7.3.3 to 7.3.6.
After upgrading I checked that everything was fine, and then I halted the system.
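I am not going to document the whole upgrade procedure here, but very roughly it was something like this. Treat it only as a sketch: the URL and image file name are placeholders for whatever 7.3.6 package you download from the NOW site.

fas2020> version                               # confirm the running release (7.3.3)
fas2020> software get http://<webserver>/736_setup_q.zip
fas2020> software update 736_setup_q.zip -r    # install the new image, do not reboot yet
fas2020> reboot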
Phase 2: I unplugged all the cables from the FAS2020, unmounted it from the rack, and then mounted the new FAS2040 and the DS4243 disk shelf.
Then I removed all the disks from the FAS2020 and inserted them into the FAS2040.
Then I cabled the FAS2040 and the disk shelf: powered on the disk shelf first, set its shelf ID, and power-cycled it.
Then I booted the FAS2040 into maintenance mode and assigned all the FAS2020 internal disks to one of the FAS2040 controllers.
I hope you know how to reassign disks to a new controller; anyway, I will write the command too:
disk reassign -s <old_system_id> -d <new_system_id>
So now all the old FAS2020 disks were assigned to the new FAS2040.
Then I halted the system to exit maintenance mode.
When I booted the system again, all the configuration came over as it was: all the volumes and aggregates were detected by the new controller, and even the hostname came across unchanged.
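To show the maintenance-mode part in a bit more detail, it went roughly like this (the system IDs are placeholders; you get the real ones from "disk show -v"):

Interrupt the boot with Ctrl-C and choose option 5 (Maintenance mode boot), then:
*> disk show -v                                # note the old FAS2020 system ID listed as the owner
*> disk reassign -s <old_system_id> -d <new_system_id>
*> halt                                        # exit maintenance mode and boot normally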
Then we attached the new disk shelf to the FAS2040 and assigned all the new disks to the other controller.
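Assigning the shelf disks is just the normal software disk ownership procedure, run on the controller that should own them. A rough sketch (I am calling that controller "filer_dst" here because it later became the SnapMirror destination; the name is only a placeholder):

filer_dst> disk show -n            # list the unowned DS4243 disks
filer_dst> disk assign all         # assign every unowned disk to this controller
filer_dst> disk show -o filer_dst  # confirm the new ownership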
Then we configured the new controller the way any initial configuration is done. I hope you all know how to do the initial setup; I don't want to cover it here, as I will be writing a new blog post on how to do the initial configuration of a new filer and will explain each and every option there.

So coming back: once I had done the initial configuration of the new filer, I installed the licenses.
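Installing the licenses is just the license command on each controller; for example (the codes are placeholders, and with clustering and FCP both controllers need the same licenses installed, which is exactly what bit us later):

filer_dst> license add <cf_license_code>
filer_dst> license add <fcp_license_code>
filer_dst> license add <snapmirror_license_code>
filer_dst> license                 # list the installed licenses (repeat the same on the partner)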

Phase 3: Data migration phase.
I created all 4 volumes, of the same sizes, on the new filer.
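Creating the destination volumes is one command per volume. A rough sketch (the volume and aggregate names are placeholders; the sizes simply matched the source volumes):

filer_dst> vol create vol1_dst aggr1 2g
filer_dst> vol create vol2_dst aggr1 343g
(and so on for the remaining volumes)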
Then I created the SnapMirror relationships between the two filers, because we were doing the data migration with SnapMirror.
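On paper, a basic volume SnapMirror baseline in 7-mode is only a couple of commands on the destination (filer and volume names here are placeholders, and the destination volume must be restricted first):

filer_dst> vol restrict vol2_dst
filer_dst> snapmirror initialize -S filer_src:vol2 filer_dst:vol2_dst
filer_dst> snapmirror status       # watch the transfer progress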
But SnapMirror was not working; it kept giving the error "snapmirror is misconfigured or the source volume may be busy".
We were not able to make sense of this error.
So what we did was change "options snapmirror.access legacy" to "options snapmirror.access host=<ip address>": on the destination filer we allowed the source filer's IP, and on the source filer we allowed the destination filer's IP.
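In command form, that change was roughly this (the IP addresses are placeholders):

# on the destination filer, allow the source filer's IP
filer_dst> options snapmirror.access host=<ip_of_source_filer>
# on the source filer, allow the destination filer's IP
filer_src> options snapmirror.access host=<ip_of_destination_filer>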
After doing this we ran the SnapMirror again, and the SnapMirror of the first volume, which was 2 GB, went through (it finished in no time).
Then we tried the SnapMirror of the 2nd volume, which was 343 GB, and again we got the same error as before.
Here we struggled a lot to figure out the problem, but we eventually found it and resolved it. As I mentioned before, the volumes were 97% full, and there was almost no free space left in the aggregate either. Because of that, the volume could not create its initial SnapMirror snapshot, due to the lack of space (that, at least, appeared to be the problem).
So we executed one command for that volume: "vol options <vol_name> fractional_reserve 0".
After running that command we tried again to take a snapshot of that volume manually, and this time the snapshot got created.
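Putting those two steps together, it was roughly this (the volume and snapshot names are placeholders):

filer_src> vol options vol2 fractional_reserve 0
filer_src> snap create vol2 test_snap      # check that a snapshot can now be created
filer_src> snap list vol2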
So we created the SnapMirror relationship again, and this time the transfer started.
We did the same thing for the other volume, the SnapMirror of the 3rd volume started as well, and so all 4 volumes got transferred from one filer to the other.
Phase 4: Mapping the transferred volumes and LUNs.
As I told you, there were three LUNs in three of the volumes, so we needed to map those LUNs back.
These LUNs were presented to VMware; they were datastores in VMware.
We broke the SnapMirror relationships, took the source volumes offline, and brought the destination volumes online.
We created the igroups and mapped the LUNs to them.
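The cutover commands were roughly as follows; the igroup name, WWPN and LUN paths are placeholders for the real ESX initiators and LUNs:

filer_dst> snapmirror quiesce vol2_dst
filer_dst> snapmirror break vol2_dst       # destination volume becomes writable
filer_src> vol offline vol2                # take the old source volume offline
filer_dst> igroup create -f -t vmware esx_hosts <wwpn_of_esx_hba>
filer_dst> lun map /vol/vol2_dst/lun2 esx_hosts 0
filer_dst> lun show -m                     # verify the mapping (use "lun online" if the LUN shows offline)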
But when we rescanned from the VMware side, we were not able to find those disks. (Again a new problem arose.)
Then we checked the FCP configuration with the command "fcadmin config"; it showed that both ports were down. So we verified that the FC cabling was done properly and tried to bring the ports up manually.
That gave an error telling us to start the FCP service, so we tried to start the FCP service, but it would not start. So we checked the licenses on both filers; they were different, so we installed the same licenses on both filers (clustering was enabled).
Then we tried to start the service again, and we got the error that FCP is misconfigured and the FCP cfmode is misconfigured on its partner.
Then we checked "fcp show cfmode"
and found that one filer was in "single_image" mode and the other in "standby".
So we went into advanced mode and changed the setting by typing the command "fcp set cfmode single_image" on the filer where it was set to "standby" mode.
After setting that, we started the FCP service again; it started, the ports came online, and the LUNs became visible to VMware.
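For completeness, the FCP checks and the cfmode fix looked roughly like this on the controller that was in "standby" mode:

filer_dst> fcadmin config          # check the onboard FC adapters
filer_dst> fcp status              # is the FCP service running?
filer_dst> fcp show cfmode
filer_dst> priv set advanced
filer_dst*> fcp set cfmode single_image
filer_dst*> priv set admin
filer_dst> fcp start               # start FCP once the cfmode matches the partner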
Thankfully, the problem got resolved,
and the project was completed successfully.



