NetApp SnapMirror
Well, every NetApp engineer will be aware of SnapMirror; it is a common and important feature of Data ONTAP, so today I thought of writing something about it. Maybe this blog post can help you understand SnapMirror a little better.
Why we need SnapMirror
SnapMirror is NetApp's replication feature: a fast, flexible enterprise solution for replicating your critical data over local area, wide area, and Fibre Channel networks to a different location. It is a very good solution for disaster recovery, and it is also a good solution for online data migration without any additional overhead.
SnapMirror has three modes:
Async: Replicates snapshot copies from a source volume or qtree to a destination volume or qtree. Incremental updates are based on schedules or are performed manually using the snapmirror update command. It works at both the volume level and the qtree level.
Sync: Replicates writes from a source volume to a secondary volume at the same time they are written to the source volume. SnapMirror Sync is used in environments that have zero tolerance for data loss.
Semi-sync: Sits between the async and sync modes, with less impact on performance. You can configure a semi-synchronous SnapMirror replication to lag behind the source volume by a user-defined number of write operations or milliseconds.
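As a rough illustration, here is how the three modes might appear as /etc/snapmirror.conf entries on the destination (hostnames and volume names are made up, and the semi-sync keyword assumes Data ONTAP 7.3-style syntax; the exact keyword varies by release):
filer1:vol_async filer2:vol_async - 0 23 * 1,3,5
filer1:vol_sync filer2:vol_sync - sync
filer1:vol_semi filer2:vol_semi - semi-sync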
Volume SnapMirror enables block-for-block replication. The entire volume, including its qtrees and all the associated snapshot copies, is replicated to the destination volume. The source volume is online/writable and the destination volume is online/read-only; when the relationship is broken, the destination volume becomes writable.
Initial Transfer and Replication
To initialize a SnapMirror relationship, you first have to restrict the destination volume in which the replica will reside. During the baseline transfer, the source system takes a snapshot copy of the volume. All data blocks referenced by this snapshot copy, including volume metadata such as language translation settings, as well as all snapshot copies of the volume, are transferred and written to the destination volume.
After the initialization completes, the source and destination file systems have one snapshot copy in common. Updates occur from this point on, and they are based either on the schedule specified in a flat-text configuration file known as the snapmirror.conf file or on manual runs of the snapmirror update command.
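For example, a manual incremental update can be triggered from the destination like this (hostnames and volume names are placeholders):
For ex: snapmirror update -S src_hostname:src_vol dst_hostname:dst_vol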
To identify new and changed blocks, the block map in the new snapshot copy is compared to the block map of the baseline snapshot copy. Only the blocks that are new or have changed since the last successful replication are sent to the destination. Once the transfer has completed, the new snapshot copy becomes the baseline snapshot copy and the old one is deleted.
Requirements and Limitations
The destination's Data ONTAP version must be equal to or more recent than the source's. In addition, the source and the destination must be on the same Data ONTAP release.
Volume SnapMirror replication can occur only between volumes of the same type: both traditional volumes or both flexible volumes.
The destination volume's capacity must be equal to or greater than the size of the source volume. Administrators can thin provision the destination so that it appears to be equal to or greater than the size of the source volume (see the sketch after this list).
Quotas cannot be enabled on the destination volume.
It is recommended that you allow a range of TCP ports from 10565 to 10569.
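As a minimal sketch of thin provisioning the destination in Data ONTAP 7-mode (volume and aggregate names are hypothetical), you can create the volume and then disable its space guarantee:
destination-filer> vol create dst_vol aggr01 1t
destination-filer> vol options dst_vol guarantee none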
Qtree SnapMirror
Qtree SnapMirror is a logical replication: all the files and directories in the source file system are created in the destination qtree.
Qtree SnapMirror replication occurs between qtrees regardless of the type of volume (traditional or flexible). Qtree replication can even occur between different releases of Data ONTAP.
In qtree replication, the source volume and qtree are online/writable, and the destination volume is also online/writable.
NOTE: Unlike volume SnapMirror, qtree SnapMirror does not require that the size of the destination volume be equal to or greater than the size of the source qtree.
For the initial baseline transfer you do not need to create the destination qtree; it gets created automatically upon the first replication.
Requirements and limitations
Supports async mode only.
The destination volume must contain 5% more free space than the source qtree consumes, and the destination qtree cannot be /etc.
Qtree SnapMirror performance is impacted by deep directory structures and by replicating large numbers (tens of millions) of small files.
Configuration process of SnapMirror
1. Install the SnapMirror license.
For ex: license add <code>
2. On the source, specify the host names or IP addresses of the SnapMirror destination systems you wish to authorize to replicate this source system.
For ex: options snapmirror.access host=dst_hostname1,dst_hostname2
3. For each source volume and qtree to replicate, perform an initial baseline transfer. For volume SnapMirror, first restrict the destination volume.
For Ex: vol restrict dst_volumename
Then initialize the volume SnapMirror baseline, using the following syntax on the destination:
For ex: snapmirror initialize -S src_hostname:src_vol dst_hostname:dst_vol
For a qtree SnapMirror baseline transfer, use the following syntax on the destination:
For ex: snapmirror initialize -S src_hostname:/vol/src_vol/src_qtree dst_hostname:/vol/dst_vol/dst_qtree
4. Once the initial transfer completes, set the SnapMirror mode of replication by creating the /etc/snapmirror.conf file in the destination's root volume.
snapmirror.conf
The
snapmirror.conf configuration file entries define the relationship between the
source and the destination, the mode of replication, and the arguments that
control SnapMirror when replicating data.
Entries in the snapmirror.conf file look like this:
For ex: Fas1:vol1 Fas2:vol1 - 0 23 * 1,3,5
Fas1:vol1: source storage system hostname and path
Fas2:vol1: destination storage system hostname and path
"-": arguments field; it lets you define the transfer speed and restart mode, and "-" indicates that the defaults are selected
Schedules (four fields: minute, hour, day of month, day of week)
0: update at minute 0 of the hour
23: update at hour 23 (11 PM)
*: update on all applicable days of the month
1,3,5: update on Monday, Wednesday, and Friday
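As another illustrative entry (hostnames and volume names are placeholders), a schedule that updates every 15 minutes, every day, would look like this:
For ex: Fas1:vol1 Fas2:vol1 - 0,15,30,45 * * *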
You can monitor transfers by running the command "snapmirror status". This command can be run on the source as well as on the destination, and it comes with two options, -l and -q:
-l: displays the long format of the output.
-q: displays which volumes or qtrees are quiesced or quiescing.
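For example (the volume name is a placeholder, and the output below is illustrative and abbreviated; the exact fields vary by Data ONTAP release):
destination-filer> snapmirror status -l demo_destination
Source: source-filer:demo_source
Destination: destination-filer:demo_destination
Status: Idle
State: Snapmirrored
Lag: 00:32:10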
You can list all the snapshot copies of a particular volume with the "snap list volumename" command. SnapMirror snapshot copies are distinguished from system snapshot copies by a more elaborate naming convention, and the snap list command displays the keyword snapmirror next to the relevant snapshot copies.
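Illustrative output (hostname, system ID, and timestamp are made up); note the snapmirror keyword and the dst_hostname(sysid)_dst_volname.N naming convention:
destination-filer> snap list demo_destination
Volume demo_destination
  %/used     %/total   date          name
  0% ( 0%)   0% ( 0%)  Jul 27 21:00  destination-filer(0099909262)_demo_destination.1 (snapmirror)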
Log files
SnapMirror logs record whether the transfer finished successfully or failed. If there is a problem with updates, it is useful to look at the log file to see what has happened since the last successful update. The log includes the start and end of each transfer, along with the amount of data transferred.
Logging is controlled with the option snapmirror.log.enable (on/off); by default it is on.
Log files are stored on the root volume of both the source and the destination storage systems, under /etc/log/snapmirror.
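For example, you can confirm that logging is on and read the log directly on either system:
source-filer> options snapmirror.log.enable
snapmirror.log.enable on
source-filer> rdfile /etc/log/snapmirror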
The following guides you quickly through SnapMirror setup and commands.
1) Enable SnapMirror on the source and destination filers
source-filer> options snapmirror.enable
snapmirror.enable on
source-filer>
source-filer> options snapmirror.access
snapmirror.access legacy
source-filer>
2) SnapMirror access
Make sure the destination filer has SnapMirror access to the source filer. The destination filer's name or IP address should be in /etc/snapmirror.allow on the source. Use wrfile to add entries to /etc/snapmirror.allow.
source-filer> rdfile /etc/snapmirror.allow
destination-filer
destination-filer2
source-filer>
3) Initializing a SnapMirror relationship
Volume SnapMirror: Create a destination volume on the destination NetApp filer, of the same size as the source volume or greater. For volume SnapMirror, the destination volume should be in restricted mode. For example, suppose we are snapmirroring a 100G volume: we create the destination volume and make it restricted.
destination-filer> vol create demo_destination aggr01 100G
destination-filer> vol restrict demo_destination
Volume SnapMirror creates a Snapshot copy before performing the initial transfer; this copy is referred to as the baseline Snapshot copy. After performing an initial transfer of all data in the volume, VSM (Volume SnapMirror) sends to the destination only the blocks that have changed since the last successful replication. When SnapMirror performs an update transfer, it creates a new Snapshot copy, compares it against the baseline to identify the changed blocks, and sends those changed blocks as part of the update transfer.
SnapMirror is always driven by the destination filer, so snapmirror initialize has to be run on the destination. The command below starts the baseline transfer.
destination-filer> snapmirror initialize -S source-filer:demo_source destination-filer:demo_destination
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
destination-filer>
Qtree SnapMirror: For qtree SnapMirror, you should not create the destination qtree; the snapmirror command creates it automatically. Creating a volume of the required size is good enough.
Qtree SnapMirror determines changed data by first looking through the inode file for inodes that have changed, and then scanning the changed inodes of the qtree of interest for changed data blocks. The SnapMirror software then transfers only the new or changed data blocks from the Snapshot copy that is associated with the designated qtree. On the destination volume, a new Snapshot copy is then created that contains a complete point-in-time copy of the entire destination volume, but that is associated specifically with the particular qtree that has been replicated.
destination-filer> snapmirror initialize -S source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
4) Monitoring the status: SnapMirror data transfer status can be monitored from either the source or the destination filer. Use "snapmirror status" to check the status.
destination-filer> snapmirror status
Snapmirror is on.
Source Destination State Lag Status
source-filer:demo_source destination-filer:demo_destination Uninitialized - Transferring (1690 MB done)
source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree Uninitialized - Transferring (32 MB done)
destination-filer>
5) SnapMirror schedule: This is the schedule used by the destination filer for updating the mirror. It tells the SnapMirror scheduler when transfers will be initiated. The schedule field can either contain the word sync, to specify synchronous mirroring, or a cron-style specification of when to update the mirror. The cron-style schedule contains four space-separated fields.
If you want to sync the data on a scheduled frequency, you can set that in the destination filer's /etc/snapmirror.conf. The time settings are similar to Unix cron. You can set a synchronous SnapMirror schedule in /etc/snapmirror.conf by using "sync" instead of the cron-style frequency.
destination-filer> rdfile /etc/snapmirror.conf
source-filer:demo_source destination-filer:demo_destination - 0 * * * # This updates every hour, on the hour
source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree - 0 21 * * # This updates every day at 9:00 PM
destination-filer>
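To append an entry without editing the file by hand, you can use wrfile -a; the sync entry below is a placeholder and assumes synchronous SnapMirror is licensed for the volume pair:
destination-filer> wrfile -a /etc/snapmirror.conf source-filer:demo_sync destination-filer:demo_sync - sync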
6) Other SnapMirror commands (see the sketch below):
- To break a SnapMirror relationship: snapmirror quiesce, then snapmirror break.
- To update the SnapMirror data manually: snapmirror update.
- To resync a broken relationship: snapmirror resync.
- To abort a running transfer: snapmirror abort.
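A typical break-and-resync cycle might look like this on the destination (volume names are placeholders, and the break message shown is illustrative):
destination-filer> snapmirror quiesce demo_destination
destination-filer> snapmirror break demo_destination
snapmirror break: Destination demo_destination is now writable.
destination-filer> snapmirror resync -S source-filer:demo_source destination-filer:demo_destination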
SnapMirror does provide multipath support. More than one physical path between a source and a destination system might be desired for a mirror relationship. Multipath support allows SnapMirror traffic to be load balanced between these paths and provides failover in the event of a network outage.
Some important points to know about SnapMirror
Clustered failover interaction. The SnapMirror product complements NetApp clustered failover (CF) technology by providing an additional level of recoverability. If a catastrophe disables access to a clustered pair of storage systems, one or more SnapMirror volumes can immediately be accessed in read-only mode while recovery takes place. If read-write access is required, the mirrored volume can be converted to a writable volume while the recovery takes place. If SnapMirror is actively updating data when a takeover or giveback operation is initiated, the update aborts. Following completion of the takeover or giveback operation, SnapMirror continues as before; no specific additional steps are required to implement SnapMirror in a clustered failover environment.
Adding disks to SnapMirror environments. When adding disks to volumes in a SnapMirror environment, always complete the addition of disks to the destination storage system or volume before attempting to add disks to the source volume (see the sketch below).
Note: The df command does not reflect the disk or disks added to the SnapMirror volume until after the first SnapMirror update following the disk addition.
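For example, with traditional volumes you would grow the destination first and then the source (volume names are placeholders; a flexible volume would instead be grown with vol size):
destination-filer> vol add demo_destination 2
source-filer> vol add demo_source 2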
Logging. The SnapMirror log file (located at /etc/log/snapmirror) records the start and end of an update as well as other significant SnapMirror events. It records whether the transfer finished successfully or whether it failed for some reason. If there is a problem with updates, it is often useful to look at the log file to see what has happened since the last successful update. Because the log file is kept on both the source and destination storage systems, quite often one of them logs the failure while the other knows only that there was a failure. For this reason, you should look at both the source and the destination log files to get the most information about a failure. The log file contains the start and end time of each transfer, along with the amount of data transferred. It can be useful to look back and see the amount of data needed to make the update and the amount of time the updates take.
Note: Time versus data sent is not an accurate measure of the network bandwidth, because the transfer is not constantly sending data.
Destination volume. For SnapMirror volume replication, you must create a restricted volume to be used as the destination volume. SnapMirror does not automatically create the volume.
Destination volume type. The mirrored volume must not be the root volume.
Data change rate. Using the snap delta command, you can display the rate of change stored between two Snapshot copies, as well as the rate of change between a Snapshot copy and the active file system. Data ONTAP displays the rates of change in two tables: the first displays rates of change between successive Snapshot copies, and the second displays a summary of the rate of change between the oldest Snapshot copy and the active file system.
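For example (the volume name is a placeholder; nightly.0 and nightly.1 are typical scheduled Snapshot copy names):
source-filer> snap delta demo_source
source-filer> snap delta demo_source nightly.1 nightly.0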
Failed updates. If a transfer fails for any reason, SnapMirror attempts a retransfer immediately, without waiting for the next scheduled mirror time. These retransfer attempts continue until they are successful, until the appropriate entry in the /etc/snapmirror.conf file is commented out, or until SnapMirror is turned off. Some events that can cause failed transfers include:
- Loss of network connectivity
- Source storage system is unavailable
- Source volume is offline
SnapMirror timeouts. There are three situations that can cause a SnapMirror timeout:
Write socket timeout: If the TCP buffers are full and the writing application cannot hand off data to TCP within 10 minutes, a write socket timeout occurs. Following the timeout, SnapMirror resumes at the next scheduled update.
Read socket timeout: If the TCP socket that is receiving data has not received any data from the application within 30 minutes, it generates a timeout. Following the timeout, SnapMirror resumes at the next scheduled update. Because the read socket timeout is the larger of the two, SnapMirror will not time out while waiting for the source system to create Snapshot copies, even when dealing with extremely large volumes. Socket timeout values are not tunable in the Data ONTAP and SnapMirror environment.
Sync timeouts: These timeouts occur in synchronous deployments only. If an event such as a network outage prevents acknowledgments (ACKs) from arriving from the destination system, the synchronous deployment reverts to asynchronous mode.
Open Files
If SnapMirror is in the middle of a transfer and encounters an incomplete file (for example, a file that an FTP server is still transferring into that volume or qtree), it transfers the partial file to the destination. Snapshot copies behave in the same way: a Snapshot copy of the source would show the file mid-transfer, and the destination would show the partial file.
A workaround for this situation is to copy the file to the source under a temporary name, then rename it to the correct name once the copy completes. This way the partial file has an incorrect name, and the complete file has the correct name.