Thursday 25 October 2012

NetApp "stats" command

NetApp "stats" command:

Step 1:
List the available measurable objects on the filer.
filer> stats list objects
Objects:
        cpx
        rquota
        aggregate
        audit_ng
        cifs
        disk
        dump
        ext_cache_obj
        ext_cache
        fcp
        hostadapter
        ifnet
        iscsi_conn
        iscsi_lif
        iscsi
        logical_replication_destination
        logical_replication_source
        lun
        ndmp
        nfsv3
        processor
        qtree
        quota
        raid
        spinhi
        system
        target
        vfiler
        volume
        wafl
        avoa

Step 2:
List the counters available for each of the objects listed in Step 1 using 'stats list counters'.
filer> stats list counters
Counters for object name: cifs
instance_name
node_name
cifs_ops
cifs_latency
cifs_read_ops
cifs_write_ops
Counters for object name: disk
instance_name
node_name
instance_uuid
display_name
raid_name
raid_group
raid_type
disk_speed

To list the counters for a specific object:

filer> stats list counters volume
Counters for object name: volume
        instance_name
        node_name
        instance_uuid
        vserver_name
        vserver_uuid
        avg_latency
        total_ops
        read_data
        read_latency
        read_ops
        write_data
        write_latency
        write_ops
        other_latency
        other_ops
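If a counter name is not self-explanatory, the filer can describe it. A quick sketch, assuming the 'stats explain counters' command of 7-Mode (the counter chosen is just an illustration):

filer> stats explain counters volume avg_latency

This prints a description of the counter along with its properties and unit.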


Step 3:
Find out the specific instances available for each object.
DARFAS01> stats list instances
Instances for object name: cpx
        total
Instances for object name: rquota
        rquota_cpu0
        rquota_cpu1
        rquota_total
Instances for object name: aggregate
        aggr0
Instances for object name: audit_ng
Instances for object name: cifs
        cifs
Note: What "instances" means is that for each object (for example, volume), there are specific instances available against which you can check the counters.

Step 4:
For example, to find out the instances for the object volume:
DARFAS01> stats list instances volume
Instances for object name: volume
        vol0
        vfiler1_darfp01_root
        vfiler1_darfp01_nas
        vfiler2_darfp02_root
        vfiler2_darfp02_nas
        EUEV01_FAS_ISCSI_SATA11
As we can see, these are the instances available on my test filer. In other words, under the object VOLUME, I can check counters against the volumes available on my test filer.

Step 5:
Let's measure counters against an object for a specific instance. The format is:
OBJECT:INSTANCE:COUNTER

Let's apply this format in a command. Before that, I want to know which counters are available for my chosen object, i.e. VOLUME:
DARFAS01> stats list counters volume
Counters for object name: volume
        instance_name
        node_name
        instance_uuid
        vserver_name
        vserver_uuid
        avg_latency
        total_ops
        read_data
        read_latency
        read_ops
        write_data
        write_latency
        write_ops
        other_latency
        other_ops
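With the counters known, the OBJECT:INSTANCE:COUNTER format can already be used for a quick one-off reading with 'stats show', before any background collection is started. A small sketch (vol0 is just an example instance from this filer):

filer> stats show volume:vol0:avg_latency
filer> stats show volume:vol0:total_ops

On releases that accept the * wildcard as the instance, something like stats show volume:*:total_ops shows the same counter for every volume.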


Step 6:
Start statistics gathering for the volume object in the background, using the identifier "MyStats"; display the values while gathering continues, then stop gathering and display the final values:

filer> stats start -I MyStats volume

To see the results while gathering continues (in my case, while a disk I/O-generating copy is in progress):

filer> stats show -I MyStats     (Note: this will display results for all the volumes)

To gather results only for a specific volume, say 'vol_cifs_vfiler', start the collection against that instance:

filer> stats start -I MyStats volume:vol_cifs_vfiler

Then stop gathering and display final values:

filer> stats stop -I MyStats
StatisticsID: MyStats
volume:vol_cifs_vfiler:instance_name:vol_cifs_vfiler
volume:vol_cifs_vfiler:node_name:
volume:vol_cifs_vfiler:instance_uuid:2ba581c0-1ea0-11e2-9c8f-123478563412
volume:vol_cifs_vfiler:vserver_name:
volume:vol_cifs_vfiler:vserver_uuid:
volume:vol_cifs_vfiler:avg_latency:17.67us
volume:vol_cifs_vfiler:total_ops:0/s
volume:vol_cifs_vfiler:read_data:0b/s
volume:vol_cifs_vfiler:read_latency:0us
volume:vol_cifs_vfiler:read_ops:0/s
volume:vol_cifs_vfiler:write_data:0b/s
volume:vol_cifs_vfiler:write_latency:0us
volume:vol_cifs_vfiler:write_ops:0/s
volume:vol_cifs_vfiler:other_latency:17.67us
volume:vol_cifs_vfiler:other_ops:0/s
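For live monitoring without a named background collection, 'stats show' can also repeat its output at a fixed interval. A short sketch, assuming the -i (interval in seconds) and -n (number of iterations) options of the 7-Mode stats command, against the same test volume:

filer> stats show -i 1 -n 5 volume:vol_cifs_vfiler:total_ops

This prints the counter once per second for five iterations and then returns to the prompt.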


 

Tuesday 18 September 2012

SnapMirror-2-Tape for LOW bandwidth


SMTape is a high-performance disaster recovery solution in Data ONTAP that backs up blocks of data to tape. It is a Snapshot copy-based backup-to-tape feature, available only in Data ONTAP 8.0 7-Mode and later releases.

You can use SMTape to perform volume backups to tapes. However, you cannot perform a backup at the qtree or subtree level. Also, you can perform only a level-0 backup and not incremental backups.
When you perform an SMTape backup, you can either specify the name of the Snapshot copy to be backed up to tape or let SMTape create one. When you specify a Snapshot copy for the backup, all Snapshot copies older than the specified Snapshot copy are also backed up to tape. If you do not specify a Snapshot copy, smtape creates a base Snapshot copy to be used later for tape seeding.

What is tape seeding?

Tape seeding is an SMTape functionality that helps you initialize the destination storage system in a volume SnapMirror relationship.


Consider a scenario in which you want to establish a SnapMirror relationship between a source system and a destination system over a low-bandwidth connection. Incremental mirroring of Snapshot copies from the source to the destination is feasible over a low-bandwidth connection. However, an initial mirroring of the base Snapshot copy would take a long time over a low-bandwidth connection. In such a case, you can perform an SMTape backup of the source volume to a tape and use the tape to transfer the initial base Snapshot copy to the destination. You can then set up incremental SnapMirror updates to the destination system using the low-bandwidth connection.


Test it out on the Data ONTAP Simulator

 
The simulator provides simulated tape devices that can be used to test the SMTape feature.
 
filerA> sysconfig -t
or
filerA> storage stats tape
 
Either of these will list the simulated tape drives bundled with the simulator.
 
You can perform an SMTape backup and restore using NDMP-compliant backup applications or using the Data ONTAP 8.0 7-Mode smtape backup and smtape restore CLI commands.
 
Steps:
1. SnapMirror to tape: back up the source volume to a simulated tape device with smtape backup, as sketched below.
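A minimal command sketch for this step, assuming a source volume named 'vol_source' (a hypothetical name for this demo) and 'rst0a' as the simulated tape device (a placeholder; use whichever device name sysconfig -t reports on your simulator):

filerA> smtape backup /vol/vol_source rst0a
filerA> smtape status

Because no Snapshot copy is named with -S, smtape creates the base Snapshot copy itself, which is what we want for tape seeding.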

2. Restore to volume (seed the destination volume back from tape), as sketched below.
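A sketch of the restore side, assuming a destination volume named 'vol_dest' (hypothetical) that already exists and is at least as large as the source. The destination volume must be restricted before the restore; the tape device name is again a placeholder:

filerA> vol restrict vol_dest
filerA> smtape restore /vol/vol_dest rst0a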

3. Resume incremental SnapMirror updates to the destination volume, as sketched below.
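A sketch of resuming the mirror over the low-bandwidth link, using the same hypothetical volume names and keeping source and destination on the same filer as in this demo (on separate systems the -S argument would name the source filer). Depending on the Data ONTAP release, a snapmirror resync may be needed first to re-establish the relationship before incremental updates run:

filerA> snapmirror resync -S filerA:vol_source filerA:vol_dest
filerA> snapmirror update filerA:vol_dest
filerA> snapmirror status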

Note: In this demo, I have used both the SnapMirror source and destination volumes on the same filer, but the logic remains the same.