Sunday 9 April 2017

7-Mode migration to clustered Data ONTAP

When transitioning 7-Mode CIFS/NFS volumes to clustered Data ONTAP, there are four main steps:


1. Create the destination DP volume on the SVM [Cluster]
2. vserver peer transition create
3. snapmirror create
4. snapmirror initialize
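
For reference, a minimal sketch of those four steps might look like the following (the aggregate name and volume size are assumptions for illustration; the source and destination names match the example used later in this post):

ONTAP9::> volume create -vserver CIFS_FG -volume CIFS_SM -aggregate aggr1 -size 100g -type DP
ONTAP9::> vserver peer transition create -local-vserver CIFS_FG -src-filer-name ontap7.lab.com
ONTAP9::> snapmirror create -source-path ontap7.lab.com:vol_CIFS -destination-path CIFS_FG:CIFS_SM -type TDP
ONTAP9::> snapmirror initialize -destination-path CIFS_FG:CIFS_SM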

Please note: Before you do this, make sure SnapMirror is licensed on both the cluster and the 7-Mode system. You can copy data from 7-Mode volumes to clustered Data ONTAP volumes by using clustered Data ONTAP SnapMirror commands. After the data is copied across, you can then set up the protocols, services, and other configuration to resume services on the cluster side.

Attention: You can transition only volumes in a NAS (CIFS and NFS) environment to clustered Data ONTAP.

This article is not about how to perform those four steps, but about an error you might receive at the fourth step, during initialization.

Issue: SnapMirror between 7-Mode and the cluster fails to initialize.
ONTAP9::> snapmirror update -destination-path CIFS_FG:CIFS_SM
Error: command failed: Volume CIFS_FG:CIFS_SM is not initialized.

Even though you are sure you initialized it:
ONTAP9::> snapmirror initialize -destination-path CIFS_FG:CIFS_SM
Operation is queued: snapmirror initialize of destination "CIFS_FG:CIFS_SM".

When you look at snapmirror show, it reports Uninitialized:
ONTAP9::> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
ontap7.lab.com:vol_CIFS
            TDP  CIFS_FG:CIFS_SM
                              Uninitialized
                                      Idle           -         false   -

Cause:
Check the 7-Mode console, and you will spot the error:
Sun Apr  9 22:03:18 GMT [ontap832:snapmirror.src.requestDenied:error]: SnapMirror transfer request from vol_CIFS to host CIFS_FG at IP address 192.168.0.60 denied: check options snapmirror.access.

The culprit was: "denied: check options snapmirror.access". 192.168.0.60 is the LIF on the SVM which is used to pull the data from the 7-Mode system.

Solution: Add the SVM to the SnapMirror access list on the 7-Mode system.

What does this option do?
options snapmirror.access specifies the SnapMirror destinations [here, the SVM] that are allowed to copy from the system, so we need to add our SVM to this option.


ontap7> options snapmirror.access
snapmirror.access            legacy
It is set to legacy. When the option is set to legacy, access is controlled by the /etc/snapmirror.allow file; here we will simply replace the legacy setting.
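
Alternatively, if you prefer to keep the legacy behaviour, a rough sketch is to append the SVM's LIF address (or a resolvable hostname for it) to /etc/snapmirror.allow instead, using the IP from the error message above:

ontap7> wrfile -a /etc/snapmirror.allow 192.168.0.60
ontap7> rdfile /etc/snapmirror.allow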


For test purposes I am allowing everything, so I use '*'.

Step 1:
ontap7> options snapmirror.access *
ontap7> options snapmirror.access
snapmirror.access            *
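
Note: '*' opens SnapMirror access to any host, which is fine for a lab test; in production you would typically restrict it to the SVM's LIF, along these lines (using the LIF address from the error message above):

ontap7> options snapmirror.access host=192.168.0.60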

Step 2:
Run the initialize command again.
ONTAP9::> snapmirror initialize -destination-path CIFS_FG:CIFS_SM
Operation is queued: snapmirror initialize of destination "CIFS_FG:CIFS_SM".

Step 3:
Check the progress
ONTAP9::> snapmirror show -destination-path CIFS_FG:CIFS_SM
    Source Path: ontap832.lab.com:vol_CIFS
    Destination Path: CIFS_FG:CIFS_SM
    Relationship Type: TDP
    Relationship Group Type: none
    SnapMirror Schedule: -
    SnapMirror Policy Type: async-mirror
    SnapMirror Policy: DPDefault
    Tries Limit: -
    Throttle (KB/sec): unlimited
    Mirror State: Snapmirrored
    Relationship Status: Idle
    File Restore File Count: -
    File Restore File List: -
    Transfer Snapshot: -
    Snapshot Progress: -
    Total Progress: -
    Network Compression Ratio: -
    Snapshot Checkpoint: -
    Newest Snapshot: CIFS_FG(4082368507)_CIFS_SM.1
    Newest Snapshot Timestamp: 04/09 23:14:25
    Exported Snapshot: CIFS_FG(4082368507)_CIFS_SM.1
    Exported Snapshot Timestamp: 04/09 23:14:25
    Healthy: true
    Unhealthy Reason: -
    Constituent Relationship: false
    Destination Volume Node: ONTAP9-01
    Relationship ID: f32beda3-1d6f-11e7-8a7b-000c29f1b85e
    Current Operation ID: -
    Transfer Type: -
    Transfer Error: -
    Current Throttle: -
    Current Transfer Priority: -
    Last Transfer Type: initialize
    Last Transfer Error: -
    Last Transfer Size: 53.91MB

As seen below, the data from the 7-Mode filer's CIFS volume has been transferred to clustered Data ONTAP.



Friday 7 April 2017

Direct attached NDMP and Remote NDMP Backup

NDMP direct-attached backup, also known as '2-way backup': the tape device is attached directly to the storage system that hosts the data being backed up.


Remote NDMP backup, also known as '3-way backup': the data is sent over the network from the storage system hosting the data to another NDMP host that has the tape device attached.


Please note: For remote NDMP, you can also use a 'Disk Library'.

Request: If you do copy these images, please give the credit.

Wednesday 5 April 2017

NDMP and DAR

Direct access restore [DAR] is the ability to restore specific files without having to go through the ENTIRE TAPE. DAR provides the EXACT offset at restore time to the backup application. With this information, the backup application is able to JUMP directly to the DESIRED OFFSET and recover the file without having to read the ENTIRE backup IMAGE sequentially.

DAR requires:

1. NDMP version 3 or later.
2. File History [HIST environment variable] to be enabled.

Note: "File History" is the term used for an INDEX of files that has been backed up. An NDMP client [Backup Application] may request FILE HISTORY from the NDMP Data Service at the time of initiating the backup. If the file history is requested, the backup engine sends information like -File name and path, file status information and file positioning information (i.e. the address of the file in the backup data stream).

It is important to note that "File History" and "DAR" are not the same thing. This is because you can choose to restore a file without the 'File Positioning Information', which is called a Non-DAR restore. In other words, you can still browse the folders/files and select the specific file(s) to restore, but a large portion of the backup data that contains the file must be read, which could well be the entire tape.

Whereas in DAR, only the portion of the TAPE which contains the data to be restored is read.
Hence, they are two different things.

This further means that the offset map [DAR] generation can have a significant impact on performance if a large number of small files are backed up. In such a case, you may want to disable offset map generation WITHOUT disabling FILE HISTORY, thereby improving performance while still retaining the ability to browse folders/files.

So, Keep FILE HIST=T, and disable DAR [offset_map]

To do so, on the NetApp console, use the following options command.

7-mode: [Disable offset_map]
7-mode> options ndmpd.offset_map.enable off

ONTAP [cmode] - [Disable offset_map]
::> vserver services ndmp modify -vserver vservername -offset-map-enable [true|false]
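
For example, on a hypothetical SVM named svm_cifs, disabling the offset map (while leaving file history untouched) would look like this:

::> vserver services ndmp modify -vserver svm_cifs -offset-map-enable false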

Directory DAR:
This is the ability to provide a directory name to the backup application such that the backup software EXPANDS the directory contents and recovers each FILE using the DAR process.

To be able to implement Directory DAR, ONTAP needs to record the offset of each file on the backup image. Thus, at restore time the restore application can EXPAND the directory to be recovered, then LOAD the information about the offsets, and perform DAR for all files underneath the specified DIRECTORY.

Enhanced DAR: This is nothing but the combination of FILE HISTORY and the OFFSET map, both enabled. Thus, to benefit from the Enhanced DAR feature, both HIST and the offset map must be enabled.
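
In command terms (reusing the same hypothetical SVM name as above), Enhanced DAR simply means keeping the offset map enabled rather than turning it off:

7-mode> options ndmpd.offset_map.enable on
::> vserver services ndmp modify -vserver svm_cifs -offset-map-enable true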

Tuesday 4 April 2017

Puppet password reset



Problem : Forgot puppet password?


Answer: Follow these steps:
1. Create a new user with ROLE: Admin to get a foot in the door.
2. Then go to the admin tools and reset the password for peadmin or root.

Step 1:
[root@redhatcentos /]# cd /opt/puppet/share/puppet-dashboard
[root@redhatcentos puppet-dashboard]# /opt/puppet/bin/bundle exec /opt/puppet/bin/rake -f /opt/puppet/share/console-auth/Rakefile db:create_user USERNAME="test" PASSWORD="whateveryoulike" ROLE="Admin" --trace



Step 2:

Once logged in, go to admin tools and reset the password.




Monday 3 April 2017

NDMP CAB extension and how it really works

What do we mean by the NDMP CAB extension, and how does it really work?

CAB stands for Cluster Aware Backup, and the extension is nothing but a software feature that allows mutual communication between the backup software [DMA] and the NDMP server, so that rational decisions can be made with respect to the locality of the hosted VOLUMEs and the TAPE drives.

How does backup software such as CommVault implement CAB?

DMA informs = A DMA implementing the CAB extension will notify the NDMP server about the VOLUMEs to be backed up even before a data connection has been established. This happens during NDMP control connection establishment.

NDMP server responds = This enables the NDMP server to identify the NODE on which the VOLUME is hosted, so that when the DMA does establish the data connection, the NDMP server can ensure that it is established from the appropriate NODE hosting the VOLUME.

The CAB term is applicable to NDMP SVM-scope mode only. NetApp clustered Data ONTAP has two modes of operation:

1. Node-scope mode
2. SVM-scope mode [CAB]

In Node-scope mode = It is flat and simple: you can only back up VOLUMEs hosted on the particular NODE; you cannot see other VOLUMEs that are sitting on other NODEs in the cluster.

In SVM-scope mode = The entire cluster is available for backup; in other words, all the VOLUMEs are exposed irrespective of where they are hosted.
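
As a quick reference (a sketch, not specific to any one DMA), the cluster-wide node-scope setting can be checked and, if needed, turned off so that NDMP runs in SVM-scope mode, and NDMP can then be enabled on the SVM (the SVM name is a placeholder):

::> system services ndmp node-scope-mode status
::> system services ndmp node-scope-mode off
::> vserver services ndmp on -vserver <svm_name>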

How does the CAB extension in SVM-scope mode help in exposing the VOLUMEs and the TAPE drives to the DMA?

As mentioned above, in the case of SVM-scope NDMP, because the VOLUMEs and TAPEs discovered can be hosted on different NODEs, the CAB extension exposes a 'UID' for each VOLUME and TAPE, and this is what is called 'AFFINITY' information. If the 'AFFINITY' values of a VOLUME and a TAPE match, it can be inferred that that specific VOLUME and TAPE are hosted on the same NODE. This allows a DMA to drive a LOCAL backup.

The situation can get a little murky here. This is because in SVM-scope mode there are multiple LIF types available, and depending on which LIF type you connect to for the NDMP control connection, you get a different VISIBILITY scope.

The visibility of VOLUMEs and TAPEs is determined by the SVM context in which the NDMP control connection is made, and by the LIF type on which the NDMP control session is established.

The following rules apply when the DMA supports the CAB extension.



In general, the default rules work fine, but they may need tweaking in some scenarios; the default settings for the ADMIN SVM and DATA SVMs are shown in the screenshot below.
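
(As an aside, the effective per-SVM NDMP settings can also be checked from the CLI; the SVM name here is a placeholder.)

::> vserver services ndmp show -vserver <svm_name>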



Sunday 2 April 2017

How to remove an SVM from the CommVault Cluster NAS Client SVM Tab

There are situations where an SVM detected under the Cluster NAS client is shown incorrectly or for some reason needs a refresh.


I remember a case from my time at CommVault last year, in 2016, wherein a customer had a Metro Cluster in place; after performing a switchover from the Destination -> Source Metro Cluster, IntelliSnap [Hardware Snap] operations failed.

The solution turned out to be very simple in that case:
1. Remove the SVM [the one that failed the snap operation] from the Source Metro Cluster client.
2. Detect and add it again.

I no longer work for CommVault, so I am partly scratching my head trying to remember this case, but I am guessing the logic was this: after the switchover from the Destination MC to the Source MC, the destination SVM with the 'mc' suffix becomes active as expected, and I suppose it should then be serving data to clients from DC1. The idea behind switching over is to simulate a real-life fail-over scenario.

So, ideally SVM_DC2-mc should now show as 'ACTIVE' under the Source MC NAS Cluster Client, but because the switchover mechanism is purely a NetApp process, there is a very good chance that the switchover did not register with the CommVault DB. It may simply need a refresh.

Therefore, on the CommVault side, that refresh is what I call the 'SVM Remove & Detect' process.

Steps:
1. Go to the Source Metro Cluster NAS Cluster Client properties | SVM Tab | and make a note of the SVM that you would like to remove in order to refresh it.



2. Now, go to the specific SVM NAS Client properties | and uncheck the box and click ok.



3. To confirm, go back to the Source Metro Cluster NAS client properties | under SVM Tab, now you should see the SVM missing.



4. Finally, just perform the detect and re-add it.




Attempt another IntelliSnap operation, and I wish you good luck! If there is no luck, get in touch with CommVault Support for a CVLT.NET session ;)

How SAN Protocols - iSCSI & FC - interact with a NetApp Storage Array

                            "A picture is worth a thousand words"