Friday 31 March 2017

NDMP DUMP methods, walking inode file Vs logical traversal

There are two methods dump uses to determine which files go into a dump: the "walking inode file" path and the "logical traversal" path.

Walking inode file: This approach goes through all the inodes in the inode file and decides which files go into the dump.

Logical traversal: This approach does a logical traversal of the subtree to be dumped. When dumping a qtree, the first approach (walking the inode file) is used by default.

However, under certain conditions, a "walking inode file" pass becomes more expensive than a "logical traversal".

For example, when the volume has a large number of used inodes and the qtree is very small relative to the size of the volume it resides in.

In clustered Data ONTAP or ONTAP 9, you can determine which method dump is currently set to use by entering the following command:

cluster_ontap_9::*> vserver services ndmp show -vserver vserver_name




The switch that selects between the two methods is shown in the figure below.
switch = [-dump-logical-find <text>] (privilege: advanced)



The option mentioned above specifies whether to follow an inode-file walk or a tree walk for phase I of the dump. The choice of inode-file walk or tree walk affects the performance of the dump.

The [-dump-logical-find <text>] option can take the following values:

If default is specified [which is the default setting], then level 0 and incremental dumps of volumes as well as qtrees will use the inode-file walk.

If always is specified, all dumps, including subtree dumps, will use the tree walk.
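
To change the method, the same NDMP option can be modified at the advanced privilege level. A minimal sketch, assuming the vserver is named vserver_name (verify the parameter on your release before relying on it):

cluster_ontap_9::> set -privilege advanced
cluster_ontap_9::*> vserver services ndmp modify -vserver vserver_name -dump-logical-find always
cluster_ontap_9::*> vserver services ndmp show -vserver vserver_name -fields dump-logical-find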

Thursday 30 March 2017

Vserver tunneling in Cluster Mode Ontap with CommVault

What is Vserver tunnelling in Cluster Mode Ontap with CommVault Array Management GUI?

The mechanism of accessing Vserver APIs through a cluster-management interface is called Vserver tunnelling. Data ONTAP responds to a tunnelled API based on the tunnel destination, the target interface, and the API family.

For example:
Data ONTAP Vserver APIs can be executed if they are sent through a cluster-management LIF to the admin Vserver.
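
For illustration only, this is roughly what a tunnelled ZAPI call looks like; the endpoint path, API version, and the vfiler tunnelling attribute are assumptions based on the NetApp Manageability SDK and may differ on your release:

curl -k -u admin:password -H "Content-Type: text/xml" -X POST https://<cluster-mgmt-lif>/servlets/netapp.servlets.admin.XMLrequest -d '<netapp version="1.21" xmlns="http://www.netapp.com/filer/admin" vfiler="vserver_name"><volume-get-iter/></netapp>'

Because the request arrives on the cluster-management LIF but names the Vserver, Data ONTAP executes the API in the context of that Vserver.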


NDMP restartable backup supported with ONTAP 9.0RC1 onwards with CommVault v11

Unlike Data ONTAP operating in 7-Mode, clustered Data ONTAP did not support the NDMP Backup Restart Extension until ONTAP 9.

However, this has changed: restartable dump backup is now supported with clustered Data ONTAP 9.0RC1 onwards and CommVault v11, as shown in the figure below.

                                                     CommVault v11



CommVault v10


The following versions do not support DUMP restart:
Clustered Data ONTAP 8.3.x
Clustered Data ONTAP 8.2.x
Clustered Data ONTAP 8.1.x
Clustered Data ONTAP 8.0.x

How it works:
A dump backup sometimes does not finish because of internal or external errors, such as tape write errors, power outages, accidental user interruptions, or internal inconsistency on the storage system. If your backup fails for one of these reasons, you can restart it. You can choose to interrupt and restart a backup to avoid periods of heavy traffic on the storage system or to avoid competition for other limited resources on the storage system, such as a tape drive. You can interrupt a long backup and restart it later if a more urgent restore (or backup) requires the same tape drive.

Restartable backups persist across reboots. You can restart an aborted backup to tape only if the following conditions are true:
1. The aborted backup is in phase IV.
2. All of the associated Snapshot copies that were locked by the dump command are available.
3. File history is enabled.
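
On the ONTAP side, the restartable backup contexts can be listed and cleaned up from the cluster shell. A minimal sketch (command availability and fields may vary by release; the context ID shown is hypothetical):

cluster_ontap_9::> vserver services ndmp restartable-backup show
cluster_ontap_9::> vserver services ndmp restartable-backup delete -vserver vserver_name -context-id <context-id>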

How to disable NDMP File History for CommVault DMA

This article applies to NetApp FAS (7-Mode and cDOT) and CommVault.

What is file history and how is it communicated?
File history is generated during a Network Data Management Protocol (NDMP) backup of a volume hosted on NetApp storage using the dump engine. File history enables a Data Management Application (DMA) to build an index database over all the files in a backup. This database enables users to locate which backup contains a particular file, when that file was modified, and other useful metadata.

The File History feature provides two benefits:
1. It provides a human-readable user interface to the backup data.
2. It provides a basis for Direct Access Recovery (DAR). DAR allows a DMA to access files and directories directly on tape without having to traverse the entire backup, which allows for quicker file and directory recovery operations.

During what PHASE does file history generation occur?
File history generation occurs in phases 3 and 4 of the dump process.

File history can also lead to slower backups in certain environments due to 'backup pressure', as it is known in the NetApp world.
In general, file history adds overhead to an NDMP backup. A backup will typically run faster with file history disabled than with full file history enabled, even when there are no other performance issues. This is due to the processing overhead required to generate, communicate, and ingest the additional data.

Three main causes of slow NDMP backups with FILE HIST = T (file history turned on):
1. Lack of computing resources on the DMA [physical/VM] - DMA = Media Agent
2. Slow disk performance of the Index drive.
TIPS:
Put the Index Cache on SSD
Put the Job Results directory on SSD
<>:\Program Files\CommVault\Simpana\iDataAgent\JobResults
3. Slow/lossy network between the DMA & the NDMP server.

I would like to disable FILE HIST temporarily for NDMP backups with my CommVault DMA; how can I do so?
It's simple: just add the following registry key to your DMA [Media Agent] server.

Please note: Tested on v10, but should work on v11 as well.

Under the 'NAS' key, create another key (not a value) named "BackupHostOptions":

HKEY_LOCAL_MACHINE\SOFTWARE\CommVault Systems\Galaxy\Instance001\NAS\BackupHostOptions

In that key, add a string value whose name is exactly the name of the NAS client and set it to HIST=N, as shown in the figure below.
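
Equivalently, the value can be created from an elevated command prompt. A minimal sketch, assuming a NAS client named FILER1 and the default Instance001 install path (adjust both to your environment):

reg add "HKEY_LOCAL_MACHINE\SOFTWARE\CommVault Systems\Galaxy\Instance001\NAS\BackupHostOptions" /v FILER1 /t REG_SZ /d HIST=N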


Wednesday 29 March 2017

NDMP dump backup and NDMP dump Levels

NDMP dump backup and NDMP Levels on NetApp FAS

Dump is a Snapshot copy-based backup and recovery solution used by NetApp to back up files and directories from a Snapshot copy to a Disk/tape device.

You can back up your file system data, such as directories, files, and their associated security settings by using the dump backup. You can back up an entire volume, an entire qtree, or a subtree that is neither an entire volume nor an entire qtree.

When you perform a dump backup, you can specify the Snapshot copy to be used for a backup. If you do not specify a Snapshot copy for the backup, the dump engine creates a Snapshot copy for the backup, and after the backup operation is completed, the dump engine deletes this Snapshot copy.

NDMP = Mechanism + Protocol
M= dump, tar, cpio
P= TCP/IP + XDR

NetApp uses dump.

Ontap 7-mode dump Levels:
With ONTAP 7-Mode, you can perform level-0 full, incremental [1-9], or differential [level 1] backups to tape/disk by using the dump engine.

Level 0 or Full Backup:
A Full Backup provides a backup of all the data in the selected path.

Level 1 through 9 or Incremental Backup:
Incremental backups are based on the most recent lower-level backup and include any data that has changed or is new since that full or incremental backup.

The maximum number of consecutive incremental backups permitted after a full backup is 9, as shown below.


After a differential backup, the maximum is 8, as shown below.
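
A quick illustration of how the levels chain together (the days and levels below are only an example):

Sun - level 0 : full backup of everything
Mon - level 1 : everything changed since the level 0
Tue - level 2 : everything changed since the level 1
Wed - level 3 : everything changed since the level 2
Sun - level 1 : everything changed since the last level 0 (a differential of the whole week)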




Dump levels on clustered Data ONTAP 8.3 onwards:

Clustered Data ONTAP 8.3 onwards supports 32 levels of dump backups.

Level 0 is a full backup.
Levels 1 through 31 are incremental backups.

The maximum number of consecutive incremental backups permitted after a full backup is 31, as shown below.



Please note: For Data ONTAP versions prior to 8.3, the maximum number of consecutive incremental backups permitted after a full backup is 9; after a differential backup, the maximum is 8, just like 7-Mode.

For IntelliSnap [SnapDiff] NAS backups there is no such limitation; whatever the volume Snapshot limit is applies. For NetApp systems, a volume can have a maximum of 255 Snapshot copies.

Tuesday 28 March 2017

How to collect NetApp cluster ontap logs

The following instructions apply to both clustered Data ONTAP 8.x and ONTAP 9.x.

Collecting logs in clustered Data ONTAP is made very easy with GUI access.

1. Identify the cluster-management LIF IP of your cluster. If you don't know it, simply run this command:

cluster_ontap_9::> network interface show
cluster_ontap_9
               cluster_mgmt up/up    192.168.0.240/24   cluster_ontap_9-02  e0d     false
                                                                 
In this case, the cluster_mgmt LIF = 192.168.0.240

2. Open any browser window and type the cluster-management LIF IP with /spi as the suffix [SPI = Service Processor Infrastructure web service], as shown below.

http://192.168.0.240/spi, then enter the admin user name and password.



3. Now you should see the cluster's physical nodes and the corresponding logs links.



4. Simply click on a node's Logs link to fetch the log files.



For NDMP logs, you need to go to the /mlog directory.
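
If the spi page does not open, the web service may need to be enabled and the admin role granted access. A minimal sketch, assuming the cluster admin Vserver is named cluster_ontap_9 (command names and defaults may vary by release):

cluster_ontap_9::> vserver services web modify -vserver cluster_ontap_9 -name spi -enabled true
cluster_ontap_9::> vserver services web access create -vserver cluster_ontap_9 -name spi -role admin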

Sunday 26 March 2017

Flexgroup, Infinite volume and Flexvol

Introduced:
Flexvol             -> In Data Ontap 7 [2005]
Infinite vol        -> In Data Ontap 8.1.1 [2012]
Flexgroup vol   -> In ONTAP 9.1 [2017]

Capacity:
Flexvol: A Flexvol can serve up to 2 billion files to a maximum capacity of 100TB.
Infinite Volume: An Infinite Volume can serve up to 2 billion files to a maximum capacity of 20PB.
Flexgroup vol: A Flexgroup Volume can serve up to 400 billion files to a maximum capacity of 20PB.

Nice to know: The maximum number of constituents for a FlexGroup is 200. Since the maximum FlexVol [the basic unit] volume size is 100TB and the maximum file count for each volume is 2 billion, simple maths gives the figures: 200 x 100TB = 20PB and 200 x 2 billion = 400 billion files.

Physical binding:
Flexvol            : Tied to a single aggregate and a single node.
Infinite vol      : Spans multiple aggregates on multiple nodes.
Flexgroup vol : Spans multiple aggregates on multiple nodes.

Limitation:
Flexvol       : 100TB max size [single Flexvol] - Single Namespace Metadata per Volume
Infinite vol  : 20PB  max size [single Flexvol] - Single Namespace Metadata per Volume
Flexgroup vol  :  20PB  max size [Multiple Flexvols joined together] - No such limitation

Snapshot:
Flexvol              : Single Snapshot copy across the volume.
Infinite Volume  : Single Snapshot copy that runs across a single large-capacity container.
Flexgroup vol     : Multiple Snapshot copies taken at the same time [all succeed or the Snapshot fails] across a single large-capacity container.

Best suited workloads:
Flexvol: For most use cases, FlexVols are perfect.
Infinite Volume: Best suited for workloads that are write once, update rarely, with an average file size >100KB [latency-insensitive workloads].
Flexgroup vol: Best suited for workloads that are heavy on ingest (a high level of new data creation), heavily concurrent, and evenly distributed among subdirectories.

How to create FlexGroup in ONTAP 9.1

ONTAP9::> flexgroup deploy -vserver CIFS_FG -size 10G -type RW -space-guarantee volume -foreground true -volume FG_CIFS

Warning: FlexGroup deploy will perform the following tasks:

 The FlexGroup "FG_CIFS" will be created with the following number of constituents of size 640MB: 16. The constituents will be created on the following aggregates:

aggr0_ONTAP9_01_DATA1
aggr0_ONTAP9_01_DATA2
aggr0_ONTAP9_02_DATA1
aggr0_ONTAP9_02_DATA2

Do you want to continue? {y|n}: y

[Job 62] Job succeeded: The FlexGroup "FG_CIFS" was successfully created on Vserver "CIFS_FG" and the following aggregates: aggr0_ONTAP9_01_DATA1,aggr0_ONTAP9_01_DATA2,aggr0_ONTAP9_02_DATA1,aggr0_ONTAP9_02_DATA2
ONTAP9::>


Observation:
This is a 2-node SIMBOX cluster. It looks like, in order to create a FlexGroup, each node requires a minimum of 2 data aggregates, so 4 in total for a 2-node cluster.

In this example, I created a FlexGroup of volume size 10G, and ONTAP automatically divided the 10G into 16 chunks of 640MB => 16 x 640 = 10240MB = 10GB.

It appears that a FlexGroup requires:
1. A minimum of 2 data aggregates on each node.
2. Division of the volume size (whatever it is) into 16 chunks, 4 in each aggregate, i.e. 10G / 4 = 2.5GB on each aggregate.
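
To see how the constituents were laid out across the aggregates, something like the following should work (a sketch; constituent naming and the available fields may vary by release):

ONTAP9::> volume show -vserver CIFS_FG -is-constituent true -fields aggregate,size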


For more info:

Flexgroup volume  :http://www.netapp.com/us/media/tr-4557.pdf

Infinite volume      :https://www.netapp.com/us/media/tr-4037.pdf

Flexvol volume     :https://www.netapp.com/us/media/tr-3356.pdf

Saturday 25 March 2017

Could not open an NDMP connection to host. Please verify the NDMP server is running on that host when adding NAS iDA to CommVault v10 & 11.

The following error is seen when adding a NAS iDA NetApp 7-Mode client in CommCell v10.



Customer reported: Telnet to port 10000 on the NAS host works.

Telnet to port 10000 is a good troubleshooting step; however, in cases like this, it isn't helpful.

Reason: Telnet to port 10000 only signifies that the NDMP server is listening, but we need to find out why it is refusing to speak and reporting this error: 'Please verify NDMP server is running on that host'.

Cause: The NDMP server [FILER] is definitely listening on port 10000, but it is refusing connection requests because its NDMP memory pool is full.

Initial troubleshooting steps:
Ensure NDMP is turned on.
FILER>options ndmpd.enable
ndmpd.enable           on

That indicates the NDMP server is definitely listening; however, there is something preventing the communication.

First clue, as reported in EvMgrS.log: Connection refused by the NDMP server



This is the first clue, which gives us an indication that the NDMP server is unable to accept any more connections.

Second clue, look for stale NDMP sessions on the filer:
FILER>backup status
This will tell you whether there are any existing NDMP backups running on the filer. In this case, there were none.
FILER>ndmpd status

In this particular case, we found there were a lot of stale NDMP sessions sitting idle and doing nothing; in other words, they were simply holding on to NDMP memory and hence preventing NDMP from accepting new connections.

Please note: If there is nothing in the backup status output, then you can safely kill all the stale NDMP sessions that are shown in the ndmpd status output.

Solution:
1. Kill all the stale sessions.
FILER>ndmpd killall

2. Turn NDMP off and on again [this step may help in some cases, but it is not always necessary].
FILER>options ndmpd.enable off
FILER>options ndmpd.enable on

3. Try to add the NDMP NAS iDA client once again; this time it should succeed.


Please note: For more information on the available ndmpd commands, simply type 'ndmpd' at the console on 7-Mode:
FILER> ndmpd
usage:  ndmpd [on|off|status|probe [<session #>]|kill <session #>|killall|

Port usage during out-of-place restores using CommVault IntelliSnap & NDMP for v10 & v11

NetApp CDOT: [IntelliSnap Restore]

Out-of-place IntelliSnap restore to 'LINUX', log files to look at:

On CommServe:
File    : CVNasSnapRestore.log
File    : fsIndexedRestore.log
File    : JobManager.log
File    : StartRestore.log


On LINUX BOX:
File    : CVNRDS.log
Location: /opt/simpana/iDataAgent/CVNRDS

Source & destination machines during IntelliSnap restore: out of place
Source: FILER
Dest  :   LINUX

CVNasSnapRestore.log: Ports during Data connection establishment
6968  187c  03/30 21:30:22 9471 ndmp_v4.cpp 2532 NDMP_DATA_LISTEN: successful
6968  187c  03/30 21:30:22 9471 Connect Data Servers() - Sending CAB prepare to destination ...
6968  187c  03/30 21:30:22 9471 Connect Data Servers() - Sending data_connect to destination ...
6968  187c  03/30 21:30:22 9471 ndmp_v4.cpp 2143 NDMP_DATA_CONNECT(tcp):
6968  187c  03/30 21:30:22 9471 ndmp_v4.cpp 2147 --- address[0xc0a80064] port[18601] <---- On FILER
6968  187c  03/30 21:30:22 9471 ndmp_v4.cpp 2147 --- address[0xc0a8003c] port[18601] <---- On FILER

[root@redhatcentos Log_Files]# netstat -anp | grep 192.168.0.10
tcp        0      0 192.168.0.25:55564          192.168.0.100:18601         ESTABLISHED 13609/CVNRDS    <-----On LINUX

-----------------------------------------------------------------------------------------------------
NetApp CDOT: [NDMP Restore]

Out-of-place NDMP restore to 'LINUX', log files to look at:

On CommServe:
File    : CVD.log
File    : CVNdmpRemoteServer.log
File    : fsIndexedRestore.log
File    : JobManager.log
File    : MediaManager.log
File    : nasRestore.log
File    : StartRestore.log

On LINUX BOX:
File    : CVNRDS.log
Location: /opt/simpana/iDataAgent/CVNRDS

Source & destination machines during NDMP restore: Out of place
Source: MA
Dest  :  LINUX  

nasrestore.log: Ports during DATA connection establishment:
1436  1338  03/30 21:11:40 9470 ndmp_v4.cpp 2735 NDMP_MOVER_LISTEN: successful
1436  1338  03/30 21:11:40 9470 ndmp_v4.cpp 2742 --- address[0xc0a8000a] port[60700] <--- On MA
1436  1338  03/30 21:11:40 9470 ndmp_v4.cpp 2143 NDMP_DATA_CONNECT(tcp):
1436  1338  03/30 21:11:40 9470 ndmp_v4.cpp 2147 --- address[0xc0a8000a] port[60700] <----On MA
1436  1338  03/30 21:11:40 9470 ndmp_v4.cpp 2192 NDMP_DATA_CONNECT: successful

Ports on  MA:192.168.0.10
C:\Windows\system32>netstat -anp tcp | findstr 192.168.0.25
  TCP    192.168.0.10:59117     192.168.0.25:42372     ESTABLISHED
  TCP    192.168.0.10:60341     192.168.0.25:8402      ESTABLISHED
  TCP    192.168.0.10:60523     192.168.0.25:57990     ESTABLISHED
  TCP    192.168.0.10:60539     192.168.0.25:59445     ESTABLISHED
  TCP    192.168.0.10:60610     192.168.0.25:34059     ESTABLISHED
  TCP    192.168.0.10:60700     192.168.0.25:40163     TIME_WAIT   <-----Data Pipe connection

Ports on Linux:192.168.0.25
[root@redhatcentos Log_Files]# netstat -anp | grep 192.168.0.10
tcp        0      0 192.168.0.25:8400           192.168.0.10:60655          TIME_WAIT   -                  
tcp        0      0 192.168.0.25:34059          192.168.0.10:60610          ESTABLISHED 3134/cvd           
tcp        0      0 192.168.0.25:40163          192.168.0.10:60700          ESTABLISHED 12117/CVNRDS    <-----Data Pipe connection   

Thursday 23 March 2017

Global namespace: a game changer for clustered Data ONTAP and an edge over the retiring 7-Mode ONTAP.

A global namespace enables NAS clients to access data scattered across different physical locations using a single namespace. A share could originate from any node, any aggregate, or any path.

Simplest definition of the term 'namespace' in clustered Data ONTAP: a namespace is a logical grouping of different volumes joined together at junction points to create a 'single logical volume'.

Limitation with ONTAP 7-Mode:
Shares are physically mapped to a server name or IP address, which makes it difficult to scale out and complex to remember and manage thousands of volumes, as there is no way to join the volumes to the root volume.

Advantage of c-mode/clustered ONTAP/simply ONTAP as of version 9:
In Cluster-Mode, NAS clients can use a single NFS mount point or CIFS share to access a namespace of potentially thousands of volumes. The root volume for a Vserver namespace contains the paths where the data volumes are junctioned into the namespace.

In Cluster-Mode,  ONTAP can create a very large data container, a single namespace, for many volumes.

NAS clients can access data anywhere in the namespace using a single NFS mount point or CIFS share.

What it means to end users: users do not have to remember all the names and locations of the shares; they just need to remember the SVM name.


What it means to storage admins: easy to scale out, the container can easily be enlarged, and upgrades and operations are non-disruptive.


Security-wise: storage admins can secure each volume share so that it is only available to legitimate consumers.


A global namespace is basically achieved by joining the volumes together with junctions.

1. The root volume of a Vserver serves as the entry point to the namespace provided by that Vserver.

All other 'data' volume shares, sitting on different aggregates on different nodes, get junctioned at the root volume '/', like directories, as shown in the example:
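
A minimal sketch of how a data volume gets junctioned into the namespace at creation time (the volume, aggregate, and SVM names here are hypothetical):

cluster_ontap_9::> volume create -vserver NFS_N2 -volume data_vol1 -aggregate aggr1_node2 -size 10g -junction-path /data_vol1
cluster_ontap_9::> volume show -vserver NFS_N2 -fields junction-path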


2. In the unlikely event that the root volume of a Vserver namespace is unavailable, NAS clients cannot access the namespace hierarchy and therefore cannot access data in the namespace.

WARNING: If the root volume is offline, CIFS and NFS share access stops. This is where the load-sharing mirror comes in.

For this reason, it is a NetApp best practice to create a load-sharing mirror for the root volume on each node of the cluster so that the namespace directory information remains available in the event of a node outage or failover.

Example of creating a load-sharing mirror:
cluster_ontap_9::> snapmirror create -source-path //NFS_N2/NFS_N2_root -destination-path //NFS_N2/NFS_N2_root_mirror -type LS -schedule hourly
[Job 609] Job succeeded: SnapMirror: done

cluster_ontap_9::*> snapmirror show -type LS
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
cluster_ontap_9://NFS_N2/NFS_N2_root
            LS   cluster_ontap_9://NFS_N2/NFS_N2_root_mirror
                              Uninitialized
                                      Idle           -         -       -
cluster_ontap_9::*> snapmirror initialize-ls-set -source-path
    CIFS_ONTAP9_N1:         CIFS_ONTAP9_N1:<volume> CIFS_ONTAP9_N2:
    CIFS_ONTAP9_N2:<volume> NFS_N2:                 NFS_N2:<volume>
cluster_ontap_9::*> snapmirror initialize-ls-set -source-path //NFS_N2/NFS_N2_root
[Job 611] Job is queued: snapmirror initialize-ls-set for source "cluster_ontap_9://NFS_N2/NFS_N2_root".

cluster_ontap_9::*> snapmirror show -type LS
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
cluster_ontap_9://NFS_N2/NFS_N2_root
            LS   cluster_ontap_9://NFS_N2/NFS_N2_root_mirror
                              Snapmirrored
                                     Idle           -         true    -
cluster_ontap_9::*>
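
After namespace changes such as new junctions, the LS mirror set can also be refreshed on demand rather than waiting for the schedule (a sketch using the same source path as above):

cluster_ontap_9::*> snapmirror update-ls-set -source-path //NFS_N2/NFS_N2_root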

Tuesday 21 March 2017

What is ONTAP 9, ONTAP select and ONTAP Cloud ?

These are the three variants of the ONTAP 9 software, introduced with the ONTAP 9 release. Please note: NetApp's flagship OS "Clustered Data ONTAP" is now simply called "ONTAP", and the new release is called "ONTAP 9".

Ontap 9 = FAS systems [controller + NetApp disk shelves] - true traditional HA, no mirroring of aggregates, as each node sees the partner's disks.

Ontap 9 Select = commodity hardware [software-defined version of ONTAP, runs on top of non-NetApp storage] - emulated HA with mirroring of aggregates.

Ontap 9 Cloud = Amazon, Azure [ONTAP OS as a service] - cloud HA - mirroring of aggregates.

My curiosity was mainly focused on this question: how does HA work with commodity hardware [private cloud] and cloud-based [public cloud] storage?

Let's try to understand the difference between traditional HA and ONTAP Cloud/Select HA.

Traditional HA: This HA only applies to FAS systems where ONTAP runs on top of NetApp controller-attached disk shelves.

The basic concept behind traditional HA is that both controllers see the disks; in other words, you attach each controller to:
1. Its own disk shelves
2. The partner's disk shelves

Plus, there is an HA NVRAM interconnect [InfiniBand] that continuously mirrors the partner's NVRAM log.

NetApp FAS Arrays [Basically NetApp Provided Storage] use specialized hardware to pass information between HA pairs in an ONTAP cluster.

Software-defined environments [ONTAP Select], however, do not tend to have this type of equipment available (such as InfiniBand), so an alternative solution is needed. Although several possibilities were considered, the requirements ONTAP places on the interconnect transport meant that this functionality had to be emulated in software.

As a result, within an ONTAP Select cluster, the functionality of the HA interconnect (traditionally provided by hardware) has been designed into the OS, using Ethernet as a transport mechanism.

For ONTAP select HA, you can read this TR:
http://www.netapp.com/us/media/tr-4517.pdf

Cloud HA: Here, again, there is no NetApp hardware or controller, just the ONTAP software as a service. Hence, there is no question of cabling disk shelves to each other, because the storage is provided by the cloud provider.

The basic concept behind Cloud/Select HA is that storage is not shared between nodes. Instead, data is synchronously mirrored between the nodes so that it is available in the event of a failure. Basically, additional storage space is needed for the mirroring.

Example - When you create a new volume, Cloud Manager allocates the same number of disks to both nodes, and creates a mirrored aggregate, and then creates the new volume. For example, if two disks are required for the volume, Cloud Manager allocates two disks per node for a total of four disks.

Note: Clients should access volumes by using the floating IP address of the node on which the volume resides. If clients access a volume using the floating IP address of the partner node, traffic goes between both nodes, which reduces performance.

For ONTAP Cloud HA:
https://library.netapp.com/ecmdocs/ECMLP2484726/html/GUID-62F55FF3-9D4A-4C77-8F1D-C0CB7268051B.html

For Ontap 9 features:
https://whyistheinternetbroken.wordpress.com/2016/06/23/ontap9rc1-available/

For Ontap 9.1 features:
https://whyistheinternetbroken.wordpress.com/2017/01/12/ontap-91-ga/

Please note: The information provided on this subject is based purely on my understanding; for corrections, please feel free to leave your comments.

Monday 20 March 2017

How to force RedHat Linux operating system to recognize Disks that are added or removed from the Fabric.

How to force the Red Hat Linux operating system to recognize disks that are added to or removed from the fabric.

There are several methods that you can use to force the Linux operating system to recognize disks that are added or removed from the fabric.

These are the two most common techniques [they apply to both FC and iSCSI LUNs]:


1. Rescan the SAN by restarting the host.
Result: A bus rescan is automatically performed when the system restarts. However, this option may not be practical or permitted.

2. Rescan the SAN by echoing the /sys filesystem

For Linux 2.6 kernels, a rescan can be triggered through the /sys interface without having to unload the host adapter driver or restart the system.

The following command format scans all channels, targets, and LUNs on a given host#:
echo "- - -" > /sys/class/scsi_host/host#/scan

Use the command shown in the figure below to obtain the host#; this example is for an iSCSI LUN.
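
Alternatively, the host# can be found directly from the shell with either of these (a sketch; the output format varies by distribution):

ls /sys/class/iscsi_host/
iscsiadm -m session -P 3 | grep -i "host number"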



Once the host# is identified, run this command.
echo "- - -" > /sys/class/scsi_host/host3/scan

This should attach the LUN as a SCSI sd disk.

Please note: I could have done an iscsiadm -m session --rescan, but it is more practical to perform a targeted manual scan using 'echo' rather than rescanning every session, because a single target may have multiple logical units and/or portals.

In the case of FC [LUN] HBAs, use 'fc_host' to determine the FC HBAs first:
# ls /sys/class/fc_host
host0  host1

In this case, you need to scan the host0 & host1 HBAs using the same command as mentioned above:
# echo "- - -" > /sys/class/scsi_host/host0/scan
# echo "- - -" > /sys/class/scsi_host/host1/scan

Sunday 19 March 2017

How to present NetApp Cluster Mode LUN to Redhat Linux via iSCSI Protocol

On the SVM side: I assume the iSCSI license is added and the iSCSI service is up and running.
1. Carve out a volume, or let it be created during the LUN creation process via the GUI.
2. Create a LUN, create an igroup, add the 'Red Hat Linux' initiator to this igroup, and make sure the LUN shows as online and mapped.

Note: Use this command to obtain the Red Hat initiator name:
[root@redhatcentos Desktop]#cat /etc/iscsi/initiatorname.iscsi
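
For reference, the SVM-side LUN, igroup, and mapping steps can also be done from the cluster shell. A minimal sketch with hypothetical SVM, volume, igroup, and initiator names:

cluster_ontap_9::> lun create -vserver svm_iscsi -path /vol/lun_vol/lun1 -size 1g -ostype linux
cluster_ontap_9::> lun igroup create -vserver svm_iscsi -igroup rhel_ig -protocol iscsi -ostype linux -initiator iqn.1994-05.com.redhat:example
cluster_ontap_9::> lun map -vserver svm_iscsi -path /vol/lun_vol/lun1 -igroup rhel_ig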

On the Red Hat side: Just remember these three letters - D, L, S [Discovery, Login & Scan].
[root@redhatcentos Desktop]# iscsiadm -m discovery -t st -p 192.168.0.29:3260
[root@redhatcentos Desktop]# iscsiadm -m node -l
Note: If you see any old stale connections, just get rid of them.
[root@redhatcentos Desktop]# iscsiadm -m node -o delete -T iqn.1992-08.com.netapp:sn.4082367740
[root@redhatcentos Desktop]# iscsiadm -m session --rescan
Rescanning session [sid: 1, target: iqn.1992-08.com.netapp:sn.34636feba74e11e6a5bb000c2900a32a:vs.2, portal: 192.168.0.29,3260]

Finally, run the following command to see the LUN:
[root@redhatcentos Desktop]# multipath -ll
3600a09807770457a795d4a6736723230 dm-0 NETAPP,LUN C-Mode
size=1.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=4 status=active
  `- 3:0:0:0 sdb 8:16 active ready running
[root@redhatcentos Desktop]#

If you don't have the multipath driver module installed, you can use this command:
[root@redhatcentos Desktop]# lsscsi

Saturday 11 March 2017

List of key articles Published so far on Slideshare

List of key articles Published so far

1. CommVault data protection transition from data Ontap 7 mode to clustered data Ontap.
2. FAQ on SnapDrive for UNIX
3. NetApp Ontap Simulator
4. CommVault v10-SP14 enables browse functionality for Clustered Ontap NetApp
5. Browse capability on NetApp Ontap from CommVault v10 & v11
6. SnapDiff detailed
7. Multiple array management entries for the same array cannot be added
8. Systemshell on Clustered ONTAP 8.3RC1 Simulator
9. Microsoft ODX (Offloaded Data Transfers)
10. SnapDrive for UNIX storage wizard create times out
11. ALUA [Asymmetric Logical Unit Access]
12. How to install puppet agent on windows
13. Block reclamation
14. How to extend ESXi VMFS datastore on NetApp iSCSI storage
15. How to join vmware esxi to domain
16. Difference between standlone hyper-v vs role based
17. Backup workflow for SMHV on windows 2008R2 HYPER-V
18. How to extend partition for windows 2003 vm in hyper v
19. How to view common mini-filter file system driver
20. No snapshot backup relationships are found in the registered storage systems
21. Destination is in use during vol copy [NetApp]
22. FAQ on Dedupe NetApp
23. Troubleshooting CPU bottleneck issues netapp
24. FAQ on NetApp c-mode terms
25. Exiting; no certificate found and waitforcert is disabled
26. Tool : Sysctl
27. iscsid remains stopped in redhat EL 6
28. KDC reply did not match expectations while getting initial credentials
29. HYPERV-2012-LIVE_MIGRATION-ERROR-0x80090303
30. What is NetApp system firmware
31. NetApp Disk firmware update
32. unable to validate host : NetApp management console
33. Smhv snapinfo diretory path not valid error
34. How to schedule snapdrive space reclamation NetApp
35. How netapp dedupe works
36. Difference between LUN and igroup os type in NetApp world
37. How to identify storage shelf type for NetApp
38. NetApp cluster failover giveback: FAQ
39. NetApp storage efficiency dashboard
40. OSSV [Open System SnapVault]
41. Firmware upgrade on netapp filer
42. Linux boot process: Red Hat
43. Ways to access ntfs from linux
44. Understanding storage das-nas-san

For download, kindly get in touch @ : http://www.slideshare.net/AshwinPawar/

Thursday 9 March 2017

DELL ate EMC, and now HPE ate Nimble; NetApp is the only storage startup to stay independent and stand alone for more than 25 years. Simply called: the world's No. 1 storage operating system company.

NIMBLE storage to be bought by HPE for $ 1.09 Billions.
https://www.wsj.com/articles/hp-enterprise-to-acquire-nimble-storage-for-about-1-billion-1488890704

NIMBLE could not sustain itself independently. Its shares had plummeted since its NASDAQ debut in 2013, from 21 to 8 dollars, and jumped to $12.5 after this announcement.

Similarly, EMC was bought out by Dell a couple of years back.

NetApp is clearly the winner here. Even after 25 years, they are still on their own and standing tall. Though they acquired some companies to strengthen their flash portfolio, they never allowed their own identity to be sold. You cannot sell your soul, and perhaps this is what makes NetApp different from other vendors.

Hopefully, NetApp continues to battle the other two giants [Dell & HPE] on its own and keeps churning out quality products and quality support. Though I know their support is not great, and there is plenty of room for improvement in this area.