Thursday, 9 March 2017

Dell ate EMC, and now HPE has eaten Nimble; NetApp is the only storage startup to stay independent and stand alone for more than 25 years. Simply called the world's No. 1 storage operating system company.

Nimble Storage is to be bought by HPE for $1.09 billion.
https://www.wsj.com/articles/hp-enterprise-to-acquire-nimble-storage-for-about-1-billion-1488890704

Nimble could not sustain itself independently. Its shares plummeted after its 2013 NASDAQ debut, from $21 to $8, and jumped to $12.5 after this announcement.

Similarly, EMC was bought out by Dell a couple of years back.

NetApp is clearly the winner here. Even after 25 years, they are still on their own and standing tall. Though they acquired some companies to strengthen their flash portfolio, they never allowed their own identity to be sold. You cannot sell your soul, and perhaps this is what makes NetApp different from other vendors.

Hopefully, NetApp continues to battle the other two giants [Dell & HPE] on its own and keeps churning out quality products and quality support. Though I know their support is not great, and there is plenty of room for improvement in this area.

Friday, 26 July 2013

Offloaded Data Transfers (ODX) support with NetApp

Microsoft Offloaded Data Transfer (ODX), also known as copy offload, enables direct data transfers within or between compatible storage devices without transferring the data through the host computer.

Standard reads and writes work well in most scenarios, but what if the data to be copied sits on virtual disks managed by the same storage array in the backend? In that case the data is moved out of the array, onto a server, across a network transport, onto another server, and back into the same array once again. The act of moving data within a server and across a network transport can significantly impact the availability of those systems; not to mention that the throughput of the data movement is limited by the throughput and availability of the network.

Support for ODX starts with Windows Server 2012 and Windows 8. Applications can now take advantage of these capabilities to offload the process of data movement to the storage subsystem. Two new FSCTLs (FSCTL_OFFLOAD_READ and FSCTL_OFFLOAD_WRITE), introduced in Windows Server 2012 and Windows 8, facilitate offloading the data transfer.

This shifts the burden of bit movement away from servers to bit movement that occurs intelligently within the storage subsystems. The best way to visualize the command semantics is to think of them as analogous to an unbuffered read and an unbuffered write.
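To make those command semantics concrete, here is a minimal Python sketch of the token-based handshake. This is a toy simulation only; the class and method names are my own invention and are merely analogous to the FSCTL_OFFLOAD_READ/FSCTL_OFFLOAD_WRITE pair, not the real Windows API:

```python
import uuid

class StorageArray:
    """Toy model of an ODX-capable array: an offload read returns a token
    that represents the data, and an offload write moves the data entirely
    inside the array -- no bulk bytes ever cross the host."""

    def __init__(self):
        self.luns = {}      # lun_name -> bytearray
        self.tokens = {}    # token -> (lun_name, offset, length)

    def offload_read(self, lun, offset, length):
        # Analogous to FSCTL_OFFLOAD_READ: hand back a token, not data.
        token = uuid.uuid4().hex
        self.tokens[token] = (lun, offset, length)
        return token

    def offload_write(self, dst_lun, dst_offset, token):
        # Analogous to FSCTL_OFFLOAD_WRITE: the array copies internally.
        src_lun, src_offset, length = self.tokens.pop(token)
        data = self.luns[src_lun][src_offset:src_offset + length]
        self.luns[dst_lun][dst_offset:dst_offset + length] = data
        return length

array = StorageArray()
array.luns["lun_a"] = bytearray(b"hello world.....")
array.luns["lun_b"] = bytearray(16)

# Host-side copy: only the small token travels through the server.
t = array.offload_read("lun_a", 0, 11)
copied = array.offload_write("lun_b", 0, t)
print(copied, array.luns["lun_b"][:11].decode())  # -> 11 hello world
```

The point to notice is that the host only ever handles the token; the bulk data never leaves the array.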

Requirements for using ODX with NetApp Storage Array:
Data ONTAP version requirements
Clustered Data ONTAP 8.2 and later releases support ODX for copy offloads.

Related article on 'How to trace ODX transfer':
http://www.slideshare.net/AshwinPawar/odx-42682251

IMPORTANT:
For CIFS environments, SMB 3.0 support is only available in clustered Data ONTAP 8.2; Data ONTAP 8.2 operating in 7-Mode does not support SMB 3.0.

Please note that Data ONTAP supports ODX for both the CIFS and SAN protocols. The source can be either a CIFS server or a LUN, and the destination can be either a CIFS server or a LUN.

Windows server and client requirements
ODX support starts with Windows Server 2012 and Windows 8.

Use cases for ODX
https://library.netapp.com/ecmdocs/ECMP1196784/html/GUID-BAD66DF1-2AB5-4CB2-BF53-068E4B4D94A3.html

Courtesy:
http://msdn.microsoft.com/en-us/library/windows/hardware/dn265282(v=vs.85).aspx#feedback

Understanding the difference between a traditional cluster disk vs. Cluster Shared Volumes (CSV)

I touched base with Windows again after a substantial gap, during the course of the MCSE 2012 Server Infrastructure certification. I got introduced to the various new features and technologies introduced in Server 2012, and the one I liked most is the improvement in CSV 2.0. The whole concept of CSV and its benefits over traditional clustering made it all the more exciting to read about.

Cluster Shared Volumes (CSV) is a feature of Failover Clustering, first introduced in Windows Server 2008 R2 for use with the Hyper-V role. A Cluster Shared Volume is a shared disk containing an NTFS volume that is made accessible for read and write operations by all nodes within a Windows Server Failover Cluster.

In Windows Server 2012, it is further improved and turned into a FULL BLOWN file system. Just like a standard file system, it is now compatible with filter drivers (supporting applications such as antivirus, backup, etc.).

Going back to the main subject :
To understand how Cluster Shared Volumes (CSV) works in a failover cluster, it is helpful to review how a traditional cluster works without CSV. In a traditional cluster, a given disk (LUN) can be accessed by only one node at a time. Given this constraint, each Hyper-V virtual machine in the failover cluster requires its own set of LUNs in order to be migrated or fail over independently of other virtual machines. In this type of deployment, the number of LUNs must increase with the addition of each virtual machine, which makes management of LUNs and clustered virtual machines more complex.

In contrast, on a failover cluster that uses CSV, multiple virtual machines that are distributed across multiple cluster nodes can all access their Virtual Hard Disk (VHD) files at the same time, even if the VHD files are on a single disk (LUN) in the storage. The clustered virtual machines can all fail over independently of one another.

I am still reading up on some of this, and plenty has already been covered in various blogs and on the Microsoft site. I guess all I can do is provide some pointers to the reference material.

What is the role of SMB in Hyper-V Cluster based on CSV?
Cluster Shared Volumes operates by orchestrating metadata I/O operations between the nodes in the cluster via the Server Message Block protocol.

What is Coordinator Node ?
The node that owns the LUN and orchestrates metadata updates to the NTFS volume.

The advantage of CSV over a traditional cluster comes during LIVE MIGRATION.
CSV reduces the potential disconnection period at the end of the migration since the NTFS file system does not have to be unmounted/mounted as is the case with a traditional cluster disk.

What is 'single name space' term used in CSV?
CSV builds a common global namespace across the cluster using NTFS reparse points. Volumes are accessible under the %SystemDrive%\ClusterStorage root directory from any node in the cluster.

How do I create CSV?
There is nothing different that needs to be done for CSV. CSV supports iSCSI, Fibre Channel, and Serial Attached SCSI (SAS) storage. CSV will work with any of these, as long as the disk uses NTFS as the file system.

Benefits:
CSV will provide many benefits, including easier storage management, greater resiliency to failures, the ability to store many VMs on a single LUN and have them fail over individually, and most notably, CSV provides the infrastructure to support and enhance live migration of Hyper-V virtual machines.

With CSV, you can use live migration to move VMs from a Hyper-V host that needs maintenance to another Hyper-V host. Then when the maintenance is complete, you can move the VMs back to the original host—all with no interruption of end-user services. Live migration also enables you to build a dynamic datacenter that can respond to high resource-utilization periods by automatically moving VMs to hosts with greater capacities; thereby enabling a VM to meet Service Level Agreements and provide end users with high levels of performance, even during periods of heavy resource utilization.

Useful links:
http://blogs.msdn.com/b/clustering/archive/2009/02/19/9433146.aspx

Hyper-V R2 CSV FAQ:
http://en.community.dell.com/techcenter/virtualization/w/wiki/3021.aspx

http://windowsitpro.com/windows-server-2012/windows-server-2012-shared-storage-live-migration

http://technet.microsoft.com/en-us/library/jj612868.aspx

 

Tuesday, 5 February 2013

What is IOM6E?

What is IOM6E in NetApp storage systems?

IOM6 is a SAS 2.0-compliant I/O module used in newer NetApp storage systems. As of 2011, SAS-connected disk shelves account for about 10% of the NetApp installed base and for more than 50% of the storage shipped with new NetApp® systems, a result of the technology transition from FC-AL to SAS. This is happening because SAS offers better reliability and resiliency, greater bandwidth, and greatly improved connectivity.

So, what does the "E" stand for in IOM6E? Well, there is no official doc from NetApp that says 'E' stands for embedded, but I am guessing so.

About IOM6E:
  • IOM6E is the embedded version of IOM6.
  • The ACPP in the IOM6E runs on the same CPU as the Service Processor (SP). ACP Ethernet traffic is internally isolated from SP traffic.
  • The external Ethernet RJ45 port with the locked-wrench symbol is used exclusively for ACP Ethernet traffic.
  • The ACPP in the IOM6E provides similar functionality to the ACPP in IOM3/IOM6, but with a few exceptions:
a. The ACPP is part of the SP in FAS2240/2/4 systems and is updated only through the SP's firmware update. In other words, when you upgrade the SP, the ACPP is automatically updated.
 
b. Unlike IOM3/IOM6, some ACP status is available via the SP console CLI.
 
Therefore, your controller may send a false alert regarding an IOM6E ACP update. When you click on the alert, it takes you to the download page, but there is no separate ACP firmware download there. It appears to be a known bug and will be fixed soon. No action is needed as of now.
 
For more information, please see the following NetApp community post:

Thursday, 25 October 2012

NetApp "stats" command

NetApp "stats" command:

Step 1:
List the available measurable objects on the filer.
filer> stats list objects
Objects:
        cpx
        rquota
        aggregate
        audit_ng
        cifs
        disk
        dump
        ext_cache_obj
        ext_cache
        fcp
        hostadapter
        ifnet
        iscsi_conn
        iscsi_lif
        iscsi
        logical_replication_destination
        logical_replication_source
        lun
        ndmp
        nfsv3
        processor
        qtree
        quota
        raid
        spinhi
        system
        target
        vfiler
        volume
        wafl
        avoa

Step 2:
Find out the list of counters available for the objects listed in step 1:
filer> stats list counters
Counters for object name: cifs
instance_name
node_name
cifs_ops
cifs_latency
cifs_read_ops
cifs_write_ops
Counters for object name: disk
instance_name
node_name
instance_uuid
display_name
raid_name
raid_group
raid_type
disk_speed

To list counters for a specific object:

filer>stats list counters volume
Counters for object name: volume
        instance_name
        node_name
        instance_uuid
        vserver_name
        vserver_uuid
        avg_latency
        total_ops
        read_data
        read_latency
        read_ops
        write_data
        write_latency
        write_ops
        other_latency
        other_ops


Step 3:
Find out the specific instances available for each object.
DARFAS01> stats list instances
Instances for object name: cpx
        total
Instances for object name: rquota
        rquota_cpu0
        rquota_cpu1
        rquota_total
Instances for object name: aggregate
        aggr0
Instances for object name: audit_ng
Instances for object name: cifs
        cifs
Note: Basically, what 'instances' means is that for each object (for example, volume), there are specific instances available for you to check counters against.

Step 4:
For example: To find out the instances for the object volume.
DARFAS01> stats list instances volume
Instances for object name: volume
        vol0
        vfiler1_darfp01_root
        vfiler1_darfp01_nas
        vfiler2_darfp02_root
        vfiler2_darfp02_nas
        EUEV01_FAS_ISCSI_SATA11
As we can see, these are the instances available on my test filer. In other words, under the object VOLUME, I can check counters against the volumes available on my test filer.

Step 5:
Let's measure the counters against the objects for a specific instance. The format is:
OBJECT:INSTANCE:COUNTER

Let's apply this format in a command. Before that, I want to know which counters are available for my chosen object, i.e., VOLUME.
DARFAS01> stats list counters volume
Counters for object name: volume
        instance_name
        node_name
        instance_uuid
        vserver_name
        vserver_uuid
        avg_latency
        total_ops
        read_data
        read_latency
        read_ops
        write_data
        write_latency
        write_ops
        other_latency
        other_ops


Step 6:
Start system statistics gathering in the background using the identifier "MyStats", display the values while gathering continues, then stop gathering and display the final values:

filer> stats start -I MyStats volume

To see the results while the disk I/O (copy) is in progress:

filer> stats show -I MyStats    (Note: this will display results for all the volumes)

To see the results for a specific volume, say 'vol_cifs_vfiler':

filer> stats start -I MyStats volume:vol_cifs_vfiler

Then stop gathering and display the final values:

filer> stats stop -I MyStats
StatisticsID: MyStats
volume:vol_cifs_vfiler:instance_name:vol_cifs_vfiler
volume:vol_cifs_vfiler:node_name:
volume:vol_cifs_vfiler:instance_uuid:2ba581c0-1ea0-11e2-9c8f-123478563412
volume:vol_cifs_vfiler:vserver_name:
volume:vol_cifs_vfiler:vserver_uuid:
volume:vol_cifs_vfiler:avg_latency:17.67us
volume:vol_cifs_vfiler:total_ops:0/s
volume:vol_cifs_vfiler:read_data:0b/s
volume:vol_cifs_vfiler:read_latency:0us
volume:vol_cifs_vfiler:read_ops:0/s
volume:vol_cifs_vfiler:write_data:0b/s
volume:vol_cifs_vfiler:write_latency:0us
volume:vol_cifs_vfiler:write_ops:0/s
volume:vol_cifs_vfiler:other_latency:17.67us
volume:vol_cifs_vfiler:other_ops:0/s
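Since the output above follows a regular object:instance:counter:value pattern, it is easy to post-process on an admin host once you have captured it to a file. Here is a small Python sketch (my own helper, not a NetApp tool) that parses such output into a nested dict, assuming names themselves contain no colons:

```python
def parse_stats(text):
    """Parse 'stats' output lines of the form object:instance:counter:value
    into {instance: {counter: value}}. Header lines such as
    'StatisticsID: MyStats' do not have four fields and are skipped."""
    stats = {}
    for line in text.splitlines():
        parts = line.strip().split(":", 3)
        if len(parts) != 4:
            continue
        obj, instance, counter, value = parts
        stats.setdefault(instance, {})[counter] = value
    return stats

# Sample taken from the captured output above.
sample = """\
StatisticsID: MyStats
volume:vol_cifs_vfiler:avg_latency:17.67us
volume:vol_cifs_vfiler:total_ops:0/s
volume:vol_cifs_vfiler:read_data:0b/s
"""
parsed = parse_stats(sample)
print(parsed["vol_cifs_vfiler"]["avg_latency"])  # -> 17.67us
```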


 

Tuesday, 18 September 2012

SnapMirror-2-Tape for LOW bandwidth

SnapMirror-2-Tape for LOW bandwidth

SMTape is a high-performance disaster recovery solution from Data ONTAP that backs up blocks of data to tape. It is a Snapshot copy-based backup-to-tape feature, available only in Data ONTAP 8.0 7-Mode and later releases.

You can use SMTape to perform volume backups to tape. However, you cannot perform a backup at the qtree or subtree level, and you can perform only a level-0 backup, not incremental backups.
When you perform an SMTape backup, you can optionally specify the name of the Snapshot copy to be backed up to tape. When you specify a Snapshot copy, all Snapshot copies older than the specified one are also backed up to tape. If you do not specify any Snapshot copy, SMTape creates a base Snapshot copy to be used later for tape seeding.

What is tape seeding?

Tape seeding is an SMTape functionality that helps you initialize the destination storage system in a volume SnapMirror relationship.


Consider a scenario in which you want to establish a SnapMirror relationship between a source system and a destination system over a low-bandwidth connection. Incremental mirroring of Snapshot copies from the source to the destination is feasible over a low-bandwidth connection. However, an initial mirroring of the base Snapshot copy would take a long time over that same connection. In such a case, you can perform an SMTape backup of the source volume to a tape and use the tape to transfer the initial base Snapshot copy to the destination. You can then set up incremental SnapMirror updates to the destination system using the low-bandwidth connection.
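A quick back-of-the-envelope calculation shows why seeding the baseline over the wire hurts. The numbers below (a 10 TB base Snapshot copy over a 10 Mbps WAN link) are hypothetical, picked only to illustrate the order of magnitude:

```python
def transfer_days(size_tb, link_mbps):
    """Days needed to push size_tb terabytes over a link_mbps link,
    assuming the link is fully dedicated to the transfer (decimal units)."""
    bits = size_tb * 1e12 * 8           # TB -> bits
    seconds = bits / (link_mbps * 1e6)  # link speed in megabits/second
    return seconds / 86400

# Hypothetical 10 TB baseline over a 10 Mbps WAN link:
print(round(transfer_days(10, 10), 1), "days")  # -> 92.6 days
# The baseline on tape is a one-time ship; only the small incremental
# SnapMirror updates then travel over the low-bandwidth connection.
```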


Test it out on the ONTAP SIMULATOR

 
The simulator provides simulated tape devices that can be used to test the SMTape feature.
 
filerA>sysconfig -t
or
filerA>storage stats tape
 
This will list the simulated tape drives bundled with ONTAP.
 
You can perform an SMTape backup and restore using NDMP-compliant backup applications or using the Data ONTAP 8.0 7-Mode smtape backup and smtape restore CLI commands.
 
Steps:
1. SnapMirror to tape.

2. Restore to volume (seeding back to the destination volume).

3. Resume incremental SnapMirror updates to the destination volume.
Note: In this demo, I used both the SnapMirror source and destination volumes on the same filer, but the logic remains the same.

Tuesday, 5 July 2011

How to edit configuration files using "wrfile" command in NetApp DataONTAP.

Data ONTAP does not include an editor such as 'vi', found in most standard UNIX-like distributions. In Data ONTAP, to edit any file from the console you need to use the "wrfile" command, but this command can only overwrite a file or, with the -a option, append to it. Hence, in order to edit/modify a file you need a workaround. However, if you have set up CIFS/NFS, then you can easily edit any file using your favorite editor such as Word.


Following example shows steps to edit the file from the DataONTAP console.

1. Open a telnet/rsh or SSH session to the filer console (with PuTTY, for example).

2. Type

Filer>rdfile /etc/rc (For example)

Or whatever file you want to edit. It will print out the current contents of the file.

Note: To be on the safe side, make a copy of the file before editing the current one. (Use CIFS/NFS or ndmpcopy to create a backup.)

3. Copy the content of the rdfile output to a Notepad or other text editor.

To do that: click the top-left corner of the telnet/PuTTY window. A menu will drop down: Edit > Mark, then Edit > Copy, and paste into Notepad.

4. Edit/change anything you want in the notepad.

5. When you're done, type

Filer>wrfile /etc/rc

6. press enter

7. Then QUICKLY copy/paste your modified text into the telnet/SSH console

8. Press CTRL-C to save the file and you're done.

Try "rdfile" again to ensure the changes were saved correctly.


Note: If you forget to do CTRL-C at the end you will remain in "wrfile" mode and everything you type will end up in the file you tried to edit. Therefore make sure you press CTRL-C at the end.

Warning:
Any time you make significant changes to your systems, you should make a config dump file (and use the logger command to record the changes). Some customers schedule a weekly config dump and copy the files to an external system. In the event of an issue with the root volume or corruption of the system configuration, the controller can then be restored to its last known good state. Enterprise customers who order multiple storage controllers clone entire systems by copying and editing the dump file with a text editor. What a config restore will not do is create aggregates and volumes or tell you their size; refer to a recent AutoSupport message for that information.

Config command:
filer> config dump -v 26Oct2012.cfg (you can use any filename you want, e.g., Initial_setup.config)
filer> config restore 26Oct2012.cfg

Logger command:
The logger command can insert text comments into the system log file /etc/messages
Example:
The logger command accepts either a text string or a stream of text to standard input terminated by a period (.)
filer> logger *** Making changes to /etc/rc file, backup is made –  Username * * *
filer> logger *** Starting shelf firmware upgrade –  Username * * *
filer> logger ----> System going down for UPS system maintenance <---- System is expected to halt ungracefully while we test battery duration

Basically whenever you make changes, do these:
1. config dump (back up the system configuration before you start and when you finish)
2. Logger command (At a minimum, add comments to the system log when you start and finish the maintenance)
3. Save the console output to a file. (Forensic evidence of what you did, or did not do, and how the system responded)
4. Trigger AutoSupports (Send an ASUP message at the start and end of your maintenance)