Friday 26 July 2013

Offloaded Data Transfers (ODX) support with NetApp

Microsoft Offloaded Data Transfer (ODX), also known as copy offload, enables direct data transfers within or between compatible storage devices without transferring the data through the host computer.

Standard reads and writes work well in most scenarios, but what if the data to be copied resides on virtual disks managed by the same storage array on the backend? Without copy offload, that data is moved out of the array, onto a server, across a network transport, onto another server, and back into the same array. Moving data through servers and across a network transport can significantly impact the availability of those systems, and the throughput of the copy is limited by the throughput and availability of the network.

Support for ODX starts with Windows Server 2012 and Windows 8. Applications can now take advantage of these capabilities to offload the process of data movement to the storage subsystem. Two new FSCTLs (FSCTL_OFFLOAD_READ and FSCTL_OFFLOAD_WRITE), introduced in Windows Server 2012 and Windows 8, provide the mechanism for offloading the data transfer.

This shifts the burden of bit movement away from the servers and into the storage subsystem, where it can be handled intelligently. The best way to visualize the command semantics is to think of them as analogous to an unbuffered read and an unbuffered write.
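To make the control path concrete, here is a minimal C sketch of the two-step offload sequence, assuming the FSCTL_OFFLOAD_READ / FSCTL_OFFLOAD_WRITE structures that ship in winioctl.h with the Windows 8 / Server 2012 SDK. The file paths, copy length and single-shot transfer are illustrative only; a real copy engine would honour sector alignment and loop until the whole range is moved.

/* odx_copy.c - minimal ODX (copy offload) sketch for Windows 8 / Server 2012.
   Assumes both files live on ODX-capable storage (e.g. LUNs on the same NetApp array).
   Paths and sizes are hypothetical. */
#include <windows.h>
#include <winioctl.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    HANDLE src = CreateFileW(L"E:\\vmstore\\template.vhdx", GENERIC_READ,
                             FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
    HANDLE dst = CreateFileW(L"E:\\vmstore\\clone.vhdx", GENERIC_READ | GENERIC_WRITE,
                             0, NULL, CREATE_ALWAYS, 0, NULL);
    if (src == INVALID_HANDLE_VALUE || dst == INVALID_HANDLE_VALUE)
        return 1;

    /* Step 1: FSCTL_OFFLOAD_READ asks the storage for a token that represents
       the requested range of the source file. No data is read into the host. */
    FSCTL_OFFLOAD_READ_INPUT  rin  = { (DWORD)sizeof(rin) };
    FSCTL_OFFLOAD_READ_OUTPUT rout = { 0 };
    DWORD bytes = 0;
    rin.FileOffset = 0;
    rin.CopyLength = 64ULL * 1024 * 1024;        /* first 64 MB, for illustration */
    if (!DeviceIoControl(src, FSCTL_OFFLOAD_READ, &rin, sizeof(rin),
                         &rout, sizeof(rout), &bytes, NULL)) {
        printf("offload read failed: %lu\n", GetLastError());
        return 1;
    }

    /* Step 2: FSCTL_OFFLOAD_WRITE hands that token to the destination file.
       The array copies the bits internally; nothing crosses the host or the LAN. */
    FSCTL_OFFLOAD_WRITE_INPUT  win  = { (DWORD)sizeof(win) };
    FSCTL_OFFLOAD_WRITE_OUTPUT wout = { 0 };
    win.FileOffset     = 0;                      /* where to land in the destination   */
    win.CopyLength     = rout.TransferLength;    /* how much the token actually covers */
    win.TransferOffset = 0;                      /* offset within the tokenised range  */
    memcpy(win.Token, rout.Token, sizeof(win.Token));
    if (!DeviceIoControl(dst, FSCTL_OFFLOAD_WRITE, &win, sizeof(win),
                         &wout, sizeof(wout), &bytes, NULL)) {
        printf("offload write failed: %lu\n", GetLastError());
        return 1;
    }
    printf("offloaded %llu bytes\n", (unsigned long long)wout.LengthWritten);

    CloseHandle(src);
    CloseHandle(dst);
    return 0;
}

If the underlying storage does not support copy offload, the FSCTLs simply fail and the caller falls back to a traditional read/write copy, which is what the built-in Windows copy engine does automatically.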

Requirements for using ODX with a NetApp storage array:
Data ONTAP version requirements
Clustered Data ONTAP 8.2 and later releases support ODX for copy offloads.

Related article on 'How to trace ODX transfer':
http://www.slideshare.net/AshwinPawar/odx-42682251

IMPORTANT:
For CIFS environments, SMB 3.0 support is available only in clustered Data ONTAP 8.2; Data ONTAP 8.2 operating in 7-Mode does not support SMB 3.0.

Please note that Data ONTAP supports ODX for both the CIFS and SAN protocols. The source and the destination can each be either a CIFS server or a LUN.

Windows server and client requirements for using ODX:
ODX is supported starting with Windows Server 2012 and Windows 8.

Use cases for ODX
https://library.netapp.com/ecmdocs/ECMP1196784/html/GUID-BAD66DF1-2AB5-4CB2-BF53-068E4B4D94A3.html

Courtesy:
http://msdn.microsoft.com/en-us/library/windows/hardware/dn265282(v=vs.85).aspx#feedback

Understanding the difference between a traditional cluster disk and Cluster Shared Volumes (CSV)

I got back to Windows after a substantial gap, during the course of the MCSE: Server Infrastructure 2012 certification. It introduced me to various new features and technologies added in Server 2012, and the one I liked most is the improvement in CSV 2.0. The whole concept of CSV and its benefits over a traditional cluster made it exciting to read about.

CSV stands for Cluster Shared Volumes, a feature of Failover Clustering first introduced in Windows Server 2008 R2 for use with the Hyper-V role. A Cluster Shared Volume is a shared disk containing an NTFS volume that is made accessible for read and write operations by all nodes within a Windows Server Failover Cluster.

In Windows Server 2012, CSV is further improved and turned into a full-blown file system (CSVFS). Just like a standard file system, it is now compatible with filter drivers, which enables applications such as antivirus and backup.

Going back to the main subject:
To understand how Cluster Shared Volumes (CSV) works in a failover cluster, it is helpful to review how a traditional cluster works without CSV. In a traditional cluster, the failover cluster allows a given disk (LUN) to be accessed by only one node at a time. Given this constraint, each Hyper-V virtual machine in the failover cluster requires its own set of LUNs in order to be migrated or fail over independently of other virtual machines. In this type of deployment, the number of LUNs must increase with the addition of each virtual machine, which makes management of LUNs and clustered virtual machines more complex.

In contrast, on a failover cluster that uses CSV, multiple virtual machines that are distributed across multiple cluster nodes can all access their Virtual Hard Disk (VHD) files at the same time, even if the VHD files are on a single disk (LUN) in the storage. The clustered virtual machines can all fail over independently of one another.

I am still reading up on some of this, and plenty of it is already covered in various blogs and on the Microsoft site. All I can do here is provide some pointers to that reference material.

What is the role of SMB in a Hyper-V cluster based on CSV?
Cluster Shared Volumes operates by orchestrating metadata I/O operations between the nodes in the cluster via the Server Message Block protocol.

What is the coordinator node?
The coordinator node is the node that has ownership of the LUN and orchestrates metadata updates to the NTFS volume on behalf of the other nodes.

The advantage of CSV over a traditional cluster disk is most apparent during LIVE MIGRATION.
CSV reduces the potential disconnection period at the end of the migration, since the NTFS file system does not have to be unmounted and remounted, as is the case with a traditional cluster disk.

What does the term 'single namespace' mean in CSV?
CSV builds a common global namespace across the cluster using NTFS reparse points. Volumes are accessible under the %SystemDrive%\ClusterStorage root directory from any node in the cluster.
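In practice this means a virtual machine's files can be opened through the exact same path from every node in the cluster. Here is a tiny hypothetical C illustration; the Volume1 folder and the VHDX path below are made up:

/* csv_path.c - the same CSV path resolves on every cluster node.
   C:\ClusterStorage\Volume1\VM1\disk0.vhdx is a hypothetical example path. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Any node can open the file through the cluster-wide namespace; data I/O
       goes directly to the storage, while metadata changes are coordinated by
       the node that owns the LUN (the coordinator node). */
    HANDLE h = CreateFileW(L"C:\\ClusterStorage\\Volume1\\VM1\\disk0.vhdx",
                           GENERIC_READ, FILE_SHARE_READ | FILE_SHARE_WRITE,
                           NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        printf("open failed: %lu\n", GetLastError());
        return 1;
    }
    printf("opened the VHD via the cluster-wide CSV namespace\n");
    CloseHandle(h);
    return 0;
}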

How do I create a CSV?
Nothing different needs to be done to provision storage for CSV. CSV supports iSCSI, Fibre Channel and Serial Attached SCSI (SAS) storage, and it will work with any of these as long as the disk is formatted with NTFS.

Benefits:
CSV provides many benefits, including easier storage management, greater resiliency to failures, and the ability to store many VMs on a single LUN and have them fail over individually. Most notably, CSV provides the infrastructure to support and enhance live migration of Hyper-V virtual machines.

With CSV, you can use live migration to move VMs from a Hyper-V host that needs maintenance to another Hyper-V host. Then when the maintenance is complete, you can move the VMs back to the original host—all with no interruption of end-user services. Live migration also enables you to build a dynamic datacenter that can respond to high resource-utilization periods by automatically moving VMs to hosts with greater capacities; thereby enabling a VM to meet Service Level Agreements and provide end users with high levels of performance, even during periods of heavy resource utilization.

Useful links:
http://blogs.msdn.com/b/clustering/archive/2009/02/19/9433146.aspx

Hyper-V R2 CSV FAQ:
http://en.community.dell.com/techcenter/virtualization/w/wiki/3021.aspx

http://windowsitpro.com/windows-server-2012/windows-server-2012-shared-storage-live-migration

http://technet.microsoft.com/en-us/library/jj612868.aspx