Friday, 26 July 2013

Understanding the difference between a traditional cluster disk and Cluster Shared Volumes (CSV)

I touched base with Windows again after a substantial gap, during the course of the MCSE 2012 Server Infrastructure certification. I got introduced to various new features and technologies introduced in Server 2012, and the one I liked most is the improvement in CSV 2.0. The whole concept of CSV and its benefits over the traditional cluster made it all the more exciting to read about.

Cluster Shared Volumes (CSV) is a feature of Failover Clustering, first introduced in Windows Server 2008 R2 for use with the Hyper-V role. A Cluster Shared Volume is a shared disk containing an NTFS volume that is made accessible for read and write operations by all nodes within a Windows Server Failover Cluster.

In Windows Server 2012, it is further improved and turned into a FULL-BLOWN file system. Just like a standard file system, it is now compatible with filter drivers (supporting applications such as antivirus, backup, etc.).

Going back to the main subject:
To understand how Cluster Shared Volumes (CSV) works in a failover cluster, it is helpful to review how a traditional cluster works without CSV. A traditional failover cluster allows a given disk (LUN) to be accessed by only one node at a time. Given this constraint, each Hyper-V virtual machine in the failover cluster requires its own set of LUNs in order to be migrated or fail over independently of other virtual machines. In this type of deployment, the number of LUNs must increase with the addition of each virtual machine, which makes management of LUNs and clustered virtual machines more complex.

In contrast, on a failover cluster that uses CSV, multiple virtual machines that are distributed across multiple cluster nodes can all access their Virtual Hard Disk (VHD) files at the same time, even if the VHD files are on a single disk (LUN) in the storage. The clustered virtual machines can all fail over independently of one another.

I am still reading some of this material, and plenty has already been covered in various blogs and on the Microsoft site. I guess all I can do is provide some pointers to these reference materials.

What is the role of SMB in a Hyper-V cluster based on CSV?
Cluster Shared Volumes operates by orchestrating metadata I/O operations between the nodes in the cluster via the Server Message Block (SMB) protocol.

What is the Coordinator Node?
The node with ownership of the LUN, which orchestrates metadata updates to the NTFS volume.

The advantage of CSV over a traditional cluster shows up during LIVE MIGRATION.
CSV reduces the potential disconnection period at the end of the migration, since the NTFS file system does not have to be unmounted/mounted as is the case with a traditional cluster disk.

What is the 'single namespace' term used with CSV?
CSV builds a common global namespace across the cluster using NTFS reparse points. Volumes are accessible under the %SystemDrive%\ClusterStorage root directory from any node in the cluster.

How do I create a CSV?
There is nothing different that needs to be done for CSV. CSV supports iSCSI, Fibre Channel and Serial Attached SCSI (SAS) for storage. CSV will work with any of these, as long as the disk is using NTFS as the file system.
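For example, an available clustered disk can be converted to a CSV with a couple of PowerShell commands (a minimal sketch; the resource name "Cluster Disk 1" is hypothetical and should match one of your own clustered disks):

PS C:\> Get-ClusterResource | Where-Object ResourceType -eq "Physical Disk"
PS C:\> Add-ClusterSharedVolume -Name "Cluster Disk 1"
PS C:\> Get-ClusterSharedVolume

The volume then shows up under C:\ClusterStorage\ on every node in the cluster.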

Benefits:
CSV will provide many benefits, including easier storage management, greater resiliency to failures, the ability to store many VMs on a single LUN and have them fail over individually, and most notably, CSV provides the infrastructure to support and enhance live migration of Hyper-V virtual machines.

With CSV, you can use live migration to move VMs from a Hyper-V host that needs maintenance to another Hyper-V host, and when the maintenance is complete, move the VMs back to the original host, all with no interruption of end-user services. Live migration also enables you to build a dynamic datacenter that can respond to periods of high resource utilization by automatically moving VMs to hosts with greater capacity, thereby enabling a VM to meet service level agreements and provide end users with high levels of performance, even during periods of heavy resource utilization.

Useful links:
http://blogs.msdn.com/b/clustering/archive/2009/02/19/9433146.aspx

Hyper-V R2 CSV FAQ:
http://en.community.dell.com/techcenter/virtualization/w/wiki/3021.aspx

http://windowsitpro.com/windows-server-2012/windows-server-2012-shared-storage-live-migration

http://technet.microsoft.com/en-us/library/jj612868.aspx

 

Tuesday, 5 February 2013

What is IOM6E?

What is IOM6E in NetApp storage systems?

IOM6 is a SAS 2.0-compliant I/O module used in newer NetApp storage systems. As a result of the technology transition from FC-AL to SAS, SAS-connected disk shelves now account for about 10% of the NetApp installed base and more than 50% of the storage shipped with new NetApp® systems as of 2011. This is happening because SAS offers better reliability and resiliency, greater bandwidth, and greatly improved connectivity.

So, what does the "E" stand for in IOM6E? Well, there is no official doc from NetApp that says 'E' stands for embedded, but I am guessing so.

About IOM6E:
  • IOM6E is an embedded version of IOM6.
  • The ACPP in the IOM6E runs on the same CPU as the Service Processor (SP). ACP Ethernet traffic is internally isolated from SP traffic.
  • The external Ethernet RJ45 port with the locked-wrench symbol is used exclusively for ACP Ethernet traffic.
  • The ACPP in IOM6E provides similar functionality to the ACPP in IOM3/IOM6, but with a few exceptions:
a. The ACPP is part of the SP in FAS2240-2/FAS2240-4 systems, and is updated only through the SP's firmware update. In other words, when you upgrade the SP, the ACPP is automatically updated.
 
b. Unlike with IOM3/IOM6, some ACP status is available via the SP console CLI.
 
Therefore, your controller may send a false alert regarding an IOM6E ACP update. When you click on the alert it takes you to the download page, but there is no ACP firmware download there. It appears to be a known bug that will be fixed soon; no action is needed as of now.
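To check the overall ACP connectivity status from the Data ONTAP console, the following command can be used (the output below is abbreviated and illustrative, not taken from a FAS2240):

filer> storage show acp
Alternate Control Path: Enabled
Ethernet Interface:     e0P
ACP Status:             Active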
 
For more information, please see the following NetApp community post:

Thursday, 25 October 2012

NetApp "stats" command

NetApp "stats" command:

Step 1:
List the available measurable objects on the filer.
filer> stats list objects
Objects:
        cpx
        rquota
        aggregate
        audit_ng
        cifs
        disk
        dump
        ext_cache_obj
        ext_cache
        fcp
        hostadapter
        ifnet
        iscsi_conn
        iscsi_lif
        iscsi
        logical_replication_destination
        logical_replication_source
        lun
        ndmp
        nfsv3
        processor
        qtree
        quota
        raid
        spinhi
        system
        target
        vfiler
        volume
        wafl
        avoa

Step 2:
Find the list of counters available for the objects listed in Step 1.
filer> stats list counters
Counters for object name: cifs
instance_name
node_name
cifs_ops
cifs_latency
cifs_read_ops
cifs_write_ops
Counters for object name: disk
instance_name
node_name
instance_uuid
display_name
raid_name
raid_group
raid_type
disk_speed

To list counters for a specific object:

filer> stats list counters volume
Counters for object name: volume
        instance_name
        node_name
        instance_uuid
        vserver_name
        vserver_uuid
        avg_latency
        total_ops
        read_data
        read_latency
        read_ops
        write_data
        write_latency
        write_ops
        other_latency
        other_ops


Step 3:
Find out the specific instances available for each object.
DARFAS01> stats list instances
Instances for object name: cpx
        total
Instances for object name: rquota
        rquota_cpu0
        rquota_cpu1
        rquota_total
Instances for object name: aggregate
        aggr0
Instances for object name: audit_ng
Instances for object name: cifs
        cifs
Note: What 'instances' means is that for each object (for example, volume), there are specific instances available for you to check the counters against.

Step 4:
For example, to find out the instances for the object volume:
DARFAS01> stats list instances volume
Instances for object name: volume
        vol0
        vfiler1_darfp01_root
        vfiler1_darfp01_nas
        vfiler2_darfp02_root
        vfiler2_darfp02_nas
        EUEV01_FAS_ISCSI_SATA11
As we can see, I have these instances available on my test filer. In other words, under the object VOLUME, I can check counters against the volumes available on my test filer.

Step 5:
Let's measure a counter against an object for a specific instance. The format is:
OBJECT:INSTANCE:COUNTER

Let's apply this format in a command. Before that, recall which counters are available for my chosen object, i.e. VOLUME, from the 'stats list counters volume' output shown earlier.
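As a quick aside, the same OBJECT:INSTANCE:COUNTER format also works for a one-shot query, without starting a background collection (vol0 is just an example instance; the latency value shown is illustrative):

filer> stats show volume:vol0:avg_latency
volume:vol0:avg_latency:120us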


Step 6:
Start system statistics gathering in the background, using identifier "MyStats", display the values while gathering continues, then stop gathering and display final values:

filer> stats start -I MyStats volume

To see the results while gathering continues (for example, while a disk I/O workload such as a copy is in progress):

filer> stats show -I MyStats (Note: this will display results for all the volumes)

To collect results for a specific volume only, say 'vol_cifs_vfiler', start the collection against that instance instead:

filer> stats start -I MyStats volume:vol_cifs_vfiler

Then stop gathering and display the final values:

filer> stats stop -I MyStats
StatisticsID: MyStats
volume:vol_cifs_vfiler:instance_name:vol_cifs_vfiler
volume:vol_cifs_vfiler:node_name:
volume:vol_cifs_vfiler:instance_uuid:2ba581c0-1ea0-11e2-9c8f-123478563412
volume:vol_cifs_vfiler:vserver_name:
volume:vol_cifs_vfiler:vserver_uuid:
volume:vol_cifs_vfiler:avg_latency:17.67us
volume:vol_cifs_vfiler:total_ops:0/s
volume:vol_cifs_vfiler:read_data:0b/s
volume:vol_cifs_vfiler:read_latency:0us
volume:vol_cifs_vfiler:read_ops:0/s
volume:vol_cifs_vfiler:write_data:0b/s
volume:vol_cifs_vfiler:write_latency:0us
volume:vol_cifs_vfiler:write_ops:0/s
volume:vol_cifs_vfiler:other_latency:17.67us
volume:vol_cifs_vfiler:other_ops:0/s
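If the meaning of any counter is unclear, the counter descriptions can also be queried (a quick example; the description text shown is abbreviated and illustrative):

filer> stats explain counters volume avg_latency
Counters for object name: volume
Name: avg_latency
Description: Average latency for all operations on the volume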


 

Tuesday, 18 September 2012

SnapMirror-2-Tape for LOW bandwidth

SnapMirror-2-Tape for LOW bandwidth

SMTape is a high-performance disaster recovery solution in Data ONTAP that backs up blocks of data to tape. It is a Snapshot-copy-based backup-to-tape feature, available only in Data ONTAP 8.0 7-Mode or later releases.

You can use SMTape to perform volume backups to tapes. However, you cannot perform a backup at the qtree or subtree level, and you can perform only a level-0 backup, not incremental backups.
When you perform an SMTape backup, you can optionally specify the name of the Snapshot copy to be backed up to tape. When you specify a Snapshot copy for the backup, all the Snapshot copies older than the specified Snapshot copy are also backed up to tape. If you do not specify any Snapshot copy, SMTape creates a base Snapshot copy to be used later for tape seeding.

What is tape seeding?

Tape seeding is an SMTape functionality that helps you initialize the destination storage system in a volume SnapMirror relationship.


Consider a scenario in which you want to establish a SnapMirror relationship between a source system and a destination system over a low-bandwidth connection. Incremental mirroring of Snapshot copies from the source to the destination is feasible over a low-bandwidth connection. However, an initial mirroring of the base Snapshot copy would take a long time over a low-bandwidth connection. In such a case, you can perform an SMTape backup of the source volume to a tape and use the tape to transfer the initial base Snapshot copy to the destination. You can then set up incremental SnapMirror updates to the destination system using the low-bandwidth connection.


Test it out on the ONTAP SIMULATOR

The simulator provides simulated tape devices which can be used to try out the SMTape feature.

filerA> sysconfig -t
or
filerA> storage stats tape

This will list the test tape drives bundled with Data ONTAP.

You can perform an SMTape backup and restore using NDMP-compliant backup applications or using the Data ONTAP 8.0 7-Mode smtape backup and smtape restore CLI commands.
 
Steps:
1. SnapMirror to tape:
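In place of the screenshot from the original post, here is a minimal console sketch of this step (the volume name src_vol is made up for illustration; rst0a is the simulator's first rewind tape device):

filerA> smtape backup /vol/src_vol rst0a
filerA> smtape status
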
2. Restore to volume (seeding back to the destination volume):
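Again in place of the screenshot, a sketch of the restore step (the destination volume must exist and be restricted before the restore; names are made up):

filerA> vol restrict dst_vol
filerA> smtape restore /vol/dst_vol rst0a
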
3. Resume incremental SnapMirror updates to the destination volume:
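A sketch of resuming the mirror over the low-bandwidth network link (assuming SnapMirror is licensed and enabled; depending on your setup you may instead add the relationship to /etc/snapmirror.conf and run snapmirror update):

filerA> snapmirror resync -S filerA:src_vol filerA:dst_vol
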
Note: In this demo, I used both the SnapMirror source and destination volumes on the same filer, but the logic remains the same.

Tuesday, 5 July 2011

How to edit configuration files using "wrfile" command in NetApp DataONTAP.

Data ONTAP does not include an editor such as 'vi' found in most standard UNIX-like distributions. In Data ONTAP, to edit any file from the console you need to use the "wrfile" command, but this command can either overwrite a file or, with the -a option, append to it. Hence, in order to edit/modify a file you need a workaround. However, if you have set up CIFS/NFS, then you can easily edit any file using your favorite editor, such as Word.


Following example shows steps to edit the file from the DataONTAP console.

1. Open a telnet, rsh, or SSH session to the filer console (with PuTTY, for example).

2. Type:

Filer> rdfile /etc/rc (for example)

or whatever file you want to edit. It will print out the current contents of the file.

Note: To be on the safe side, make a copy of the file before editing the current one. (Use CIFS/NFS or ndmpcopy to create a backup.)

3. Copy the content of the rdfile output to Notepad or another text editor.

To do that: click the telnet/PuTTY window's top-left corner. A menu will drop down: Edit > Mark, then Edit > Copy, and paste into Notepad.

4. Edit/change anything you want in Notepad.

5. When you're done, type

Filer>wrfile /etc/rc

6. press enter

7. Then QUICKLY copy/paste your modified text into the telnet/SSH console

8. Press CTRL-C to save the file and you're done.

Try "rdfile" again to ensure the changes were correctly saved.
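Putting the steps together, a minimal transcript might look like this (the /etc/rc contents shown are purely illustrative):

Filer> rdfile /etc/rc
hostname FILER
ifconfig e0a 192.168.1.10 netmask 255.255.255.0
Filer> wrfile /etc/rc
hostname FILER
ifconfig e0a 192.168.1.10 netmask 255.255.255.0
route add default 192.168.1.1 1
(press CTRL-C here to save and exit wrfile)
Filer> rdfile /etc/rc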


Note: If you forget to press CTRL-C at the end, you will remain in "wrfile" mode and everything you type will end up in the file you tried to edit. Therefore, make sure you press CTRL-C at the end.

Warning:
Any time you make significant changes to your systems, you should be making a config dump file (and using the logger command to record the changes). Some customers schedule a weekly config dump and copy the files to an external system. In the event of an issue with the root volume or corruption of the system configuration, the controller can be restored back to its last known good state. Enterprise customers who order multiple storage controllers clone entire systems by copying and editing the dump file with a text editor. What a config restore will not do is create aggregates and volumes or tell you their size. Refer to a recent AutoSupport message for this information.

Config command:
filer> config dump -v 26Oct2012.cfg (you can use any filename you want, e.g., Initial_setup.config)
filer> config restore 26Oct2012.cfg

Logger command:
The logger command can insert text comments into the system log file /etc/messages.
Example:
The logger command accepts either a text string or a stream of text from standard input terminated by a period (.):
filer> logger *** Making changes to /etc/rc file, backup is made – Username ***
filer> logger *** Starting shelf firmware upgrade – Username ***
filer> logger *** System going down for UPS system maintenance – system is expected to halt ungracefully while we test battery duration ***

Basically whenever you make changes, do these:
1. config dump (Backup the system configuration before you start and when you finish)
2. Logger command (At a minimum, add comments to the system log when you start and finish the maintenance)
3. Save the console output to a file. (Forensic evidence of what you did, or did not do, and how the system responded)
4. Trigger AutoSupports (Send an ASUP message at the start and end of your maintenance)
 

Friday, 26 March 2010

I bought a 160 GB USB drive, so why do I get a disk size of only 149 GB? Where has the remaining 11 GB gone?

Well, the answer lies in how the bytes are interpreted. Disk manufacturers use "Decimal" as the base for calculation (i.e. Base 10).


10^3 = 1000 bytes = 1 KiloByte, 10^6 = 1000 KB = 1 MegaByte, 10^9 = 1000 MB = 1 GigaByte

Whereas the actual utilization is calculated using "Binary" as the base for calculation (i.e. Base 2):

2^10 = 1024 bytes = 1 KiloByte, 2^20 = 1024 KB = 1 MegaByte, 2^30 = 1024 MB = 1 GigaByte

Note: Hard disk manufacturers use 1000, not 1024, as the base.


In this case, my USB drive has a total capacity of 160,039,239,690 bytes. And we know 1 KB = 1024 bytes, 1 MB = 1024 kilobytes, 1 GB = 1024 megabytes.

Hence, the total actual capacity comes to 160,039,239,690 / (1024 x 1024 x 1024) ≈ 149 GB.
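You can verify the arithmetic quickly with shell integer arithmetic on any Linux/Unix box (the result is truncated to a whole number):

$ echo $((160039239690 / 1024 / 1024 / 1024))
149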

In general, the capacity of a hard disk can be calculated using this formula:

Total Size of the Disk (Bytes) = (Cylinders) x (Heads) x (Sectors per Track) x (Bytes per Sector)

For example, a disk reporting 16,383 cylinders, 16 heads and 63 sectors per track with 512-byte sectors works out to 16,383 x 16 x 63 x 512 ≈ 8.4 GB, the classic ATA CHS limit.

Tuesday, 9 March 2010

Virtual Floppy in a Virtual World!!

Until a few years back (probably a decade), as far as I remember, there were not many options but to test on a bare-metal physical box and re-image it every time it got messed up by some experiment. With the advancement of virtualization, everything seems possible now. I am glad that I got exposed to VMware a few years back; I am really impressed with their long range of products, and I think what they have provided is truly revolutionary.

Today one can play, emulate, crash, rebuild and learn as and when you want, without actually entering an IT hardware lab or using a desktop machine at your home. One can install VMware products on a laptop (of course, you need to ensure the minimum system requirements that each VMware product demands) and turn your laptop into a mobile testing lab.

There are no worries about re-imaging in case something goes wrong and your system comes crashing down. In other words, without changing the physical state of your box (PC), you can now set up your own test environments. There is plenty of information on the net about virtualization; you may also visit the VMware website to obtain more information about virtualization and VMware products. I chose the 'VMware Workstation' product for my testing environment, have been using it for the past 2 years, and am really happy with it.



However, my objective here is to show one of the useful features in VMware called the "virtual floppy" drive. You might be thinking that floppy disks are "dead", right? Well, you are absolutely correct that physical 3.5" floppy disks (or physical floppy disks of any size) are not used anymore. In fact, no PC vendor provides them anymore; they have been replaced with what are called "virtual floppy drives".

They provide many advantages over traditional floppy drives. Some of the advantages of using virtual floppies are:
  • Ability to boot an OS and applications.
  • Ability to transfer files between systems.
  • They do not get damaged, as there is no physical medium.
  • They can even be sent as an attachment over the internet.

In this article, I will show you how to use virtual floppy drives with the VMware Workstation product.


Whether you are a system admin, a student, or from a Quality Assurance department, you will be presented with scenarios where you are required to test a certain application or application feature(s), or at the very least you want to try out a few experimental things for learning purposes. One of the most important learning steps in a system admin's life is learning to recover a system from a crash. More often than not, floppies come in handy in such rescue operations, especially if the system's MBR is corrupted and you are unable to boot the system.


You must be wondering: even if I have a virtual floppy drive in my VMware Workstation, how do I actually get virtual floppies to work with it? To do this, all you need is VMware Workstation running any flavour of Unix or Windows as a guest operating system, and Notepad on the Windows host.

Steps to create and mount a 'virtual floppy' in VMware Workstation:


1. Right-click on the desktop and create a new text file with Notepad.
2. Rename the file to any name; in this example I have named it "virtual-floppy". Of course, we need to change the extension of the file to ".flp", which is the standard image format that VMware understands.
3. Go to your VMware Workstation, click Edit Settings, and click on the floppy drive; if it's not there, add it using the "Add" option under Hardware. Click 'Browse' to select the image we just created, in this case "virtual-floppy.flp".
4. Start the virtual machine and wait until it boots up to the desktop screen (FYI: I am running Red Hat Linux as the guest OS on VMware Workstation 5).
5. Now we need to format the floppy with a filesystem and mount it.

The most commonly used tool is: mkfs
mkfs ("make a filesystem") is the standard Unix command for formatting a disk partition with a specific filesystem.

The basic syntax is:
mkfs -t type device, where type is the type of the filesystem and device is the device the filesystem will reside on.
The most commonly used option is -t, which is followed by the type of filesystem to be created. If this option is not used, the default is ext2 (second extended filesystem). Among the other types of filesystems that can be created are ext3, minix, msdos, vfat and xfs.


As an example, the following would create an ext2 filesystem on a formatted floppy disk that has been inserted into the first floppy drive:
mkfs /dev/fd0

The following would be used to create a vfat (i.e., Microsoft Windows-compatible) filesystem on the floppy disk:

mkfs -t vfat /dev/fd0

We will go with the "-t vfat" option, as this is both Unix- and Windows-compatible.

Now that our floppy is formatted and ready, we can copy files to it as if it were a physical floppy drive. The most important need for a floppy that I can think of is during an emergency, when your system has crashed and you need to get it back somehow. The fastest way to get your system back is to have a 'bootable floppy' handy.


When it comes to bootable floppies, there are a lot of boot loaders, but GRUB stands out as perhaps the best of them all. Let's install GRUB on the floppy that we formatted in the last steps. For that, we need to mount the floppy, create 'boot' and 'grub' folders, and then copy the GRUB stage files (stage1 and stage2) from the local disk to the floppy's grub folder, as shown below.

mkdir -p /floppy
mount -t vfat /dev/fd0 /floppy
mkdir -p /floppy/boot/grub
cp /usr/local/share/grub/i386-pc/stage* /floppy/boot/grub
Or
cp /boot/grub/stage* /floppy/boot/grub

Note: Always ensure the correct path to the grub folder; it may be different on your system.

Finally, we need to install GRUB on the floppy disk.
Unmount the floppy first (umount /floppy) so that GRUB sees a consistent filesystem, then start the GRUB shell at the Linux command prompt by typing 'grub' and enter the following series of commands at the grub prompt:
grub> root (fd0)
grub> setup (fd0)
grub> quit
We are done; we have now created a bootable virtual floppy to work with virtual machines in the VMware application.