A global namespace enables NAS clients to access data scattered across different physical locations through a single volume namespace. A share can originate from any node, any aggregate, or any path.
Simplest definition of the term 'namespace' in clustered ONTAP: a namespace is a logical grouping of different volumes joined together at junction points to form a 'single logical volume'.
Limitation of ONTAP 7-Mode:
Shares are physically mapped to a server name or IP address. This makes it difficult to scale out, and complex to remember and manage thousands of volumes, because there is no way to join the volumes to the root volume.
Advantage of c-mode / clustered ONTAP / simply ONTAP as of version 9:
In Cluster-Mode, NAS clients can use a single NFS mount point or CIFS share to access a namespace of potentially thousands of volumes. The root volume of a Vserver namespace contains the paths where the data volumes are junctioned into the namespace.
In Cluster-Mode, ONTAP can create a very large data container, a single namespace, for many volumes.
NAS clients can access data anywhere in the namespace using a single NFS mount point or CIFS share.
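For example, an NFS client can mount the entire namespace at the SVM's root with one mount command. The LIF address and mount point below are hypothetical placeholders:

```
# Mount the SVM namespace root; every junctioned volume appears
# beneath this single mount point. 'svm-lif' stands in for the
# SVM's NFS data LIF (IP address or DNS name).
mount -t nfs svm-lif:/ /mnt/namespace

# Volumes junctioned at /data1, /data2, ... are now reachable as
# /mnt/namespace/data1, /mnt/namespace/data2, and so on.
ls /mnt/namespace
```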
What it means to end users: users do not have to remember the names and locations of all the shares; they only need to remember the SVM name.
What it means to the storage admin: easy to scale out, the container can be enlarged easily, and upgrades and operations are non-disruptive.
Security-wise: storage admins can secure each volume share so that it is available only to its legitimate consumers.
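As a sketch of per-volume security on the NFS side, an export policy can restrict a volume to one client subnet. The vserver, policy, subnet, and volume names below are assumptions for illustration:

```
cluster_ontap_9::> vserver export-policy create -vserver NFS_N2 -policyname eng_only
cluster_ontap_9::> vserver export-policy rule create -vserver NFS_N2 -policyname eng_only -clientmatch 10.10.20.0/24 -rorule sys -rwrule sys -superuser none
cluster_ontap_9::> volume modify -vserver NFS_N2 -volume eng_data -policy eng_only
```

Only clients in 10.10.20.0/24 can then reach that volume, even though it remains visible in the shared namespace hierarchy. CIFS shares can be restricted similarly with share ACLs.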
Global namespace: achieved by joining the volumes together with junctions.
1. The root volume of a Vserver serves as the entry point to the namespace provided by that Vserver.
All other 'data' volumes, sitting on different aggregates on different nodes, are junctioned at the root volume '/', like directories.
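A minimal sketch of junctioning, assuming hypothetical volume, aggregate, and path names:

```
cluster_ontap_9::> volume create -vserver NFS_N2 -volume data1 -aggregate aggr1_node1 -size 100g -junction-path /data1
cluster_ontap_9::> volume create -vserver NFS_N2 -volume data2 -aggregate aggr1_node2 -size 100g -junction-path /data2

cluster_ontap_9::> volume show -vserver NFS_N2 -junction
```

A volume created without a junction path can be joined into the namespace later with `volume mount -vserver NFS_N2 -volume data3 -junction-path /data3`, and removed from it with `volume unmount`, without touching the data in the volume.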
2. In the unlikely event that the root volume of a Vserver namespace is unavailable, NAS clients cannot access the namespace hierarchy and therefore cannot access data in the namespace.
WARNING: if the root volume is offline, CIFS and NFS share access stops. This is where the load-sharing (LS) mirror comes in.
For this reason, it is a NetApp best practice to create a load-sharing mirror for the root volume on each node of the cluster so that the namespace directory information remains available in the event of a node outage or failover.
Example: creating a load-sharing mirror.
cluster_ontap_9::> snapmirror create -source-path //NFS_N2/NFS_N2_root -destination-path //NFS_N2/NFS_N2_root_mirror -type LS -schedule hourly
[Job 609] Job succeeded: SnapMirror: done
cluster_ontap_9::*> snapmirror show -type LS
                                                                     Progress
Source            Destination  Mirror         Relationship  Total    Last
Path        Type  Path         State          Status        Progress Healthy Updated
----------- ----  -----------  -------------  ------------  -------- ------- -------
cluster_ontap_9://NFS_N2/NFS_N2_root
            LS    cluster_ontap_9://NFS_N2/NFS_N2_root_mirror
                               Uninitialized  Idle          -        -       -
cluster_ontap_9::*> snapmirror initialize-ls-set -source-path
  CIFS_ONTAP9_N1:    CIFS_ONTAP9_N1:<volume>
  CIFS_ONTAP9_N2:    CIFS_ONTAP9_N2:<volume>
  NFS_N2:            NFS_N2:<volume>
(Entering the command without a path makes the CLI list the valid source paths.)
cluster_ontap_9::*> snapmirror initialize-ls-set -source-path //NFS_N2/NFS_N2_root
[Job 611] Job is queued: snapmirror initialize-ls-set for source "cluster_ontap_9://NFS_N2/NFS_N2_root".
cluster_ontap_9::*> snapmirror show -type LS
                                                                     Progress
Source            Destination  Mirror         Relationship  Total    Last
Path        Type  Path         State          Status        Progress Healthy Updated
----------- ----  -----------  -------------  ------------  -------- ------- -------
cluster_ontap_9://NFS_N2/NFS_N2_root
            LS    cluster_ontap_9://NFS_N2/NFS_N2_root_mirror
                               Snapmirrored   Idle          -        true    -
cluster_ontap_9::*>
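Note that clients read the namespace from the LS mirror copies, so a change to the root volume (for example, a newly junctioned volume) becomes visible only after the mirror set is updated, either by the assigned schedule (hourly in the example above) or manually:

```
cluster_ontap_9::*> snapmirror update-ls-set -source-path //NFS_N2/NFS_N2_root
```

Running this after junctioning a new data volume pushes the updated namespace directory information out to all mirrors immediately instead of waiting for the next scheduled update.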