Friday 19 October 2018

SnapCenter 4.1 & Enterprise-Level Oracle 12c and ONTAP 9.4 [Practical Information]

  SnapCenter Latest version : 4.1 (Supports ONTAP 9.4)



Oracle Host : [192.168.0.26]
centos-release-7-5.1804.4.el7.centos.x86_64

Oracle DB: [Standalone, Enterprise Edition Release 12.2.0.1.0]
Mounted on NetApp FlexVol via NFS - mount point on linux:  /mnt/ontap-nfs

NetApp SnapCenter Server: [192.168.0.10 : Port 8146]
Windows 2012R2

Plug-in/Agent:
netapp-snapcenter-plugin-oracle & netapp-snapcenter-plugin-unix

How to install the plug-in
You can either use the SnapCenter Server GUI to push the plug-in, or do a manual local installation on the Linux host (I prefer this method). I have always had issues when pushing any sort of agent from Windows to Linux, and I found there is no need to break your head and waste your time: just go for the local installation, it always works.

Please note: for an Oracle environment hosted on NFS (like in this demo) or using an in-guest iSCSI initiator, you don't need to install the 'Plug-in for VMware vSphere'. All you need is netapp-snapcenter-plugin-oracle & netapp-snapcenter-plugin-unix, and they get installed together.

For local installation of plug-in, follow these steps:
1. Go to this location on the SnapCenter server: [Windows]
C:\ProgramData\NetApp\SnapCenter\Package Repository
2. Copy this file to the Linux host.
'snapcenter_linux_host_plugin.bin'
3. Once it is available on the Linux host, simply run this command:
./snapcenter_linux_host_plugin.bin -i swing
(Make sure you make the file executable after copying it over to Linux; you can use chmod 755.)
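Putting steps 2 and 3 together, a minimal sketch (the /tmp path is just an assumption; copy the file over however you prefer, e.g. WinSCP or an SMB share):

# On the Linux host, once the installer has been copied to /tmp:
[root@redhat ~]# chmod 755 /tmp/snapcenter_linux_host_plugin.bin
[root@redhat ~]# /tmp/snapcenter_linux_host_plugin.bin -i swing    # launches the GUI wizard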

This will bring up the installation wizard as shown below.





Note: I always find this interesting on Linux-based systems (Red Hat/CentOS): whenever NetApp products such as SnapDrive for UNIX or SnapCenter complain that the host OS is not supported, all you have to do is fool the software by changing the release string in /etc/redhat-release to whatever their software supports. When I installed SnapDrive it complained that the OS was not supported (as it was the latest CentOS 7 build), so I altered the text in /etc/redhat-release to Red Hat 5.5, and it worked. When I tried to install the plug-in for the Oracle DB on the Linux host, it again said not supported (this time because the string was too old), so I changed it back to the original latest version, and it worked again. You have the liberty to do these tricks on a test setup, but in a production environment you would rather make sure it's compatible, and when in doubt, always reach out to NetApp Support.
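For example, a rough sketch of the trick (back the file up first; the release strings shown are just illustrations):

[root@redhat ~]# cp /etc/redhat-release /etc/redhat-release.bak     # keep the original
[root@redhat ~]# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
[root@redhat ~]# echo "Red Hat Enterprise Linux Server release 7.4 (Maipo)" > /etc/redhat-release
# ... re-run the installer that was complaining ...
[root@redhat ~]# cp /etc/redhat-release.bak /etc/redhat-release     # restore it afterwards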

How does the SnapCenter Server interact with the Oracle DB running on the Linux host?

Interaction happens via two components:
1) SnapCenter Server                [Windows] = SMCore service
2) SnapCenter Plug-in loader    [Linux  ]    = SPL (SnapCenter plug-in loader)

SMCore (Port 8145) =  coordinates with the Linux Plug-In (managed by the SnapCenter plug-in loader (SPL)) to perform Oracle database workflows. 

SPL = Runs on the Oracle database Host that loads and runs the Oracle plug-in. SMCore coordinates with SPL to perform Oracle data protection workflows like quiesce/unquiesce, backup, RMAN catalog, mount, restore, and clone.
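As a quick sanity check, you can confirm the Linux host can reach the SnapCenter server on the ports mentioned above (the IP and ports are the ones used in this lab; adjust for yours):

# From the Oracle/Linux host - a plain TCP reachability check using bash's /dev/tcp
[oracle@redhat ~]$ timeout 2 bash -c '</dev/tcp/192.168.0.10/8146' && echo "SnapCenter port 8146 reachable"
[oracle@redhat ~]$ timeout 2 bash -c '</dev/tcp/192.168.0.10/8145' && echo "SMCore port 8145 reachable"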

You can verify the status of SPL on linux:

[oracle@redhat ~]$ service spl status
Redirecting to /bin/systemctl status spl.service
● spl.service - SnapCenter Plugin Loader
   Loaded: loaded (/etc/systemd/system/spl.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-10-23 11:08:14 BST; 2h 18min ago
  Process: 754 ExecStart=/opt/NetApp/snapcenter/spl/bin/spld start (code=exited, status=0/SUCCESS)


As part of the Oracle backup requirements:
Put the Oracle DB in archive log mode [otherwise the backup will fail].

SQL> connect as sysdba
Enter user-name: sysdba
Enter password: 
Connected.
SQL> 
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount exclusive
ORACLE instance started.

Total System Global Area 1996488704 bytes
Fixed Size     8622336 bytes
Variable Size   587206400 bytes
Database Buffers 1392508928 bytes
Redo Buffers     8151040 bytes
Database mounted.
SQL> alter database archivelog;
Database altered.

SQL> alter database open;
Database altered.

SQL> archive log start;
Statement processed.
SQL> archive log list;
Database log mode        Archive Mode
Automatic archival        Enabled
Archive destination        /mnt/ontap-nfs/u01/app/oracle/product/12.2.0/dbhome_1/dbs/arch
Oldest online log sequence     3
Next log sequence to archive   5
Current log sequence        5
SQL> exit
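Once the database is back open, a quick one-liner to confirm it really is in ARCHIVELOG mode (run as the oracle user):

[oracle@redhat ~]$ echo "select log_mode from v\$database;" | sqlplus -s / as sysdba

LOG_MODE
------------
ARCHIVELOG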


Next, make a connection to the SnapCenter Server and discover the local Oracle Database:

[oracle@redhat bin]$ ./sccli Open-SmConnection
INFO: A connection session will be opened with SnapCenter 'https://TEST.LAB.COM:8146/'.
Enter the SnapCenter user name: lab\administrator
Enter the SnapCenter password: 
INFO: A connection session with the SnapCenter was established successfully.


[oracle@redhat bin]$ ./sccli Get-SmResources
INFO: Using localhost 'redhat.lab.com' as default host for discovering resources.
                                                                                                                                                            
===============================================================================================
|  Name  |  Version     |  Id                   |  Type                    |  Overall Status  |
===============================================================================================
|  orcl  |  12.2.0.1.0  |  redhat.lab.com\orcl  |  Oracle Single Instance  |                  |
===============================================================================================

INFO: The command 'Get-SmResources' executed successfully.

[oracle@redhat bin]$ ./sccli Configure-SmOracleDatabase -AppObjectId <appObject Id> -DatabaseRunAsName <user>
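For example, using the Id discovered above (the credential name 'oracle_creds' is purely a hypothetical placeholder; use whatever RunAs/credential you registered in SnapCenter):

# 'oracle_creds' below is a hypothetical credential name - substitute your own
[oracle@redhat bin]$ ./sccli Configure-SmOracleDatabase -AppObjectId "redhat.lab.com\orcl" -DatabaseRunAsName oracle_creds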

Once this is done, go back to the SnapCenter Server GUI [Windows] : You will see the plugins installed successfully.




Just follow the 'Backup' workflow instructions to create a protection policy and configure other parameters such as snapshot naming, scheduling, etc.; just do what it says. There is plenty of help inside the tool to help you figure it out. Finally, click 'Backup Now'.



If the job is successful, you can view the Oracle snapshots on the NetApp SVM:
Open the System Manager GUI, navigate to the SVM, then the volume, and view its snapshots; you should see two snapshots there:
1. Data
2. Log
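If you prefer the ONTAP CLI over System Manager, the equivalent check looks roughly like this (the SVM and volume names are assumptions for this lab):

cluster1::> volume snapshot show -vserver svm_oracle -volume vol_oracle_data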



There are more options on the panel which I haven't tested yet; I will give them a try in the coming days.


That's it for now! For more info, read the SnapCenter 4.1 Documentation.

Wednesday 17 October 2018

SnapDrive for UNIX Overview 5.3.1P1: [Quick practical overview]

SnapDrive for UNIX Overview:

SnapDrive for UNIX is an enterprise-class storage and data management utility that simplifies storage management and increases the availability and reliability of application data. Its key functionality includes error-free application storage provisioning, consistent-data NetApp Snapshot copies, and rapid application recovery. It also provides the ability to easily manage data that resides on NetApp Network File System (NFS) shares or NetApp LUNs. SnapDrive for UNIX complements the native file system and volume manager and integrates seamlessly with the clustering technology supported by the host operating system (OS).

My lab details:
[root@redhat oracle]# snapdrived status
Snapdrive Daemon Version    : 5.3.1P1  (Change 4326080 Built 'Wed May 24 23:40:39 PDT 2017')
Snapdrive Daemon start time : Tue Oct 16 22:40:38 2018
Total Commands Executed     : 4

Host:
Linux redhat.lab.com 3.10.0-862.14.4.el7.x86_64


  • SnapDrive uses default parameters unless the corresponding values in the following config file are uncommented:

Config file: /opt/NetApp/snapdrive/snapdrive.conf
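For example, a few of the parameters that commonly get uncommented in snapdrive.conf (parameter names are from memory of the SnapDrive for UNIX admin guide; verify them against your own copy of the file before changing anything):

# /opt/NetApp/snapdrive/snapdrive.conf - uncomment/adjust only what you need, e.g.:
default-transport="iscsi"         # or "FCP", depending on your SAN protocol
multipathing-type="nativempio"    # Linux DM-Multipath
fstype="ext4"                     # default filesystem for 'snapdrive storage create'
vmtype="lvm"                      # default volume manager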


  • You can also run sdconfcheck to determine if underlying OS/filesystem drivers are compatible.

#/opt/NetApp/snapdrive/bin/sdconfcheck check



  • Once the SnapDrive tool is installed (simple: rpm -ivh snapdrive_package.rpm), the next step is to add the NetApp storage system [in this example, we are adding cDOT].


Please note: for cDOT, we have to add the VSERVER/SVM management IP; do not use the cluster management IP. In the following example I am using a dedicated SVM management IP and the 'vsadmin' account. The SVM DNS name could point to multiple IP addresses, so hard-code an entry in the hosts file on the UNIX/Linux host that points to the SVM management IP.

The following command adds the SVM to the SnapDrive config for data management/provisioning purposes:

1) [root@redhat ~]# snapdrive config set vsadmin 192.168.0.214
Password for vsadmin:
Retype password:
Mismatch between DNS entry of storage system svm_nfs_iscsi and system name SVM_NFS
Do you want to continue? (y/n) n
Abort the config by pressing 'n'.

2) Add the hostname to the /etc/hosts file.

[root@redhat ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.214 SVM_NFS

3) Ping it once to confirm it's working.

[root@redhat ~]# ping svm_nfs
PING SVM_NFS (192.168.0.214) 56(84) bytes of data.
64 bytes from SVM_NFS (192.168.0.214): icmp_seq=1 ttl=64 time=0.430 ms
64 bytes from SVM_NFS (192.168.0.214): icmp_seq=2 ttl=64 time=0.299 ms

4) Now, add the SVM using the HOSTNAME (do not use the cluster management IP):
[root@redhat ~]# snapdrive config set vsadmin SVM_NFS
Password for vsadmin:
Retype password:
[root@redhat ~]# snapdrive config list
username    appliance name   appliance type
----------------------------------------------
vsadmin     SVM_NFS          StorageSystem
[root@redhat ~]#


SnapDrive has had a wizard to walk you through creating a LUN/LVM since its inception, which means you don't have to remember any commands; it will do everything for you. This is how it's run:

5) Create a LUN using Wizard: 
[root@redhat ~]# snapdrive storage wizard create

What kind of storage do you want to create {LUN, diskgroup, hostvol, filesys, ?}?[LUN]:
Getting storage systems configured in the host ...
Following are the available storage systems:
SVM_NFS
You can select the storage system name from the list above or enter a
new storage system name to configure it.
Enter the storage system name: SVM_NFS

Enter the storage volume name or press <enter> to list them: 

Following is a list of volumes in the storage system:
SMSQL_VOL_DATA      SMSQL_VOL_LOGS      SVM_NFS_root_LS1    testing_LUN_MOVE 
vol_09082018_114009_8   vol_CIFS            vol_CIFS_SnapMirror_13092018_001828
vol_NAS                 vol_NAS_0           vol_NFS
vol_SnapDrive           vol_redhat

Please re-enter: vol_SnapDrive

You can provide comma separated multiple entity names e.g: lun1,lun2,
lun3 etc.
Enter the LUN name(s): lun_SD

Checking LUN name(s) availability. Please wait ...

Enter the LUN size for LUN(s) in the below mentioned format. (Default 
unit is MB)
<size>k:m:g:t
Where, k: Kilo Bytes   m: Mega bytes    g: Giga Bytes     t: Tera Bytes
Enter the LUN size: 2g

Configuration Summary:

Storage System      : SVM_NFS
Volume Name        : /vol/vol_SnapDrive
LUN Name            : lun_SD
LUN Size            : 2048.0 MB

Equivalent CLI command is:
snapdrive storage create -lun SVM_NFS:/vol/vol_SnapDrive/lun_SD -lunsize 2048.0m

Do you want to create storage based on this configuration{y, n}[y]?: y

Creating storage with the provided configuration. Please wait...

LUN SVM_NFS:/vol/vol_SnapDrive/lun_SD to device file mapping => /dev/sdd, /dev/sde

Do you want to create more storage {y, n}[y]?: n
[root@redhat ~]#

To confirm that the SnapDrive-provisioned LUN was actually created, verify it with the 'multipath -ll' command.

[root@redhat ~]# multipath -ll
3600a09807770457a795d4d4179475475 dm-3 NETAPP  ,LUN C-Mode   
size=2.0G features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  |- 3:0:0:1 sdd 8:48 active ready running
  `- 4:0:0:1 sde 8:64 active ready running

You can run the following command to check the LUN path & other attributes such as 'ALUA':

[root@redhat ~]# snapdrive lun showpaths
Connected LUNs:
lun path device filename asymmetric access state
-------- --------------- -----------------------
SVM_NFS:/vol/vol_SnapDrive/lun_SD /dev/sdd Optimized
SVM_NFS:/vol/vol_SnapDrive/lun_SD /dev/sde Optimized
[root@redhat ~]#

Monday 15 October 2018

Difference between physical WWPNs (7-mode) vs virtual WWPNs (cDOT or ONTAP)?

Physical  WWPNs  = Is a concept of 7-mode
Virtual*  WWPNs    = Is a concept of cDOT/ONTAP

*= By 'virtual' it means 'Logical Interface', or simply LIF, a term that only applies to cDOT. One of the major differences between 7-mode and cDOT is taking virtualization all the way to the end ports, i.e. the physical adapters, be it a NIC or an FC HBA. In 7-mode an IP address was tied to a physical NIC, and a WWPN was tied to a physical FC HBA. In cDOT this is virtualized by introducing the concept of the 'LIF', an abstraction on top of the physical ports. In 7-mode we have the VIF, also called a Virtual Interface, but that is where the similarity ends: the objective of the VIF was to support link aggregation, that's it; it never truly virtualized the port the way the LIF does in cDOT.

LIF : Is logical and can have either IP or WWPN.

When you open the System Manager GUI, under the Network section: if you click on 'Network Interfaces' you can note the logical WWPNs, and if you click on 'FC/FCoE' you will see the physical WWPNs.
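The same information is available from the ONTAP CLI, roughly as follows (the SVM name 'svm_fc' is an assumption):

cluster1::> vserver fcp interface show -vserver svm_fc      [logical LIF WWPNs - these go into your zones]
cluster1::> network fcp adapter show                        [physical adapter WWPNs - do not zone these]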

Zoning : Do not include 'Physical WWPN', only 'Virtual WWPN'

DO NOT use physical WWPNs beginning with '50:0a:09:8x'; they no longer present a SCSI target service and should not be included in any zone configuration on the FC fabric (even though they show as logged in to the fabric).

DO use only virtual WWPNs (WWPNs starting with 20:).

IMPORTANT: NPIV is required for Fibre Channel LIFs to operate correctly. ONTAP uses N_Port ID virtualization (NPIV) to permit every logical interface to log in to an FC fabric with its own worldwide port name (WWPN). What NPIV does is allow a single physical N_Port to have multiple WWPNs, and therefore multiple N_Port_IDs, associated with it.

What is the advantage of a virtual/logical WWPN? It allows a host (physical or virtual) connected to the same FC fabric to keep communicating with the same SCSI target (LUN) regardless of which physical node currently hosts the LIF.

Ensure NPIV is enabled on the Switch:
CISCO:
# show npiv status
NPIV is enabled

Brocade:
admin> portcfgshow
NPIV capability

TR-4080 has more details.

Saturday 13 October 2018

ONTAP HA pair takeover and giveback time estimates in seconds

These are estimates for reference purposes only:


Migrating LUNs using NetApp 7MTT, and where FLI (Foreign LUN Import) comes in handy.

Take-away: 7MTT is a tool for data migration of NAS & SAN (LUNs) between NetApp storage arrays only. It was designed by NetApp engineering to help professional services and customers migrate their data from 7-mode hardware to cDOT (now called ONTAP). This tool is necessary because NetApp is only putting resources into cDOT; 7-Mode ONTAP code production has stopped, and 8.2.x was the end of it. Data ONTAP 8.3 and later do not include a 7-Mode version.

Also, it makes sense for any existing 7-mode NetApp customer to move to cDOT, as that is where all the R&D effort lies.


Can 7MTT migrate LUN?
Yes, it can.


When it cannot?
If the LUN is in a 32-bit aggregate. The pre-check in the 7MTT tool will not let you go ahead if it finds LUNs hosted on a 32-bit aggregate; the following error will be seen:






What is the remedy?
Convert 32-bit to 64-bit aggregates prior to transitioning to clustered Data ONTAP using 7MTT.


What if my LUN is misaligned in 7-mode?
7MTT cannot correct the misalignment; the transitioned LUN will inherit it.


Which tool can I use to fix the misalignment, instead of copying the data into a newly aligned LUN?
FLI (Foreign LUN Import)


What's cool about FLI?
As the name suggests, 'Foreign' LUN Import: it not only pulls LUNs in from third-party storage arrays, but also from 7-mode to cDOT NetApp arrays (starting with Data ONTAP 8.3.1). It can also fix the LUN alignment and handles the move from a 32-bit to a 64-bit aggregate automatically, without having to convert first as in the case of 7MTT. Further, with ONTAP 9.1, even AFF has FLI support enabled by default.


What protocol does FLI support?
Strictly FC only. It can handle iSCSI LUNs as well, but for that the LUN protocol must first be changed from iSCSI to FC, and then you should be able to do it.

DEMO: Unfortunately, I only have iSCSI in my lab, hence I cannot do the demo, but if you do have access to an FC setup in your lab, please give it a try (a rough sketch of the command flow is shown below).
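For reference, on the destination ONTAP cluster the import itself is driven by the 'lun import' command family. Very roughly (all names and the source LUN serial are placeholders; the full procedure, including presenting the source LUN to ONTAP as a foreign disk, is in TR-4380):

cluster1::> lun import create -vserver svm_fc -path /vol/vol_dst/lun_dst -foreign-disk <source_LUN_serial>
cluster1::> lun import show -vserver svm_fc
cluster1::> lun import start -vserver svm_fc -path /vol/vol_dst/lun_dst
cluster1::> lun import verify start -vserver svm_fc -path /vol/vol_dst/lun_dst      [optional block-level verification]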


For more info on the Foreign LUN Import 7-Mode to clustered Data ONTAP transition workflow, please read TR-4380; it has all the details.

Thursday 11 October 2018

Interesting discovery : SMSQL migration from 7-mode to ONTAP 9.4 (cDOT) using 7MTT

Since SQL Server is an application database sitting on top of the volume/LUN and has its own VSS writer within the Microsoft VSS framework, it's important that it is migrated using an application-aware migration tool such as NetApp SMSQL.

I wrote a post last month showing the steps required for migrating SQL data from 7-mode to cDOT, which works perfectly fine. However, I was intrigued to find out what would happen if I used the 7MTT tool to achieve the same result.


Findings: I used 7MTT and it worked perfectly fine; I am still amused, though. So this is what I am going to elaborate on here tonight.


What made it interesting:
1. I created 'test' db using SQL Management studio and created one table and inserted one row. This DB is initially on the local disk.
2. I created two volumes on the 7-mode filer : One for data and one for logs + snap-info.
3. I launched SnapDrive, added the 7-mode filer and created 3 virtual disks: data, logs & snap-info. Data on a separate volume, and logs & snap-info on the same volume but on separate LUNs.
4. I launched SMSQL console and moved the 'test' DB to 7-mode hosted LUNs : Primary data to Data drive, Logs to Logs drive and snap-info to separate drive.
5. I ran a first FULL backup with VERIFICATION, all came up good.


Here comes the 7MTT Tool:
6. Opened services.msc and stopped : SnapDrive & SnapManager services.
7. Launched 7MTT 3.3.1 tool and selected : Data & Logs volume for migration to cDOT SVM.
8. Migration (final completion) happened successfully, and it took the source volumes offline, as I had chosen that option during final completion.
9. Opened the System Manager GUI for the cDOT filer and made sure the two volumes now showed up, along with the 3 LUNs. Please note the 7MTT tool not only migrates the LUNs, it also maps them automatically (if that option is chosen); I had checked it by mistake, but it's OK, we just need to make sure we dis-associate them from the igroup.


Now I must re-connect the same drives (the 3 LUNs that were migrated from 7-Mode to cDOT):
10. I went to cDOT system-manager and went to those 3 LUNs one by one and un-checked the mapping from the igroup.
11. I started the SnapDrive service and launched the SnapDrive management tool, then re-connected the 3 drives using the same drive letters as with 7-Mode. However, the disk signatures had now changed; would that matter?

Finally:
12. I launched the SMSQL tool and noticed it said the 'test' DB was recovering... it was in a hung state.
13. I was expecting this, as it was not a true application-level migration, and SMSQL must be wondering whether these are the same disks.
14. I didn't bother much troubleshooting the 'recovering...' state, as I knew I had taken a good backup after moving the DB onto the 7-mode filer.
15. I used the restore wizard within the SMSQL tool and restored the 'test' DB from the last known good backup. To my surprise, after the successful restore, when I clicked on the 'Backup' icon it didn't complain this time, and it showed the drives correctly just as with 7-Mode, except for the disk signatures.
16. Ran another full backup and it worked perfectly. So basically I have the working SQL 'test' DB hosted on my cDOT system (migrated using 7MTT), plus the snapshots that were shipped as part of the SnapMirror replication process, which is the default copy-based transition mechanism within 7MTT.

Tested:
17. I opened SQL Management Studio, selected the 'test' DB and ran a query on the table, and it showed the row that I had populated. Hmm... this is very interesting, because I was expecting the restore to fail; as you remember, this backup was made on the drives hosted on the 7-Mode LUNs. It looks like the disk signature did not really matter to it. The SQL database, SnapDrive LUNs and snapshots are all working nicely.
18. I don't know if anyone else has attempted this method in a test environment?

I will be doing a few more tests around the migration concept, and will report back on the findings.

Sunday 7 October 2018

Unable to verify the graphical display setup. This application requires X display.

Issue: while installing Oracle DB 18c, this error continued to frustrate me, asking for an X11 display. I tried different suggestions from Google, but finally one worked.

The error is seen when you try to launch the Oracle installer:

[oracle@redhat stage]$ ./runInstaller

ERROR: Unable to verify the graphical display setup. This application requires X display. Make sure that xdpyinfo exist under PATH variable. No protocol specified. Can't connect to X11 window server using ':0.0' as the value of the DISPLAY variable.

Solution that worked in this case:

[root@redhat stage]# yum install xorg-x11-xauth xterm
Package 1:xorg-x11-xauth-1.0.9-1.el7.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package xterm.x86_64 0:295-3.el7 will be installed
--> Processing Dependency: libXaw.so.7()(64bit) for package: xterm-295-3.el7.x86_64
--> Running transaction check
---> Package libXaw.x86_64 0:1.0.13-4.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

============================================================================================================================================
 Package                         Arch                            Version                                Repository                     Size
============================================================================================================================================
Installing:
 xterm                           x86_64                          295-3.el7                              base                          455 k
Installing for dependencies:
 libXaw                          x86_64                          1.0.13-4.el7                           base                          192 k

Transaction Summary
============================================================================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 647 k
Installed size: 1.7 M
Is this ok [y/d/N]: y
Downloading packages:
(1/2): libXaw-1.0.13-4.el7.x86_64.rpm                                                                                | 192 kB  00:00:00   
(2/2): xterm-295-3.el7.x86_64.rpm                                                                                    | 455 kB  00:00:00   
--------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                       1.1 MB/s | 647 kB  00:00:00   
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : libXaw-1.0.13-4.el7.x86_64                                                                                               1/2
  Installing : xterm-295-3.el7.x86_64                                                                                                   2/2
  Verifying  : libXaw-1.0.13-4.el7.x86_64                                                                                               1/2
  Verifying  : xterm-295-3.el7.x86_64                                                                                                   2/2

Installed:
  xterm.x86_64 0:295-3.el7                                                                                                               

Dependency Installed:
  libXaw.x86_64 0:1.0.13-4.el7                                                                                                           


[root@redhat stage]# ssh -Y oracle@redhat.lab.com

Back to the non-root user, run the installer again:

[oracle@redhat stage]$ ./runInstaller

This time it worked; the installer popped up.

After extending LUN size from NetApp side, fdisk shows new size but dm-multipath shows old size

Observation:
This is probably already known to most Linux & NetApp administrators, but I only just discovered it, hence I am making a note of it in this article.

Steps:
1. Increased the LUN size on the NetApp side.
2. Ran an iSCSI rescan on the host (iscsiadm -m session --rescan).
3. fdisk shows the new size, but 'multipath -ll' still shows the old size.
4. dm-multipath, which sits between the kernel and the block devices, needs a 'bounce' to see the new size.
5. After running 'service multipathd restart', 'multipath -ll' showed the new size correctly (see the consolidated sketch below).
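A rough consolidated sketch of the sequence (the multipath map name is just an example from the earlier output):

# 1. Grow the LUN on the NetApp side, then rescan the iSCSI sessions on the host
[root@redhat ~]# iscsiadm -m session --rescan
# 2. The SCSI devices (fdisk/lsblk) now show the new size, but the multipath map does not
[root@redhat ~]# multipath -ll
# 3. Bounce multipathd so dm-multipath picks up the new size, then check again
[root@redhat ~]# service multipathd restart
[root@redhat ~]# multipath -ll     # the map (e.g. 3600a0980...) now reports the new size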

Saturday 6 October 2018

Configuring multipath for Red Hat (CentOS 7, 3.10.0-862.el7.x86_64) in simple, straightforward steps for an iSCSI LUN on ONTAP 9

Straightforward steps, from installing the multipath module to configuring the multipath.conf file as per the NetApp iSCSI Red Hat recommendations:

[root@redhat ~]#  yum install device-mapper-multipath   [download the multipath module binary]
[root@redhat ~]# modprobe dm-multipath     [insert the module]
[root@redhat ~]# lsmod | grep dm_mod         [list the module to confirm]
dm_mod   123941  11 dm_multipath,dm_log,dm_mirror


One of the good features of Linux is the ability to load modules while the kernel is running: each piece of code that can be added to the kernel at run time is called a module. A module is made up of object code that can be dynamically linked into the running kernel by the insmod (or modprobe) program and unlinked again by the rmmod program.
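Just to illustrate loading and unloading (don't actually remove dm_multipath on a host that is already using it):

[root@redhat ~]# modprobe dm_multipath            # link the module into the running kernel
[root@redhat ~]# lsmod | grep dm_multipath        # confirm it is loaded
[root@redhat ~]# modprobe -r dm_multipath         # unlink it again (equivalent to rmmod)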

Next...
[root@redhat /]# mpathconf --enable --with_multipathd y

The command above will create the multipath.conf file if it does not already exist.

Next...
Blacklist non-NetApp devices from the multipath probe:
If there are non-NetApp SCSI devices to exclude, enter the worldwide identifier (WWID) for the devices in the blacklist section of the multipath.conf file.

For example, if /dev/sda is the non-NetApp SCSI device that you want to exclude, you would enter the following:
[root@redhat /]# /lib/udev/scsi_id -gud /dev/sda
3600508e000000000753250f933cc4606 [just an example; on your system it will be different. Copy the WWID into the blacklist section of multipath.conf, like so:]

blacklist {
 wwid 3600508e000000000753250f933cc4606
 devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
 devnode "^hd[a-z]"
 devnode "^cciss.*"
}

Next...
Note: rdloaddriver=scsi_dh_alua is a kernel boot parameter, not a shell command. On RHEL/CentOS 7, add it to the kernel command line so the ALUA device handler is loaded at boot (a sketch follows), then bounce multipathd.
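A minimal sketch of adding the boot parameter on CentOS/RHEL 7 (the grub.cfg path below assumes a BIOS system; UEFI systems use /boot/efi/EFI/centos/grub.cfg):

# /etc/default/grub - append to the existing GRUB_CMDLINE_LINUX line:
GRUB_CMDLINE_LINUX="... rdloaddriver=scsi_dh_alua"

[root@redhat /]# grub2-mkconfig -o /boot/grub2/grub.cfg    # rebuild the GRUB config
[root@redhat /]# reboot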

[root@redhat /]# service multipathd restart [bounce it]
Redirecting to /bin/systemctl restart multipathd.service
[root@redhat /]# service multipathd status  [use status command to ensure it's up correctly]
Redirecting to /bin/systemctl status multipathd.service
● multipathd.service - Device-Mapper Multipath Device Controller
   Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2018-10-08 20:19:15 BST; 8s ago
  Process: 49811 ExecStart=/sbin/multipathd (code=exited, status=0/SUCCESS)
  Process: 49809 ExecStartPre=/sbin/multipath -A (code=exited, status=0/SUCCESS)
  Process: 49808 ExecStartPre=/sbin/modprobe dm-multipath (code=exited, status=0/SUCCESS)
 Main PID: 49814 (multipathd)
    Tasks: 6
   CGroup: /system.slice/multipathd.service
           └─49814 /sbin/multipathd

[root@redhat ~]# multipath -ll
3600a09807770457a795d4d4179475456 dm-2 NETAPP  ,LUN C-Mode   
size=2.0G features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  |- 4:0:0:0 sdb 8:16 active ready running
  `- 3:0:0:0 sdc 8:32 active ready running
[root@redhat ~]#

Note: make sure you have already discovered and logged into the iSCSI target, otherwise you will not see any output from the 'multipath -ll' command. Two easy commands to do that are:

[root@redhat /]# iscsiadm -m discovery -t st -p 192.168.0.x:3260
[root@redhat /]# iscsiadm -m node -l

All done!!!