Nimble Storage Fibre Channel & VMware ESXi Setup

This post covers the integration of a Nimble Storage Fibre Channel array into a VMware ESXi environment. The steps are fairly similar to integrating a Nimble iSCSI array, but there are some FC-specific settings that need to be configured.

First, go ahead and create a new volume on your array. Go to Manage -> Volumes and click on New Volume. Specify the Volume Name and Description, and select the appropriate Performance Policy to ensure proper block alignment. Next, select the initiator group that contains your ESXi host. If you don’t have an initiator group yet, click on New Initiator Group.


Name your new initiator group and specify the WWPNs of your ESXi hosts. This will allow your hosts to connect to the newly created volume.
Also, specify a unique LUN ID. In this case, I have assigned LUN ID 87.
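To fill in the initiator group you need the WWPNs of the host's FC adapters. A quick way to grab them is `esxcli storage san fc list` on the ESXi host; the snippet below is a small sketch with a guard so it degrades gracefully when run anywhere other than an ESXi shell:

```shell
# Sketch: list the host's FC adapters and their WWPNs/WWNNs so they can be
# copied into the Nimble initiator group. The guard is only there so the
# script doesn't error out on a machine without esxcli.
if command -v esxcli >/dev/null 2>&1; then
  esxcli storage san fc list   # shows Adapter, Port Name (WWPN), Node Name (WWNN)
else
  echo "esxcli not found - run this on the ESXi host itself (e.g. over SSH)"
fi
```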


Next, specify the size and reservation settings for the volume.


Specify any protection schedule if required and click on Finish to create the volume.


Now the volume is created on the array, and your initiator group allows your host’s FC HBAs to connect.
After a rescan of the FC HBAs, I can see my LUN with ID 87.



Looking at the path details for LUN 87, you can see 8 paths (2 HBAs x 4 target ports). The PSP should be set to NIMBLE_PSP_DIRECTED.
I have 4 Active (I/O) paths and 4 Standby paths. The Active (I/O) paths go to the active controller, and the Standby paths go to the standby controller.
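The path math above can be sketched with a quick shell calculation. The numbers here match this particular setup (2 HBA ports, 4 zoned target ports across both controllers); adjust them for your own environment:

```shell
# Expected paths per LUN = (host HBA ports) x (array target ports the host can
# reach through zoning). In this setup: 2 x 4 = 8, reported by ESXi as
# 4 Active (I/O) paths to the active controller and 4 Standby paths.
hba_ports=2
target_ports=4   # across both controllers (2 per controller here)
paths=$((hba_ports * target_ports))
echo "paths per LUN: $paths"   # prints: paths per LUN: 8
```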



On the array I can now see all 8 paths under Manage -> Connections.


The volume can now be used as a Raw Device Mapping or as a datastore. Once the zones on your FC switches are configured, these are all the steps required to connect your FC array to an ESXi host.


Some of the images have been provided by Rich Fenton, one of Nimble’s Sales Engineers in the UK.

Nimble Storage Fibre Channel Array Setup

Since Nimble Storage introduced Fibre Channel, I’m sure that many of our customers and prospects want to use their new FC array.
In this post, I will cover how to setup your new FC array and indicate what has changed in the setup manager as well as in the WebUI of the array.

All Fibre Channel arrays will ship with the Nimble OS version that first supports FC.
Once you have unpacked, racked, and cabled your new array, power it on. For the initial setup, you will need the Nimble Setup Manager on your local machine. The Nimble Setup Manager is part of the Nimble Windows Toolkit and can be downloaded from InfoSight. If you do not have an InfoSight login yet, please register as a new user.

Note: You will need your array serial number to register successfully.

After you start the Nimble Setup Manager, it will find your storage array and ask you to accept the EULA.
Next, you will be asked whether you want to add this array to an existing group or set it up as a standalone array.


In this setup, we decided not to join an existing group. Specify the array & group name and some additional management settings, and hit Next.



In the next screen you have to specify your subnet labels. Since this is a Fibre Channel array, you do not need to specify a data subnet. However, we have chosen to create a data subnet dedicated for replication.


Finally, we can see the actual FC ports; hovering over each FC port shows its operational speed.
By the way, don’t forget to set your diagnostic IPs. Those come in handy if you ever have to engage Support.


The next screen should look familiar again as it is the same for every Nimble Storage array. Specify the domain name and your DNS server.


Also, this screen should look familiar; nothing has changed here. Specify your time zone and an NTP server.


This is the final step of the initial setup. Make sure to set up an unauthenticated SMTP relay on your mail server for your new array.
Also, please check the box for Send event data to Nimble Storage Support. A lot of Nimble’s case automation and proactive wellness relies on email alerts.
If you think you don’t need email alerts and all this proactive wellness stuff, watch this video and see what you would miss out on. I highly recommend enabling those alerts!

Additionally, make sure Autosupport is enabled and working. Autosupport data also plays a big role in Nimble’s proactive wellness & InfoSight.
Once you are done, hit Finish and your array is ready for some action.


Go to Manage -> Arrays and select your array name. It will open this part of the WebUI and you can see your Ethernet and FC ports as well as the usual details.


Head over to Administration -> Network Configuration, select the active configuration, and open the Interfaces tab. Here, you can see all your FC ports, including their WWPNs and the WWNN.
For those new to FC: WWPN = World Wide Port Name and WWNN = World Wide Node Name.
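Both identifiers are 64-bit values, conventionally written as 8 colon-separated hex byte pairs. A tiny illustrative check (the sample WWPN below is made up for demonstration):

```shell
# Sketch: validate that a string looks like a WWPN/WWNN, i.e. 8 colon-separated
# hex byte pairs, e.g. 20:00:00:25:b5:00:00:1f (a made-up example value).
is_wwn() {
  echo "$1" | grep -Eq '^([0-9a-fA-F]{2}:){7}[0-9a-fA-F]{2}$'
}
is_wwn "20:00:00:25:b5:00:00:1f" && echo valid || echo invalid   # prints: valid
```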


In addition to the new Interfaces tab, Nimble Storage also changed the Initiator Group UI to accommodate FC initiators/WWPNs.


All images have been provided by Rich Fenton, one of Nimble’s Sales Engineers in the UK.

ESXi Fibre Channel Configuration Maximums

Today we ran into an issue where we could not see all the LUNs presented to our hosts. In this case, we already had multiple Nimble arrays connected. We added 6 more LUNs from one of the Nimble arrays to the initiator group mapped to the ESXi host. We checked the logged-in initiators, the initiator group, and the zones, and EVERYTHING appeared to be connected correctly. We have 4 initiators coming out of the ESXi host, and all 4 were logging into the fabric and the array and showing connections to the LUNs. However… not all of the LUNs were showing up under storage adapters on the host. This is what we saw:

HBA1 sees 0, 1, 2 and 3

HBA2 sees 0, 1 and 2

HBA3 sees 0, 1, 2, 3, 4 and 5

HBA4 sees 0, 1 and 2

Needless to say… that was kind of odd, since everything was showing up as logged in. We rescanned multiple times and restarted the vSphere Client for good measure. Eventually, we ran the command esxcli storage core adapter rescan --all from the command line. When we did THAT… the system spat out a bunch of errors that were pretty close to this:

The maximum number of supported paths of 1024 has been reached. Path vmhba2:C0:T0:L3 could not be added.
The number of paths allocated has reached the maximum: 1024. Path: vmhba5:C0:T6:L28 will not be allocated. This warning won’t be repeated until some paths are removed.

If you look at the vSphere Configuration Maximums documentation, you will see that ESXi supports a MAXIMUM of 1024 paths per SERVER. This is not a per-ADAPTER limit…

We’ve seen some of the ESXi maximums in the past with iSCSI, but usually we hit the device limit long before we hit the path limit.

So two things to learn from this:

  • Be aware that the vSphere Client won’t always be verbose about what’s going on.
  • Be aware of the ESXi Configuration Maximums. Whichever limit you hit first wins. It’s not a this, this, and this sort of thing; it’s a this or this or this kind of thing.
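A quick back-of-the-envelope path budget makes the failure mode obvious. Assuming every LUN is visible through every initiator/target pair (the numbers below match the 4-initiator host in this story; the target port count is an assumption you should replace with your own zoning):

```shell
# Sketch of a per-host path budget against the 1024-path limit.
hbas=4             # initiator ports on the host (as in this story)
targets=4          # assumed array target ports zoned to each HBA
paths_per_lun=$((hbas * targets))
max_paths=1024     # per-SERVER limit, per the ESXi Configuration Maximums
echo "paths per LUN: $paths_per_lun"
echo "LUNs before hitting the path limit: $((max_paths / paths_per_lun))"
```

With 16 paths per LUN, the host tops out at 64 LUNs; every LUN past that silently loses paths on whichever HBA happens to claim them last, which is exactly the uneven per-HBA visibility we saw.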

Nimble Storage Fibre Channel

On Monday, November 17th, Nimble Storage announced official Fibre Channel support. This is another big milestone achieved by the team at Nimble. Fibre Channel will be available for the CS300, CS500 and CS700. The CS210 and CS215 will not be supported.



Below are some screenshots of what’s new in the WebUI:


As you can see above, fc9 and fc10 are the FC ports on both controllers. Hovering over those ports shows their location.



Additionally, the Initiator Groups have been modified to accommodate WWPNs.


Adding new WWPNs is as easy as we’re used to from the iSCSI initiator setup.


Overall, nothing extravagant has been added to the WebUI. Everything has been kept simple, as we expected and as we love about Nimble. This most certainly opens up a bigger market share for Nimble Storage.

Over the next few days, I’ll provide more details about Nimble’s FC integration.