VMware VVOL – Overview

Over the last couple of months, I have tried to get up to speed with all the talk about VMware VVOL and how it will change the virtualization game. Below is a quick summary that should give you a good overview of what will change:

VMware VVOL Overview

VVOLs will no longer use VMFS3 or VMFS5 and will replace datastores as we know them today. Unlike LUNs, VVOLs are not pre-provisioned; they are created when you create, power on, clone or snapshot a VM. VASA will be used to manage the underlying storage arrays through ESXi. For those who are not familiar with VASA, the vSphere APIs for Storage Awareness are used to create and present storage on the array to ESXi hosts. Storage arrays will be partitioned into logical containers (Storage Containers). Note: Storage Containers are not LUNs.

Virtual machine configuration files will be stored on Storage Containers.
Additionally, the main data services will be offloaded to the array. Storage policies can specify the capabilities of the underlying storage array. This makes it possible to enable capabilities like snapshots, replication, deduplication and encryption on a per-VM level rather than a per-datastore level.

VASA Provider (VP)
The VP is a software component developed and provided by your storage vendor. It is used to present the array-side capabilities to the ESXi host, which can then utilize them or assign them to virtual machines. The VASA Provider uses the vSphere APIs for Storage Awareness, which are exported by ESXi. Most storage vendors will have the VP sitting on their controllers; however, a VP can also exist as a virtual appliance.
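As a quick sanity check once the VP is registered, ESXi 6.0 is expected (based on the vSphere 6.0 beta) to expose a VVOL namespace in esxcli; a sketch, since the exact syntax may still change before release:

    # List the VASA Providers this ESXi host knows about (ESXi 6.0 esxcli namespace)
    esxcli storage vvol vasaprovider list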
Protocol Endpoint (PE)
Protocol Endpoints enable the communication between ESXi hosts and the storage array. PEs are managed by the storage administrator and support FC, iSCSI, FCoE and NFS. New PEs will be discovered as part of the rescan process on an ESXi host.
Multi-pathing will work exactly the same way, and existing multi-path policies can be applied.
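Again based on the vSphere 6.0 beta, discovering and then listing PEs from the host should look roughly like this (a sketch, not final syntax):

    # Rescan the host's adapters, then list the Protocol Endpoints that were discovered
    esxcli storage core adapter rescan --all
    esxcli storage vvol protocolendpoint list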
Storage Containers
Storage Containers are not LUNs!
A single Storage Container can host many VVOLs, and its capacity is based on the physical storage capacity of your array.
As mentioned before, Storage Policies will be used to represent the capabilities of the underlying array. Storage Policies can be assigned to Storage Containers or VMs. Multiple PEs can access the same Storage Container, and every array has at least one Storage Container. Storage Containers will remind most people of datastores.
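Along the same lines, the vSphere 6.0 beta CLI suggests the Storage Containers visible to a host will be listable like this (a sketch):

    # List the Storage Containers this ESXi host can see
    esxcli storage vvol storagecontainer list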
Difference between LUN and Storage Containers?
LUN:
  • Fixed size
  • Needs a FileSystem
  • Storage capabilities can only be applied to all VMs provisioned on that LUN
  • Managed by file system commands
Storage Containers:
  • Size based on array capacity
  • The maximum number of Storage Containers depends only on the array’s capabilities
  • Can dynamically be resized (expand & shrink)
  • Can distinguish between storage capabilities for different VMs (Virtual Volumes) provisioned on the same SC
For a detailed overview of VVOL and everything else that will be new, watch this video by Paudie O’Riordan.

Nimble Storage Fibre Channel & VMware ESXi Setup

This post will cover the integration of a Nimble Storage Fibre Channel array into a VMware ESXi environment. The steps are fairly similar to integrating a Nimble iSCSI array, but there are some FC-specific settings which need to be configured.

First, go ahead and create a new volume on your array. Go to Manage -> Volumes and click on New Volume. Specify the Volume Name and Description, and select the appropriate Performance Policy to ensure proper block alignment. Next, select the initiator group which contains the information of your ESXi host. If you don’t have an initiator group yet, click on New Initiator Group.

Create Volume FC

Name your new initiator group and specify the WWNs of your ESXi hosts. This will allow your hosts to connect to the newly created volume.
Also, specify a unique LUN ID. In this case, I have assigned LUN ID 87.
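If you are not sure which WWNs to enter, you can pull them straight from the host; for FC HBAs, the UID column printed by esxcli contains the node and port WWN (your vmhba numbers and WWNs will differ, this is just a sketch):

    # The UID column shows fc.<node WWN>:<port WWN> for each FC HBA
    esxcli storage core adapter list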

Screen Shot 2014-11-20 at 8.58.41 PM

 

Screen Shot 2014-11-20 at 9.41.54 PM

Next, specify the size and reservation settings for the volume.

Screen Shot 2014-11-20 at 8.59.45 PM

Specify any protection schedule if required and click on Finish to create the volume.

Screen Shot 2014-11-20 at 9.31.13 PM

Now the volume is created on the array, and your initiators are set up to allow your host’s FC HBAs to connect.
After a rescan of the FC HBAs, I can see my LUN with ID 87.
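The rescan and a quick check for the new LUN can also be done from the CLI; a sketch:

    # Rescan all HBAs, then confirm paths to LUN ID 87 are present
    esxcli storage core adapter rescan --all
    esxcli storage core path list | grep "LUN: 87"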

Screen Shot 2014-11-20 at 9.40.31 PM

 

Looking at the path details for LUN 87, you can see 8 paths (2 HBAs x 4 target ports). The PSP should be set to NIMBLE_PSP_DIRECTED.
I have 4 Active(I/O) paths and 4 Standby paths. The Active(I/O) paths are going to the active controller and the Standby paths are for the standby controller.
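The same details are available from the CLI; a rough sketch (substitute your volume’s naa identifier, the one below is just a placeholder):

    # Show the PSP in use and the working paths for the device
    esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxx
    # Count the paths to the device (expecting 8 here: 2 HBAs x 4 target ports)
    esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx | grep -c "Runtime Name"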

Screen Shot 2014-11-20 at 9.49.51 PM

 

On the array I can now see all 8 paths under Manage -> Connections.

Screen Shot 2014-11-20 at 9.53.17 PM

The volume can now be used as a Raw Device Mapping or a datastore. Those were all the steps required to get your FC array connected to an ESXi host, once the zones on your FC switches are configured.
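As a side note, if you go the RDM route, the mapping file can also be created from the CLI; a minimal sketch with made-up device and path names:

    # Create a physical-mode RDM pointer file for the new LUN (device and paths are examples)
    vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/rdms/lun87-rdm.vmdk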

 

Some of the images have been provided by Rich Fenton, one of Nimble’s Sales Engineers in the UK.

ESXi Fibre Channel Configuration Maximums

Today we ran into an issue where we could not see all the LUNs presented to our hosts. In this case, we had multiple Nimble arrays connected already. We added 6 more LUNs from one of the Nimbles to the initiator group that maps to the ESXi host. We checked the logged-in initiators, the initiator group, and the zones, and EVERYTHING appeared to be connected correctly. We have 4 initiators coming out of the ESXi host and all 4 were logging into the fabric and the array, and showing connections to the LUNs.  However… not all of the LUNs were showing up under storage adapters on the host.  This is what we saw:

HBA1 sees 0, 1, 2 and 3

HBA2 sees 0, 1 and 2

HBA3 sees 0, 1, 2, 3, 4 and 5

HBA4 sees 0, 1 and 2

Needless to say… that was kind of odd since everything was showing up as logged in. We rescanned multiple times and restarted the vSphere Client for good measure.  Eventually, we ran the command esxcli storage core adapter rescan --all from the command line.  When we did THAT… the system spit out a bunch of errors that were pretty close to this:

The maximum number of supported paths of 1024 has been reached. Path vmhba2:C0:T0:L3 could not be added.
The number of paths allocated has reached the maximum: 1024. Path: vmhba5:C0:T6:L28 will not be allocated. This warning won’t be repeated until some paths are removed.

 If you look here:

 http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf

You will see that ESXi supports a MAXIMUM of 1024 paths per SERVER.  This is not a per-ADAPTER thing…

We’ve seen some of the ESX Maximums in the past with iSCSI but usually we hit the device limit long before we hit the path limit.  
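The arithmetic is simple: paths per LUN x number of LUNs has to stay under 1024, so with 8 paths per LUN, for example, you hit the ceiling at 128 LUNs. A quick way to see where a host stands (just a sketch using standard esxcli output):

    # Total number of paths currently claimed on this host (limit is 1024)
    esxcli storage core path list | grep -c "Runtime Name"
    # e.g. 8 paths per LUN -> 1024 / 8 = 128 LUNs maximum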

So two things to learn from this:

  • Be aware that the vSphere Client won’t always be verbose about what’s going on.
  • Be aware of the ESXi Configuration Maximums. The way they have it is that whichever limit you hit first wins. It’s not a this, this, and this sorta thing.  It’s a this or this or this kind of thing.

OpenStack & Nimble Storage ITO feature

Nimble Storage’s Cinder Driver includes a new feature called ITO – Image Transfer Optimization.

With most cinder backends, every time you deploy a new instance from an image, a volume/LUN gets created on the backend storage.
This means you might potentially use up a lot of space for data which is redundant.
In order to avoid unnecessary duplication of data, Nimble Storage introduced ITO – Image Transfer Optimization.

ITO will be helpful in cases where you might want to create 20 instances at a time from the same ISO.
With ITO, only one volume with the ISO will be created and then zero-copy clones will be utilized in order to boot the other 19 instances.
This seems to be the most space-efficient way of deploying instances.
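To illustrate, here is a hedged sketch of creating several bootable volumes from the same Glance image (the image name and sizes are made up, and nimble1 is a Nimble-backed volume type like the one created in the multi-backend post below); with ITO, the image only has to be transferred to the array once and the remaining volumes become zero-copy clones:

    # Create three bootable volumes from the same image
    IMAGE_ID=$(glance image-list | awk '/ubuntu-14.04/ {print $2}')
    for i in 1 2 3; do
        cinder create --image-id "$IMAGE_ID" --volume_type nimble1 --display-name "boot-vol-$i" 20
    done
    # Each volume can then back an instance, e.g. nova boot --boot-volume <volume-id> ...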

The benefits are simple:

  • Instant Copy
  • No duplicated data
  • Shared Cache

Below, you can see the workflow for deploying instances without ITO enabled:
no_ito

And here with ITO enabled:

ito_enabled

 

Thanks to @wensteryu for the images.

OpenStack & Nimble Storage – Cinder multi backend

This post describes how to set up a Nimble Storage array within a Cinder multi-backend configuration, running OpenStack Icehouse.
If you are new to OpenStack or Cinder, you might be asking why you should run a single-backend vs. a multi-backend configuration.

Basically, single-backend means you are using a single storage array (or a single group of arrays) as your backend storage. In a multi-backend configuration, you might have storage arrays from multiple vendors, or you might just have different Nimble Storage arrays which provide different levels of performance. For example, you might want to use your CS700 as high-performance storage and your CS220 for less performance-intensive workloads.
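As a rough illustration of that CS700/CS220 split (section names, IPs and credentials here are hypothetical), a multi-backend cinder.conf carries one section per backend, and each section's volume_backend_name is what a volume type's extra spec is matched against:

    [DEFAULT]
    enabled_backends=nimble-cs700,nimble-cs220

    [nimble-cs700]
    volume_backend_name=nimble-cs700
    san_ip=cs700-management-ip
    san_login=admin_user
    san_password=password
    volume_driver=cinder.volume.drivers.nimble.NimbleISCSIDriver

    [nimble-cs220]
    volume_backend_name=nimble-cs220
    san_ip=cs220-management-ip
    san_login=admin_user
    san_password=password
    volume_driver=cinder.volume.drivers.nimble.NimbleISCSIDriver

You would then create one volume type per backend and map it with cinder type-key, exactly as in steps 4 and 5 below.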

  1. Upload your Cinder driver to /usr/lib/python2.6/site-packages/cinder/volume/drivers
  2. Add the Nimble Cinder parameters to /etc/cinder/cinder.conf as a new section
    #Nimble Cinder Configuration 
    [nimble-cinder]
    san_ip=management-ip
    san_login=admin_user
    san_password=password
    volume_driver=cinder.volume.drivers.nimble.NimbleISCSIDriver
  3. Add nimble-cinder to enabled_backends. If enabled_backends does not yet exist in your cinder.conf file, please add the following line:
    enabled_backends=nimble-cinder,any_additional_backends
  4. Create a new volume type for the nimble-cinder backend:
    [root@TS-Training-OS-01 nova(keystone_admin)]# cinder type-create nimble1
  5. Next, link the backend name to the volume type:
    [root@TS-Training-OS-01 nova(keystone_admin)]# cinder type-key nimble1 set volume_backend_name=nimble-cinder
  6. Restart cinder-api, cinder-scheduler and cinder-volume
    [root@TS-Training-OS-01 (keystone_admin)]# service openstack-cinder-scheduler restart
          Stopping openstack-cinder-scheduler:                       [  OK  ]
          Starting openstack-cinder-scheduler:                       [  OK  ]
          [root@TS-Training-OS-01 (keystone_admin)]# service openstack-cinder-api restart
          Stopping openstack-cinder-api:                             [  OK  ]
          Starting openstack-cinder-api:                             [  OK  ]
          [root@TS-Training-OS-01 (keystone_admin)]# service openstack-cinder-volume restart
          Stopping openstack-cinder-volume:                          [  OK  ]
          Starting openstack-cinder-volume:                          [  OK  ]
  7. Create a volume either via Horizon or the CLI
    [root@TS-Training-OS-01 nova(keystone_admin)]# cinder create --volume_type nimble1 --display-name test_volume 50
          +---------------------+--------------------------------------+
          |       Property      |                Value                 |
          +---------------------+--------------------------------------+
          |     attachments     |                  []                  |
          |  availability_zone  |                 nova                 |
          |       bootable      |                false                 |
          |      created_at     |      2014-11-05T18:23:54.011013      |
          | display_description |                 None                 |     
          |     display_name    |             test_volume              |
          |      encrypted      |                False                 |
          |          id         | 6cce44ad-a71f-4973-b862-aefe9c5f0a79 |
          |       metadata      |                  {}                  |
          |         size        |                  50                  |
          |     snapshot_id     |                 None                 |
          |     source_volid    |                 None                 |
          |         status      |                creating              |
          |     volume_type     |                nimble1               |
          +---------------------+--------------------------------------+
  8. Verify the volume has successfully been created
     [root@TS-Training-OS-01 nova(keystone_admin)]# cinder list
     Screen Shot 2014-11-11 at 5.03.29 PM
  9. Verify the creation of the volume on your storage array. Go to Manage -> Volumes
    openstack_array
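If a new volume does not land on the expected array, two quick checks help confirm the type-to-backend mapping (a sketch; the extra spec has to match whatever backend name the Nimble backend reports):

    # Show the extra specs attached to each volume type
    cinder extra-specs-list
    # As admin, check which backend the scheduler placed the volume on
    cinder show test_volume | grep os-vol-host-attr:host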