Nimble Storage Cinder Integration

In this post, I will only cover the Nimble Storage Cinder Integration for OpenStack Icehouse.

Some of you already have an OpenStack cloud in your environment and also own a Nimble Storage array; others might not have an OpenStack cloud yet but are considering one. Nimble Storage just officially announced their OpenStack integration. Starting with Juno, the Nimble Cinder driver will ship with the OpenStack release. The actual approval and blueprint can be found here. For Icehouse, you’ll need to download the driver from InfoSight or request it from support.

Follow these 6 steps to upload, configure, and test the Nimble Cinder driver:

Note: The steps below cover a single-backend configuration. A multi-backend configuration will be covered in a separate post.

  1. Upload your Cinder driver to /usr/lib/python2.6/site-packages/cinder/volume/drivers
  2. Add the Nimble Cinder parameters to /etc/cinder/cinder.conf within the [DEFAULT] section
    #Nimble Cinder Configuration 
    san_ip=management-ip
    san_login=admin_user
    san_password=password
    volume_driver=cinder.volume.drivers.nimble.NimbleISCSIDriver
  3. Restart cinder-api, cinder-scheduler and cinder-volume
    [root@TS-Training-OS-01 (keystone_admin)]# service openstack-cinder-scheduler restart
          Stopping openstack-cinder-scheduler:                       [  OK  ]
          Starting openstack-cinder-scheduler:                       [  OK  ]
          [root@TS-Training-OS-01 (keystone_admin)]# service openstack-cinder-api restart
          Stopping openstack-cinder-api:                             [  OK  ]
          Starting openstack-cinder-api:                             [  OK  ]
          [root@TS-Training-OS-01 (keystone_admin)]# service openstack-cinder-volume restart
          Stopping openstack-cinder-volume:                          [  OK  ]
          Starting openstack-cinder-volume:                          [  OK  ]
  4. Create a volume either via Horizon or the CLI
    [root@TS-Training-OS-01 nova(keystone_admin)]# cinder create --display-name test_volume 50
          +---------------------+--------------------------------------+
          |       Property      |                Value                 |
          +---------------------+--------------------------------------+
          |     attachments     |                  []                  |
          |  availability_zone  |                 nova                 |
          |       bootable      |                false                 |
          |      created_at     |      2014-11-05T18:23:54.011013      |
          | display_description |                 None                 |     
          |     display_name    |             test_volume              |
          |      encrypted      |                False                 |
          |          id         | 6cce44ad-a71f-4973-b862-aefe9c5f0a79 |
          |       metadata      |                  {}                  |
          |         size        |                  50                  |
          |     snapshot_id     |                 None                 |
          |     source_volid    |                 None                 |
          |         status      |                creating              |
          |     volume_type     |                 None                 |
          +---------------------+--------------------------------------+
  5. Verify the volume has successfully been created
     [root@TS-Training-OS-01 nova(keystone_admin)]# cinder list
     (screenshot: cinder list output showing the new volume)
  6. Verify the creation of the volume on your storage array. Go to Manage -> Volumes
    (screenshot: the test_volume on the Nimble array)
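Before restarting the services in step 3, it can help to sanity-check that all four settings from step 2 actually made it into the config file. A minimal sketch, run here against an example file with made-up values; on a real system you would point CONF at /etc/cinder/cinder.conf and the values come from your array:

```shell
# The file and values below are illustrative only; on a real system
# set CONF=/etc/cinder/cinder.conf and skip the heredoc.
CONF=./cinder.conf.example
cat > "$CONF" <<'EOF'
[DEFAULT]
san_ip=10.0.0.50
san_login=admin
san_password=secret
volume_driver=cinder.volume.drivers.nimble.NimbleISCSIDriver
EOF

# Fail loudly if any required Nimble key is missing
for key in san_ip san_login san_password volume_driver; do
  grep -q "^${key}=" "$CONF" || { echo "missing: $key"; exit 1; }
done
echo "all Nimble settings present"
```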

 

Silicon Valley OpenStack Ops Meetup

Yesterday, I attended the Silicon Valley OpenStack Ops Meetup and held a troubleshooting session focusing on Cinder, Keystone and Nova. The event was hosted by Nimble Storage, ElasticBox and SwiftStack. The focus of this Meetup was to share tips and tricks.

The event was hosted at Nimble Storage’s campus in San Jose, CA.

(photo: Nimble Storage HQ, 211-281 River Oaks Pkwy)

Even though the San Francisco Giants were playing their first World Series game, roughly 100 people attended the event. I think this is a pretty good turnout for the first Silicon Valley OpenStack Ops Meetup.


I was lucky to get a slot together with Wen Yu to cover OpenStack Shared Storage and Troubleshooting Tips and Tricks. To be honest, I have never been this nervous before; it was my first time speaking in front of more than 20 people.

 

Agenda:

5:45 PM – Doors Open, Food Served, Meet and Greet
6:20 PM – Bill Borsari & Pat Darisme ( Meetup Organizers ),  Nimble Storage – Meet Up kickoff
6:30 PM – Ravi Srivatsav ( CEO ), ElasticBox – Avoiding cloud lock-in to give you total freedom to build, manage, and deploy applications faster than ever before.
6:50 PM – John Dickinson ( SwiftStack technical lead & OpenStack Swift PLM ), SwiftStack – Swift Product Line Manager talks about Object Storage and Swift in the Enterprise
7:10 PM – Wen Yu ( Nimble Product Manager ) & Jan Schwoebel ( Nimble Virtualization Support Lead ), Nimble Storage – OpenStack Shared Storage and Troubleshooting Tips and Tricks
7:30 PM – 9 PM – Meet the Presenters

  • Bill Borsari and Pat Darisme kicked off the event and welcomed all participants who made it out despite the SF Giants’ first World Series game.
  • Robin
    • Overview of ElasticBox
      • Mission: ElasticBox empowers business to innovate faster by making it insanely easy for IT, ops and developers to build, manage and deploy applications in the cloud
      • Architecture:
        • Build any application and host it within any supported cloud (Amazon, Google, VMware, OpenStack,…)
        • Seamlessly migrate applications from cloud to cloud; don’t be locked in to one cloud solution
        • Share applications and “boxes” with people
          • Boxes are a bundle of packages
  • John Dickinson – Slides can be found here 
    • What is Swift?
      • Swift is an Object Store
      • Great for unstructured data which grows and grows (Images, Videos, Documents,…)
    • What problem does Swift solve?
      • It is built for availability and durability
      • Users no longer have to worry about where the data is located
      • Great manageability
      • Migrate data without any downtime for your users
    • How does SwiftStack fit in?
      • Provides a management and control center for Swift
      • Adds two additional components: a controller and a gateway
      • The gateway is an SMB/CIFS and NFS server
      • SwiftStack will provide an all-day workshop in SF on October 28th. Details can be found here
  • Wen Yu
    • Value of Shared Storage
    • Nimble Cinder Features
    • ITO – Image Transfer Optimization.


  • Jan Schwoebel
    • OpenStack Troubleshooting and Tips
    • About me
    • Troubleshooting Keystone
    • Troubleshooting Cinder
    • Troubleshooting Nova


Unfortunately, I haven’t received the slides from Robin and John yet. However, as soon as I receive them, I’ll add them to this post.

Jumbo Frames – Do It Right

Configuring jumbo frames can be such a pain if it isn’t done properly. Over the last couple of years, I have seen many customers with mismatched MTUs due to improperly configured jumbo frames. Done right, jumbo frames can increase the overall network performance between your hosts and your storage array, and they are advisable if you have a 10GbE connection to your storage device. Configured improperly, however, jumbo frames quickly become your worst nightmare: I have seen them cause performance issues, dropped connections, and even ESXi hosts losing storage devices.

Now that we know what kind of issues jumbo frames can cause and when it is advisable to use them, let’s discuss some details about jumbo frames:

  • Larger than 1500 bytes
  • Many devices support up to 9216 bytes
    • Refer to your switch manual for the proper setting
  • Most people use “jumbo frames” to mean an MTU of 9000 bytes
  • Misconfiguration often leads to an MTU mismatch
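The gap between the 9000-byte MTU hosts use and the 9216-byte maximum many switches support is easy to reconcile with a little arithmetic (a sketch; the exact overhead depends on whether VLAN tagging is in play):

```shell
# A 9000-byte IP MTU travels in an Ethernet frame that adds a 14-byte
# Ethernet header, a 4-byte FCS, and (if used) a 4-byte 802.1Q VLAN tag.
MTU=9000
FRAME=$((MTU + 14 + 4 + 4))
echo "$FRAME"   # prints 9022 -- comfortably under the 9216-byte switch limit
```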

 

The steps below offer guidance on how to set up jumbo frames properly:

Note: I recommend scheduling a maintenance window for this change!

On your Cisco Switch:

Please take a look at this Cisco page which lists the syntax for most of their switches.
Once the switch ports have been configured properly, we can go ahead and change the networking settings on the storage device.

On Nimble OS 1.4.x:

  1. Go to Manage -> Array -> Edit Network Addresses
  2. Change the MTU of your data interfaces from 1500 to jumbo

(screenshot: Nimble OS 1.4.x jumbo frame setting)

On Nimble OS 2.x:

  1. Go to Administration -> Network Configuration -> Active Settings -> Subnets
  2. Select your data subnet and click on edit. Change the MTU of your data interfaces from 1500 to jumbo.

(screenshot: Nimble OS 2.x jumbo frame setting)

 

On ESXi 5.x:

  1. Connect to your vCenter using the vSphere Client
  2. Go to Home -> Inventory -> Hosts and Clusters
  3. Select your ESXi host and click on Configuration -> Networking
  4. Click on Properties of the vSwitch which you want to configure for jumbo frames
  5. Select the vSwitch and click on Edit.
  6. Under “Advanced Properties”, change the MTU from 1500 to 9000 and click OK.
  7. Next, select your vmkernel port and click on Edit.
  8. Under “NIC settings”, change the MTU to 9000.
  9. Repeat steps 7 & 8 for all your vmkernel ports within this vSwitch.

After you have changed the settings on your storage device, switch, and ESXi host, log in to your ESXi host via SSH and run the following command to verify that jumbo frames work end to end:

vmkping -d -s 8972 -I vmkport_with_MTU_9000 storage_data_ip

If the ping succeeds, you’ve configured jumbo frames correctly.
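The -s 8972 value is not arbitrary: the ping payload plus the IP and ICMP headers must add up to exactly the 9000-byte MTU, and -d forbids fragmentation so an undersized link fails instead of silently fragmenting. A quick check of the arithmetic:

```shell
# Payload size for a 9000-byte MTU ping: subtract the IPv4 and ICMP headers.
MTU=9000
IP_HDR=20     # IPv4 header
ICMP_HDR=8    # ICMP echo header
PAYLOAD=$((MTU - IP_HDR - ICMP_HDR))
echo "$PAYLOAD"   # prints 8972 -- the value passed to vmkping -s
```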

Invalid OpenStack Nova Credentials

While playing around with my OpenStack Icehouse installation today, I went ahead and changed the password for the admin user via the WebUI (Horizon). Afterwards, I logged into the CLI, tried to run some commands, and got an error saying Error: Invalid OpenStack Nova credentials

[root@jschwoebel ~]# source keystonerc_admin
[root@jschwoebel ~(keystone_admin)]# nova list
ERROR: Invalid OpenStack Nova credentials.
[root@jschwoebel ~(keystone_admin)]# cinder list
ERROR: Invalid OpenStack Cinder credentials.

After some troubleshooting, I realized that when changing the password via Horizon, the keystonerc_admin file doesn’t automatically get updated; you have to update it manually.

Below are the steps for changing the admin password in OpenStack Icehouse:

1. Change the admin password in Horizon

(screenshots: changing the admin password in Horizon)

 

On the CLI:

1. Verify that the ~/keystonerc_admin file is still showing your old password

[root@jschwoebel ~(keystone_admin)]# cat keystonerc_admin

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=demo
export OS_AUTH_URL=http://10.66.32.196:5000/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '

2. Modify ~/keystonerc_admin and change OS_PASSWORD to the new password
3. Confirm that the OS_PASSWORD has been changed

[root@jschwoebel ~(keystone_admin)]# cat keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=demo1
export OS_AUTH_URL=http://10.66.32.196:5000/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '

4. Source ~/keystonerc_admin  again

[root@jschwoebel ~(keystone_admin)]# source keystonerc_admin

5. Test any OpenStack specific command

[root@jschwoebel ~(keystone_admin)]# nova list
+--------------------------------------+---------------------------------------------+--------+------------+-------------+---------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------------------------------------------+--------+------------+-------------+---------------------+
| 3aa85e86-efe0-419b-bec0-2f23549cc51d | lee-01-3aa85e86-efe0-419b-bec0-2f23549cc51d | ACTIVE | - | Running | public=172.24.4.228 |
| 8788da47-f4da-46a2-a1fe-25b242961d12 | lee-01-8788da47-f4da-46a2-a1fe-25b242961d12 | ACTIVE | - | Running | public=172.24.4.229 |
| 9b0d741e-da3d-4c3b-bb9d-db810557096e | lee-03 | ACTIVE | - | Running | public=172.24.4.231 |
| 9a5ef03f-c644-409a-aa8f-e9c306f5139c | lee-04 | ACTIVE | - | Running | public=172.24.4.233 |
| a160bee3-e686-4682-8593-164eebe0b5d4 | lee-05 | ACTIVE | - | Running | public=172.24.4.235 |
| aa232eb4-aef4-4b06-8196-6221f7f3bd73 | whatever | ACTIVE | - | Running | public=172.24.4.237 |
| f2e4fff3-2622-4b1a-a091-b4230e414f91 | yeah | ACTIVE | - | Running | public=172.24.4.227 |
+--------------------------------------+---------------------------------------------+--------+------------+-------------+---------------------+
[root@jschwoebel ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------------+------+-------------+----------+--------------------------------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------------+------+-------------+----------+--------------------------------------+
| 0cfddbf5-4dcf-4cb6-9332-c37e167e2861 | in-use | | 1 | None | true | 9a5ef03f-c644-409a-aa8f-e9c306f5139c |
+--------------------------------------+-----------+--------------------+------+-------------+----------+--------------------------------------+
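Steps 2 through 4 can also be scripted with sed instead of editing the file by hand. A sketch using a local example file with placeholder values ('demo'/'demo1' stand in for your old and new passwords); on a real system the file is ~/keystonerc_admin:

```shell
# Example rc file; on a real system skip the heredoc and use ~/keystonerc_admin.
RC=./keystonerc_admin.example
cat > "$RC" <<'EOF'
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=demo
export OS_AUTH_URL=http://10.66.32.196:5000/v2.0/
EOF

# Back up, swap the password line in place, then re-source the file
cp "$RC" "$RC.bak"
sed -i 's/^export OS_PASSWORD=.*/export OS_PASSWORD=demo1/' "$RC"
source "$RC"
echo "$OS_PASSWORD"   # prints demo1
```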

 

Windows 2012 with e1000e could cause data corruption

A couple of days ago, I spent two hours setting up two Windows Server 2012 VMs on my ESXi 5.1 cluster and tried to run some performance tests. When copying multiple ISOs across the network between the two VMs, I received an error that none of my 5 ISOs could be opened on the destination.

After checking the settings of my VMs, I saw that I used the default e1000e vNICs. Apparently, this is a known issue with Windows Server 2012 VMs using e1000e vNICs, running on top of VMware ESXi 5.0 and 5.1.

The scary part is that the e1000e vNIC is VMware’s default when creating a new VM. This means that if you don’t carefully select the correct vNIC type when creating your VM, you could potentially run into this data corruption issue.
The easiest workaround is to change the vNIC type from e1000e to e1000 or VMXNET3. However, if you use DHCP, your VM will get a new IP assigned, as the DHCP server will recognize the new NIC.

If you prefer not to change the vNIC type, you might just want to disable TCP segmentation offload on the Windows 2012 VMs.
There are three settings which should be changed:

IPv4 Checksum Offload


 

Large Send Offload (IPv4)


 

TCP Checksum Offload (IPv4)


 

Further details can be found in VMware KBA 2058692.