OpenStack & Nimble Storage ITO feature

Nimble Storage’s Cinder Driver includes a new feature called ITO – Image Transfer Optimization.

With most cinder backends, every time you deploy a new instance from an image, a volume/LUN gets created on the backend storage.
This means you can end up using a lot of space for redundant data.
In order to avoid unnecessary duplication of data, Nimble Storage introduced ITO – Image Transfer Optimization.

ITO will be helpful in cases where you might want to create 20 instances at a time from the same ISO.
With ITO, only one volume with the ISO will be created and then zero-copy clones will be utilized in order to boot the other 19 instances.
This makes it a very space-efficient way to deploy instances.
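As a rough illustration of that scenario, here is how booting those 20 instances from one image onto Cinder volumes might look from the CLI. This is only a sketch: the instance name ito-demo, the flavor m1.small, and the <image-id> placeholder are assumptions for the example, and the ITO behavior itself happens transparently on the array side.

  # Boot 20 instances from the same Glance image onto Cinder volumes.
  # Per the description above, only the first volume gets the full image
  # copy; the remaining instances are served from zero-copy clones.
  nova boot ito-demo \
    --flavor m1.small \
    --block-device source=image,id=<image-id>,dest=volume,size=20,bootindex=0,shutdown=remove \
    --min-count 20 --max-count 20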

The benefits are simple:

  • Instant Copy
  • No duplicated data
  • Shared Cache

Below, you can see the workflow for deploying instances without ITO enabled:
(diagram: instance deployment workflow without ITO)

And here with ITO enabled:

(diagram: instance deployment workflow with ITO enabled)

 

Thanks to @wensteryu for the images.

OpenStack & Nimble Storage – Cinder multi-backend

This post describes how to set up a Nimble Storage array in a Cinder multi-backend configuration running OpenStack Icehouse.
If you are new to OpenStack or Cinder, you might be asking why you would run a single-backend vs. a multi-backend configuration.

Basically, single-backend means you are using a single storage array (or a single group of arrays) as your backend storage. In a multi-backend configuration, you might have storage arrays from multiple vendors, or you might just have different Nimble Storage arrays that provide different levels of performance. For example, you might want to use your CS700 as high-performance storage and your CS220 for less performance-intensive workloads.
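Before going through the steps, here is a rough sketch of what a tiered, two-backend cinder.conf could look like. The section names nimble-cs700 and nimble-cs220, the IPs and the credentials are placeholders, not required values:

    #Example: two Nimble backends as separate performance tiers
    [DEFAULT]
    enabled_backends=nimble-cs700,nimble-cs220

    [nimble-cs700]
    san_ip=cs700-management-ip
    san_login=admin_user
    san_password=password
    volume_driver=cinder.volume.drivers.nimble.NimbleISCSIDriver
    volume_backend_name=nimble-cs700

    [nimble-cs220]
    san_ip=cs220-management-ip
    san_login=admin_user
    san_password=password
    volume_driver=cinder.volume.drivers.nimble.NimbleISCSIDriver
    volume_backend_name=nimble-cs220

With a layout like this, you would create one volume type per tier and point each type at the matching volume_backend_name, which is essentially what the steps below do for a single Nimble backend.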

  1. Upload your Cinder driver to /usr/lib/python2.6/site-packages/cinder/volume/drivers
  2. Add the Nimble Cinder parameters to /etc/cinder/cinder.conf as a new section
    #Nimble Cinder Configuration 
    [nimble-cinder]
    san_ip=management-ip
    san_login=admin_user
    san_password=password
    volume_driver=cinder.volume.drivers.nimble.NimbleISCSIDriver
  3. Add nimble-cinder to enabled_backends. If enabled_backends does not yet exist in your cinder.conf file, add the following line:
    enabled_backends=nimble-cinder,any_additional_backends
  4. Create a new volume type for the nimble-cinder backend:
    [root@TS-Training-OS-01 nova(keystone_admin)]# cinder type-create nimble1
  5. Next, link the backend name to the volume type (a quick way to verify this mapping is sketched right after this list):
    [root@TS-Training-OS-01 nova(keystone_admin)]# cinder type-key nimble1 set volume_backend_name=nimble-cinder
  6. Restart cinder-api, cinder-scheduler and cinder-volume
    [root@TS-Training-OS-01 (keystone_admin)]# service openstack-cinder-scheduler restart
          Stopping openstack-cinder-scheduler:                       [  OK  ]
          Starting openstack-cinder-scheduler:                       [  OK  ]
          [root@TS-Training-OS-01 (keystone_admin)]# service openstack-cinder-api restart
          Stopping openstack-cinder-api:                             [  OK  ]
          Starting openstack-cinder-api:                             [  OK  ]
          [root@TS-Training-OS-01 (keystone_admin)]# service openstack-cinder-volume restart
          Stopping openstack-cinder-volume:                          [  OK  ]
          Starting openstack-cinder-volume:                          [  OK  ]
  7. Create a volume either via Horizon or the CLI
    [root@TS-Training-OS-01 nova(keystone_admin)]# cinder create --volume_type nimble1 --display-name test_volume 50
          +---------------------+--------------------------------------+
          |       Property      |                Value                 |
          +---------------------+--------------------------------------+
          |     attachments     |                  []                  |
          |  availability_zone  |                 nova                 |
          |       bootable      |                false                 |
          |      created_at     |      2014-11-05T18:23:54.011013      |
          | display_description |                 None                 |     
          |     display_name    |             test_volume              |
          |      encrypted      |                False                 |
          |          id         | 6cce44ad-a71f-4973-b862-aefe9c5f0a79 |
          |       metadata      |                  {}                  |
          |         size        |                  50                  |
          |     snapshot_id     |                 None                 |
          |     source_volid    |                 None                 |
          |         status      |                creating              |
          |     volume_type     |                nimble1               |
          +---------------------+--------------------------------------+
  8. Verify the volume has successfully been created
     [root@TS-Training-OS-01 nova(keystone_admin)]# cinder list
     (screenshot: cinder list output showing the new volume)
  9. Verify the creation of the volume on your storage array. Go to Manage -> Volumes
    (screenshot: the Volumes page on the Nimble array)
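As mentioned in step 5, you can double-check the type-to-backend mapping before creating volumes. A quick sketch using standard cinder client commands (the output will depend on your environment):

    # Confirm the volume type exists and that nimble1 carries
    # volume_backend_name=nimble-cinder in its extra specs
    [root@TS-Training-OS-01 nova(keystone_admin)]# cinder type-list
    [root@TS-Training-OS-01 nova(keystone_admin)]# cinder extra-specs-list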

 

Nimble Storage Cinder Integration

In this post, I will only cover the Nimble Storage Cinder Integration for OpenStack Icehouse.

Some of you already have an OpenStack cloud in your environment and also own a Nimble Storage array; others might not have an OpenStack cloud yet but are considering it. Nimble Storage just officially announced their OpenStack integration. Starting with Juno, the Nimble Cinder driver will be shipped with the OpenStack release. The actual approval and blueprint can be found here. For Icehouse, you’ll need to download the driver from InfoSight or request it from support.

Follow these six steps to upload, configure, and test the Nimble Cinder driver:

Note: The steps below cover a single-backend configuration. A multi-backend configuration will be covered in a separate post.

  1. Upload your Cinder driver to /usr/lib/python2.6/site-packages/cinder/volume/drivers
  2. Add the Nimble Cinder parameters to /etc/cinder/cinder.conf within the [DEFAULT] section
    #Nimble Cinder Configuration 
    san_ip=management-ip
    san_login=admin_user
    san_password=password
    volume_driver=cinder.volume.drivers.nimble.NimbleISCSIDriver
  3. Restart cinder-api, cinder-scheduler and cinder-volume
    [root@TS-Training-OS-01 (keystone_admin)]# service openstack-cinder-scheduler restart
          Stopping openstack-cinder-scheduler:                       [  OK  ]
          Starting openstack-cinder-scheduler:                       [  OK  ]
          [root@TS-Training-OS-01 (keystone_admin)]# service openstack-cinder-api restart
          Stopping openstack-cinder-api:                             [  OK  ]
          Starting openstack-cinder-api:                             [  OK  ]
          [root@TS-Training-OS-01 (keystone_admin)]# service openstack-cinder-volume restart
          Stopping openstack-cinder-volume:                          [  OK  ]
          Starting openstack-cinder-volume:                          [  OK  ]
  4. Create a volume either via Horizon or the CLI
    [root@TS-Training-OS-01 nova(keystone_admin)]# cinder create --display-name test_volume 50
          +---------------------+--------------------------------------+
          |       Property      |                Value                 |
          +---------------------+--------------------------------------+
          |     attachments     |                  []                  |
          |  availability_zone  |                 nova                 |
          |       bootable      |                false                 |
          |      created_at     |      2014-11-05T18:23:54.011013      |
          | display_description |                 None                 |     
          |     display_name    |             test_volume              |
          |      encrypted      |                False                 |
          |          id         | 6cce44ad-a71f-4973-b862-aefe9c5f0a79 |
          |       metadata      |                  {}                  |
          |         size        |                  50                  |
          |     snapshot_id     |                 None                 |
          |     source_volid    |                 None                 |
          |         status      |                creating              |
          |     volume_type     |                 None                 |
          +---------------------+--------------------------------------+
  5. Verify the volume has been created successfully (an optional attach test is sketched after this list)
     [root@TS-Training-OS-01 nova(keystone_admin)]# cinder list
     (screenshot: cinder list output showing the new volume)
  6. Verify the creation of the volume on your storage array. Go to Manage -> Volumes
    (screenshot: the Volumes page on the Nimble array)
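As an optional final check (referenced in step 5), you can attach the new volume to a running instance and confirm it switches to in-use. A minimal sketch, assuming you substitute a real instance ID and the volume ID from the cinder list output above:

    # Attach the Cinder volume to an existing instance as /dev/vdb,
    # then confirm the attachment in the volume list
    [root@TS-Training-OS-01 nova(keystone_admin)]# nova volume-attach <instance-id> <volume-id> /dev/vdb
    [root@TS-Training-OS-01 nova(keystone_admin)]# cinder list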

 

Silicon Valley OpenStack Ops Meetup

Yesterday, I attended the Silicon Valley OpenStack Ops Meetup and held a troubleshooting session covering Cinder, Keystone and Nova. The event was hosted by Nimble Storage, ElasticBox and SwiftStack, and its focus was on sharing tips and tricks.

The event was hosted at Nimble Storage’s campus in San Jose, CA.

(photo: Nimble Storage HQ, 211-281 River Oaks Pkwy)

Even though the San Francisco Giants were playing their first World Series game, roughly 100 people attended the event. I think this is a pretty good turnout for the first Silicon Valley OpenStack Ops Meetup.

(photos from the event)

I was lucky to get a slot together with Wen Yu to cover OpenStack Shared Storage and Troubleshooting Tips and Tricks. To be honest, I have never been this nervous before; it was my first time speaking in front of more than 20 people.

 

Agenda:

5:45 PM – Doors Open, Food Served, Meet and Greet
6:20 PM – Bill Borsari & Pat Darisme (Meetup Organizers), Nimble Storage – Meetup kickoff
6:30 PM – Ravi Srivatsav (CEO), ElasticBox – Avoiding cloud lock-in to give you total freedom to build, manage, and deploy applications faster than ever before.
6:50 PM – John Dickinson (SwiftStack technical lead & OpenStack Swift PLM), SwiftStack – Swift Product Line Manager talks about Object Storage and Swift in the Enterprise
7:10 PM – Wen Yu (Nimble Product Manager) & Jan Schwoebel (Nimble Virtualization Support Lead), Nimble Storage – OpenStack Shared Storage and Troubleshooting Tips and Tricks
7:30 PM – 9 PM – Meet the Presenters

  • Bill Borsari and Pat Darisme kicked off the event and welcomed all the participants who made it out even though the SF Giants were playing their first World Series game.
  • Robin
    • Overview of ElasticBox
      • Mission: ElasticBox empowers businesses to innovate faster by making it insanely easy for IT, ops and developers to build, manage and deploy applications in the cloud
      • Architecture:
        • Build any application and host it in any supported cloud (Amazon, Google, VMware, OpenStack, …)
        • Seamlessly migrate applications from cloud to cloud; don’t be locked into one cloud solution
        • Share applications and “boxes” with people
          • Boxes are a bundle of packages
  • John Dickinson – Slides can be found here 
    • What is Swift?
      • Swift is an Object Store
      • Great for unstructured data which grows and grows (Images, Videos, Documents,…)
    • What problem does Swift solve?
      • It is built for availability and durability
      • Users no longer have to worry about where the data is located
      • Great manageability
      • Migrate data without any downtime for your users
    • How does SwiftStack fit in?
      • Provides a management and control center for Swift
      • Adds two additional components: controller & gateway
      • The gateway is an SMB/CIFS and NFS server
      • SwiftStack will provide an all-day workshop in SF on October 28th. Details can be found here
  • Wen Yu
    • Value of Shared Storage
    • Nimble Cinder Features
    • ITO – Image Transfer Optimization.

(slide screenshots from Wen Yu's session)

  • Jan Schwoebel
    • OpenStack Troubleshooting and Tips
    • About me
    • Troubleshooting Keystone
    • Troubleshooting Cinder
    • Troubleshooting Nova

(slide screenshots from my session)

Unfortunately, I haven’t received the slides from Robin and John yet. However, as soon as I receive them, I’ll add them to this post.

Invalid OpenStack Nova Credentials

While playing around with my OpenStack Icehouse installation today, I went ahead and changed the password for the admin user via the WebUI (Horizon). Afterwards, I logged into the CLI, tried to run some commands, and got an error saying Error: Invalid OpenStack Nova credentials.

[root@jschwoebel ~]# source keystonerc_admin
[root@jschwoebel ~(keystone_admin)]# nova list
ERROR: Invalid OpenStack Nova credentials.
[root@jschwoebel ~(keystone_admin)]# cinder list
ERROR: Invalid OpenStack Cinder credentials.

After some troubleshooting, I realized that when you change the password via Horizon, the keystonerc_admin file doesn’t get updated automatically and you have to do it manually.
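If you just want to fix it quickly, a one-liner like the following takes care of the manual update. This is only a sketch; it assumes your new password is new_password and that keystonerc_admin lives in root's home directory. The full step-by-step walkthrough follows below.

# Replace the old OS_PASSWORD value in keystonerc_admin and re-source it
[root@jschwoebel ~(keystone_admin)]# sed -i 's/^export OS_PASSWORD=.*/export OS_PASSWORD=new_password/' ~/keystonerc_admin
[root@jschwoebel ~(keystone_admin)]# source ~/keystonerc_admin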

Below are the steps for changing the admin password in OpenStack Icehouse:

1. Change the admin password in Horizon

(Horizon screenshots: changing the admin password)

On the CLI:

1. Verify that the ~/keystonerc_admin file is still showing your old password

[root@jschwoebel ~(keystone_admin)]# cat keystonerc_admin

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=demo
export OS_AUTH_URL=http://10.66.32.196:5000/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '

2. Modify ~/keystonerc_admin and change OS_PASSWORD to the new password
3. Confirm that the OS_PASSWORD has been changed

[root@jschwoebel ~(keystone_admin)]# cat keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=demo1
export OS_AUTH_URL=http://10.66.32.196:5000/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '

4. Source ~/keystonerc_admin again

[root@jschwoebel ~(keystone_admin)]# source keystonerc_admin

5. Test any OpenStack specific command

[root@jschwoebel ~(keystone_admin)]# nova list
+--------------------------------------+---------------------------------------------+--------+------------+-------------+---------------------+
|                  ID                  |                    Name                     | Status | Task State | Power State |       Networks      |
+--------------------------------------+---------------------------------------------+--------+------------+-------------+---------------------+
| 3aa85e86-efe0-419b-bec0-2f23549cc51d | lee-01-3aa85e86-efe0-419b-bec0-2f23549cc51d | ACTIVE |     -      |   Running   | public=172.24.4.228 |
| 8788da47-f4da-46a2-a1fe-25b242961d12 | lee-01-8788da47-f4da-46a2-a1fe-25b242961d12 | ACTIVE |     -      |   Running   | public=172.24.4.229 |
| 9b0d741e-da3d-4c3b-bb9d-db810557096e |                   lee-03                    | ACTIVE |     -      |   Running   | public=172.24.4.231 |
| 9a5ef03f-c644-409a-aa8f-e9c306f5139c |                   lee-04                    | ACTIVE |     -      |   Running   | public=172.24.4.233 |
| a160bee3-e686-4682-8593-164eebe0b5d4 |                   lee-05                    | ACTIVE |     -      |   Running   | public=172.24.4.235 |
| aa232eb4-aef4-4b06-8196-6221f7f3bd73 |                  whatever                   | ACTIVE |     -      |   Running   | public=172.24.4.237 |
| f2e4fff3-2622-4b1a-a091-b4230e414f91 |                    yeah                     | ACTIVE |     -      |   Running   | public=172.24.4.227 |
+--------------------------------------+---------------------------------------------+--------+------------+-------------+---------------------+
[root@jschwoebel ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  |    Display Name    | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+--------------------+------+-------------+----------+--------------------------------------+
| 0cfddbf5-4dcf-4cb6-9332-c37e167e2861 |   in-use  |                    |  1   |     None    |   true   | 9a5ef03f-c644-409a-aa8f-e9c306f5139c |