Nimble Storage Cinder Integration

In this post, I will only cover the Nimble Storage Cinder Integration for OpenStack Icehouse.

Some of you already have an OpenStack cloud in your environment and also own a Nimble Storage array; others might not have an OpenStack cloud yet but are considering one. Nimble Storage has just officially announced its OpenStack integration. Starting with Juno, the Nimble Cinder driver will ship with the OpenStack release. The actual approval and blueprint can be found here. For Icehouse, you’ll need to download the driver from InfoSight or request it from support.

Follow these six steps to upload the Nimble Cinder driver, then configure and test it:

Note: The steps below cover a single-backend configuration. A multi-backend configuration will be covered in a separate post.

  1. Upload your Cinder driver to /usr/lib/python2.6/site-packages/cinder/volume/drivers
  2. Add the Nimble Cinder parameters to /etc/cinder/cinder.conf within the [DEFAULT] section
    #Nimble Cinder Configuration 
    san_ip=management-ip
    san_login=admin_user
    san_password=password
    volume_driver=cinder.volume.drivers.nimble.NimbleISCSIDriver
  3. Restart cinder-api, cinder-scheduler and cinder-volume
    [root@TS-Training-OS-01 (keystone_admin)]# service openstack-cinder-scheduler restart
          Stopping openstack-cinder-scheduler:                       [  OK  ]
          Starting openstack-cinder-scheduler:                       [  OK  ]
          [root@TS-Training-OS-01 (keystone_admin)]# service openstack-cinder-api restart
          Stopping openstack-cinder-api:                             [  OK  ]
          Starting openstack-cinder-api:                             [  OK  ]
          [root@TS-Training-OS-01 (keystone_admin)]# service openstack-cinder-volume restart
          Stopping openstack-cinder-volume:                          [  OK  ]
          Starting openstack-cinder-volume:                          [  OK  ]
  4. Create a volume either via Horizon or the CLI
    [root@TS-Training-OS-01 nova(keystone_admin)]# cinder create --display-name test_volume 50
          +---------------------+--------------------------------------+
          |       Property      |                Value                 |
          +---------------------+--------------------------------------+
          |     attachments     |                  []                  |
          |  availability_zone  |                 nova                 |
          |       bootable      |                false                 |
          |      created_at     |      2014-11-05T18:23:54.011013      |
          | display_description |                 None                 |     
          |     display_name    |             test_volume              |
          |      encrypted      |                False                 |
          |          id         | 6cce44ad-a71f-4973-b862-aefe9c5f0a79 |
          |       metadata      |                  {}                  |
          |         size        |                  50                  |
          |     snapshot_id     |                 None                 |
          |     source_volid    |                 None                 |
          |        status       |               creating               |
          |     volume_type     |                 None                 |
          +---------------------+--------------------------------------+
  5. Verify the volume has successfully been created
     [root@TS-Training-OS-01 nova(keystone_admin)]# cinder list
     (screenshot: cinder list output showing the new volume)
  6. Verify the creation of the volume on your storage array. Go to Manage -> Volumes
    (screenshot: the new volume on the array under Manage -> Volumes)
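Steps 1 through 3 can be sketched as a small shell script. This is a minimal sketch that works against a scratch copy of cinder.conf rather than the live file; the san_ip, san_login, and san_password values are placeholders you would replace with your array's management IP and credentials:

```shell
# Step 2: append the Nimble parameters to a scratch copy of cinder.conf.
# san_ip/san_login/san_password below are placeholders, not real values.
CONF=$(mktemp)
printf '[DEFAULT]\n' > "$CONF"

cat >> "$CONF" <<'EOF'
#Nimble Cinder Configuration
san_ip=192.0.2.10
san_login=admin
san_password=changeme
volume_driver=cinder.volume.drivers.nimble.NimbleISCSIDriver
EOF

grep -c '^volume_driver=' "$CONF"   # should print 1

# Step 3 on a real controller (needs root) would then be:
#   for svc in api scheduler volume; do
#       service openstack-cinder-$svc restart
#   done
```

On a live controller you would append the same stanza to /etc/cinder/cinder.conf instead of a temp file, then restart the three Cinder services as shown in the comment.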


Invalid OpenStack Nova Credentials

While playing around with my OpenStack Icehouse installation today, I went ahead and changed the password for the admin user via the WebUI (Horizon). Afterwards, I logged into the CLI, tried to run some commands, and got an error saying Error: Invalid OpenStack Nova credentials.

[root@jschwoebel ~]# source keystonerc_admin
[root@jschwoebel ~(keystone_admin)]# nova list
ERROR: Invalid OpenStack Nova credentials.
[root@jschwoebel ~(keystone_admin)]# cinder list
ERROR: Invalid OpenStack Cinder credentials.

After some troubleshooting, I realized that when you change the password via Horizon, the keystonerc_admin file doesn’t automatically get updated; you have to update it manually.

Below are the steps for changing the admin password in OpenStack Icehouse:

1. Change the admin password in Horizon

(screenshots: changing the admin password in Horizon)

On the CLI:

1. Verify that the ~/keystonerc_admin file is still showing your old password

[root@jschwoebel ~(keystone_admin)]# cat keystonerc_admin

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=demo
export OS_AUTH_URL=http://10.66.32.196:5000/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '

2. Modify ~/keystonerc_admin and change OS_PASSWORD to the new password
3. Confirm that the OS_PASSWORD has been changed

[root@jschwoebel ~(keystone_admin)]# cat keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=demo1
export OS_AUTH_URL=http://10.66.32.196:5000/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '

4. Source ~/keystonerc_admin again

[root@jschwoebel ~(keystone_admin)]# source keystonerc_admin

5. Test any OpenStack-specific command

[root@jschwoebel ~(keystone_admin)]# nova list
+--------------------------------------+---------------------------------------------+--------+------------+-------------+---------------------+
|                  ID                  |                    Name                     | Status | Task State | Power State |      Networks       |
+--------------------------------------+---------------------------------------------+--------+------------+-------------+---------------------+
| 3aa85e86-efe0-419b-bec0-2f23549cc51d | lee-01-3aa85e86-efe0-419b-bec0-2f23549cc51d | ACTIVE |     -      |   Running   | public=172.24.4.228 |
| 8788da47-f4da-46a2-a1fe-25b242961d12 | lee-01-8788da47-f4da-46a2-a1fe-25b242961d12 | ACTIVE |     -      |   Running   | public=172.24.4.229 |
| 9b0d741e-da3d-4c3b-bb9d-db810557096e |                   lee-03                    | ACTIVE |     -      |   Running   | public=172.24.4.231 |
| 9a5ef03f-c644-409a-aa8f-e9c306f5139c |                   lee-04                    | ACTIVE |     -      |   Running   | public=172.24.4.233 |
| a160bee3-e686-4682-8593-164eebe0b5d4 |                   lee-05                    | ACTIVE |     -      |   Running   | public=172.24.4.235 |
| aa232eb4-aef4-4b06-8196-6221f7f3bd73 |                  whatever                   | ACTIVE |     -      |   Running   | public=172.24.4.237 |
| f2e4fff3-2622-4b1a-a091-b4230e414f91 |                    yeah                     | ACTIVE |     -      |   Running   | public=172.24.4.227 |
+--------------------------------------+---------------------------------------------+--------+------------+-------------+---------------------+
[root@jschwoebel ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  |    Display Name    | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+--------------------+------+-------------+----------+--------------------------------------+
| 0cfddbf5-4dcf-4cb6-9332-c37e167e2861 |   in-use  |                    |  1   |     None    |   true   | 9a5ef03f-c644-409a-aa8f-e9c306f5139c |
+--------------------------------------+-----------+--------------------+------+-------------+----------+--------------------------------------+
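Instead of editing the file by hand in step 2, the OS_PASSWORD line can be swapped with sed. Below is a minimal sketch against a scratch copy of keystonerc_admin, using the demo/demo1 passwords from this post; on a real controller you would point RCFILE at ~/keystonerc_admin:

```shell
# Build a scratch keystonerc_admin containing the old password.
RCFILE=$(mktemp)
cat > "$RCFILE" <<'EOF'
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=demo
export OS_AUTH_URL=http://10.66.32.196:5000/v2.0/
EOF

# Replace the OS_PASSWORD line with the new password.
NEW_PASSWORD=demo1
sed -i "s|^export OS_PASSWORD=.*|export OS_PASSWORD=${NEW_PASSWORD}|" "$RCFILE"

grep '^export OS_PASSWORD=' "$RCFILE"   # prints: export OS_PASSWORD=demo1

# Re-source so the current shell picks up the change.
. "$RCFILE"
```

After sourcing, nova and cinder commands authenticate with the new password, exactly as in step 5.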


OpenStack – Icehouse Deployment Via Packstack

Today I decided to set up a new OpenStack environment to run some tests and provide training on it.
This blog post will cover “OpenStack – Icehouse Deployment Via Packstack”.

There are several ways to deploy an OpenStack environment, whether single-node or multi-node:

  1. Packstack – Quickest and easiest way to deploy a single-node or multi-node OpenStack lab on any RHEL-based distribution
  2. Devstack – Mainly used for development; requires more time than Packstack
  3. Juju – Very time-consuming setup but very stable; Ubuntu only
  4. The manual way – The most time-consuming; recommended for production environments.
    Details can be found here.

In my scenario I deployed four CentOS 6.4 64-bit VMs, each with 2x2 vCPUs, 4 GB of memory, and two NICs (one for MGMT, one for iSCSI – no MPIO).
After you have completed the CentOS 6.4 installation, follow the steps below:

  1. Install the RDO repositories
    [root@TS-Training-OS-01 ~]# sudo yum install -y http://rdo.fedorapeople.org/rdo-release.rpm
  2. Install openstack-packstack
    [root@TS-Training-OS-01 ~]# yum install -y openstack-packstack
  3. Verify that packstack has successfully been installed
    [root@TS-Training-OS-01 ~]# which packstack
    /usr/bin/packstack
  4. Use packstack to deploy OpenStack. Syntax - $ packstack --install-hosts=Controller_Address,Node_addresses
    [root@TS-Training-OS-01 ~]# packstack --install-hosts=10.18.48.50,10.18.48.51,10.18.48.52,10.18.48.53
    In my scenario 10.18.48.50 will be used as the controller and .51, .52 and .53 will be used as NOVA compute nodes.
  5. Once the installation has been completed, you'll see the following output on your CLI:
     **** Installation completed successfully ******
    
    Additional information:
     * A new answerfile was created in: /root/packstack-answers-20140908-134351.txt
     * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be a problem for some OpenStack components.
     * File /root/keystonerc_admin has been created on OpenStack client host 10.18.48.50. To use the command line tools you need to source the file.
     * To access the OpenStack Dashboard browse to http://10.18.48.50/dashboard .
    Please, find your login credentials stored in the keystonerc_admin in your home directory.
     * To use Nagios, browse to http://10.18.48.50/nagios username: nagiosadmin, password: some_random_password
     * Because of the kernel update the host 10.18.48.50 requires reboot.
     * Because of the kernel update the host 10.18.48.51 requires reboot.
     * Because of the kernel update the host 10.18.48.52 requires reboot.
     * Because of the kernel update the host 10.18.48.53 requires reboot.
     * The installation log file is available at: /var/tmp/packstack/20140908-134351-aEbkHs/openstack-setup.log
     * The generated manifests are available at: /var/tmp/packstack/20140908-134351-aEbkHs/manifests
  6. Reboot all hosts to apply the kernel updates.
  7. You can now access Horizon (dashboard) via the IP-address of your controller node, in this scenario http://10.18.48.50/dashboard
  8. If you prefer the CLI, SSH to the controller node and source the keystonerc_admin file to become keystone_admin.
     [root@TS-Training-OS-01 ~]# source keystonerc_admin
     [root@TS-Training-OS-01 ~(keystone_admin)]# You are KEYSTONE_ADMIN now!!!
    
    

The initial installation of OpenStack via Packstack is now complete, and you can start configuring it via the CLI or Horizon.
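The --install-hosts argument from step 4 is just a comma-separated list with the controller first, so it can be assembled from variables. Here is a minimal sketch using the IPs from this post; it only prints the resulting command rather than running it, since packstack itself requires the RDO repositories and root:

```shell
# Controller first, then the NOVA compute nodes.
CONTROLLER=10.18.48.50
COMPUTE_NODES="10.18.48.51 10.18.48.52 10.18.48.53"

HOSTS="$CONTROLLER"
for node in $COMPUTE_NODES; do
    HOSTS="$HOSTS,$node"
done

# Print rather than run:
echo "packstack --install-hosts=$HOSTS"
# prints: packstack --install-hosts=10.18.48.50,10.18.48.51,10.18.48.52,10.18.48.53
```

This makes it easy to grow the lab later: add an IP to COMPUTE_NODES and re-run packstack with the extended host list.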