Invalid OpenStack Nova Credentials

While playing around with my OpenStack Icehouse installation today, I went ahead and changed the password for the admin user via the WebUI (Horizon). Afterwards, I logged into the CLI, tried to run some commands, and got an error saying "Error: Invalid OpenStack Nova credentials":

[root@jschwoebel ~]# source keystonerc_admin
[root@jschwoebel ~(keystone_admin)]# nova list
ERROR: Invalid OpenStack Nova credentials.
[root@jschwoebel ~(keystone_admin)]# cinder list
ERROR: Invalid OpenStack Cinder credentials.

After some troubleshooting, I realized that when you change the password via Horizon, the keystonerc_admin file does not get updated automatically; you have to update it yourself.

Below are the steps for changing the admin password in OpenStack Icehouse:

1. Change the admin password in Horizon

On the CLI:

1. Verify that the ~/keystonerc_admin file still shows your old password

[root@jschwoebel ~(keystone_admin)]# cat keystonerc_admin

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=demo
export OS_AUTH_URL=http://10.66.32.196:5000/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '

2. Modify ~/keystonerc_admin and change OS_PASSWORD to the new password
3. Confirm that the OS_PASSWORD has been changed

[root@jschwoebel ~(keystone_admin)]# cat keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=demo1
export OS_AUTH_URL=http://10.66.32.196:5000/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '

4. Source ~/keystonerc_admin again

[root@jschwoebel ~(keystone_admin)]# source keystonerc_admin

5. Test any OpenStack-specific command

[root@jschwoebel ~(keystone_admin)]# nova list
+--------------------------------------+---------------------------------------------+--------+------------+-------------+---------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------------------------------------------+--------+------------+-------------+---------------------+
| 3aa85e86-efe0-419b-bec0-2f23549cc51d | lee-01-3aa85e86-efe0-419b-bec0-2f23549cc51d | ACTIVE | - | Running | public=172.24.4.228 |
| 8788da47-f4da-46a2-a1fe-25b242961d12 | lee-01-8788da47-f4da-46a2-a1fe-25b242961d12 | ACTIVE | - | Running | public=172.24.4.229 |
| 9b0d741e-da3d-4c3b-bb9d-db810557096e | lee-03 | ACTIVE | - | Running | public=172.24.4.231 |
| 9a5ef03f-c644-409a-aa8f-e9c306f5139c | lee-04 | ACTIVE | - | Running | public=172.24.4.233 |
| a160bee3-e686-4682-8593-164eebe0b5d4 | lee-05 | ACTIVE | - | Running | public=172.24.4.235 |
| aa232eb4-aef4-4b06-8196-6221f7f3bd73 | whatever | ACTIVE | - | Running | public=172.24.4.237 |
| f2e4fff3-2622-4b1a-a091-b4230e414f91 | yeah | ACTIVE | - | Running | public=172.24.4.227 |
+--------------------------------------+---------------------------------------------+--------+------------+-------------+---------------------+
[root@jschwoebel ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------------+------+-------------+----------+--------------------------------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------------+------+-------------+----------+--------------------------------------+
| 0cfddbf5-4dcf-4cb6-9332-c37e167e2861 | in-use | | 1 | None | true | 9a5ef03f-c644-409a-aa8f-e9c306f5139c |
+--------------------------------------+-----------+--------------------+------+-------------+----------+--------------------------------------+
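Steps 2 and 3 above can also be done in one shot with sed instead of editing the file by hand. The sketch below runs against a throwaway copy of the file (created with mktemp) so it is safe to try anywhere; on the controller you would point RC at ~/keystonerc_admin instead.

```shell
RC=$(mktemp)                       # stand-in for ~/keystonerc_admin in this sketch
cat > "$RC" <<'EOF'
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=demo
export OS_AUTH_URL=http://10.66.32.196:5000/v2.0/
EOF

NEW_PASSWORD='demo1'
# Rewrite only the OS_PASSWORD line, leaving everything else untouched.
sed -i "s/^export OS_PASSWORD=.*/export OS_PASSWORD=${NEW_PASSWORD}/" "$RC"
grep '^export OS_PASSWORD=' "$RC"   # prints: export OS_PASSWORD=demo1
```

After the substitution, source the file again as in step 4 and the CLI clients will pick up the new password.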

 

Windows 2012 with e1000e could cause data corruption

A couple of days ago, I spent two hours setting up two Windows Server 2012 VMs on my ESXi 5.1 cluster to run some performance tests. When copying multiple ISOs across the network between those two VMs, I received an error on the destination saying that none of my five ISOs could be opened.

After checking the settings of my VMs, I saw that I used the default e1000e vNICs. Apparently, this is a known issue with Windows Server 2012 VMs using e1000e vNICs, running on top of VMware ESXi 5.0 and 5.1.

The scary part is that the e1000e vNIC is VMware's default vNIC type when creating a new VM. This means that if you don't carefully select the correct vNIC type when creating your VM, you could potentially run into this data corruption issue.
The easiest workaround is to change the vNIC type from e1000e to e1000 or VMXNET3. However, if you use DHCP, your VM will get a new IP address assigned, as the DHCP server will recognize the new NIC.

If you prefer not to change the vNIC type, you can instead disable the offload features on the Windows 2012 VMs.
There are three settings that should be changed:

IPv4 Checksum Offload

Large Send Offload (IPv4)

TCP Checksum Offload (IPv4)

Further details can be found in VMware KBA 2058692.

 

SCSI UNMAP – VMware ESXi and Nimble Storage Array

Starting with ESXi 5.0, VMware introduced the SCSI UNMAP primitive (VAAI Thin Provisioning Block Reclaim) to their VAAI feature collection for thin-provisioned LUNs. VMware initially automated the SCSI UNMAP process; starting with ESXi 5.0 U1, however, it became a manual process. SCSI UNMAP also needs to be supported by your underlying SAN array; Nimble Storage added SCSI UNMAP support in Nimble OS version 1.4.3.0.


What is the problem?

When you delete a file from a thin-provisioned VMFS5 datastore, the usage reported on the datastore and on the underlying Nimble Storage volume will no longer match, because the Nimble Storage volume is not aware of any space reclaimed within the VMFS5 datastore. This can be triggered by a single file, like an ISO, or by the deletion of a whole virtual machine.

What version of VMFS is supported?

You can run SCSI UNMAP against both native VMFS5 datastores and VMFS3 datastores that were upgraded to VMFS5.

What needs to be done on the Nimble Storage array?

SCSI UNMAP is supported by Nimble Storage arrays starting from version 1.4.3.0 and later.
There is nothing to be done on the array.

How do I run SCSI UNMAP on VMware ESXi 5.x?

  1. Establish a SSH session to your ESXi host which has the datastore mounted.
  2. Run esxcli storage core path list | grep -e 'Device Display Name' -e 'Target Transport Details' to get a list of volumes including their EUI identifiers.
  3. Run esxcli storage core device vaai status get to verify whether SCSI UNMAP (Delete Status) is supported for the volume.
    esxcli storage core device vaai status get -d eui.e5f46fe18c8acb036c9ce900c48a7f60
    eui.e5f46fe18c8acb036c9ce900c48a7f60
    VAAI Plugin Name:
    ATS Status: supported
    Clone Status: unsupported
    Zero Status: supported
    Delete Status: supported
  4. Change to the datastore directory.
    cd /vmfs/volumes/
  5. Run vmkfstools to trigger SCSI UNMAPs.
    vmkfstools -y <percentage_of_free_space>
    For ESXi 5.5, use:
    esxcli storage vmfs unmap -l <datastore_label>
    Note: the value for the percentage has to be between 0 and 100. Generally, I recommend starting with 60.
  6. Wait until the ESXi host returns “Done”.

 

Further details for ESXi 5.0 and 5.1 can be found here, and for ESXi 5.5, please click here.

 

 

Change The OpenStack Glance Image Store

Today I ran into an issue where I ran out of space on my root partition due to multiple ISOs that I had stored in OpenStack Glance. After some tests, I decided to move the Glance image store to an iSCSI volume attached to my controller.

Let's get started with the basic iSCSI setup (no MPIO). I assume you've already created a volume on your storage array and set the ACLs accordingly:

  1. Identify your storage system's iSCSI Discovery IP address
  2. Use iscsiadm to discover the volumes:
    [root@TS-Training-OS-01 ~]# iscsiadm -m discovery -t sendtargets -p discovery_IP
    In my case the following volume has been discovered:
    172.21.8.155:3260,2460 iqn.2007-11.com.nimblestorage:jan-openstack-glance-v2057ea2dd8c4465b.00000027.f893ac76
  3. Establish a connection to the appropriate volume:
    [root@TS-Training-OS-01 ~]# iscsiadm --mode node --targetname iqn.2007-11.com.nimblestorage:jan-openstack-glance-v2057ea2dd8c4465b.00000027.f893ac76 --portal discovery_IP:3260 --login
  4. Once the volume has been connected, use fdisk -l to identify the new disk; in my case it is /dev/sdc. Use mkfs to format the disk with ext4:
    [root@TS-Training-OS-01 ~]# mkfs.ext4 -b 4096 /dev/sdc
  5. After the device has been formatted, create a mount-point and change the permissions on it:
    [root@TS-Training-OS-01 ~]# mkdir /mnt/glance
    [root@TS-Training-OS-01 ~]# chmod 777 /mnt/glance
  6. Configure fstab to automatically mount /dev/sdc to /mnt/glance after a reboot. Add the following line to /etc/fstab:
    /dev/sdc        /mnt/glance  ext4    defaults        0       0
  7. Mount /dev/sdc to /mnt/glance by running the following command:
    [root@TS-Training-OS-01 ~]# mount /dev/sdc /mnt/glance
  8. Since we've mounted the new disk to our mount-point, we can go ahead and change the following within /etc/glance/glance-api.conf:
    # ============ Filesystem Store Options ========================
    
    # Directory that the Filesystem backend store
    # writes image data to
    #filesystem_store_datadir=/var/lib/glance/images/
    filesystem_store_datadir=/mnt/glance/
  9. Now, restart the glance-api service; any image subsequently uploaded through Glance will be stored under /mnt/glance on your controller.
    [root@TS-Training-OS-01 ~]# service openstack-glance-api restart
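Step 8 can also be scripted rather than edited by hand. The sketch below is a hypothetical sed one-liner, demonstrated against a throwaway copy of the config (via mktemp) so it can be tried safely; on the controller you would point CONF at /etc/glance/glance-api.conf and then restart the glance-api service as in step 9.

```shell
CONF=$(mktemp)                 # stand-in for /etc/glance/glance-api.conf in this sketch
cat > "$CONF" <<'EOF'
# Directory that the Filesystem backend store
# writes image data to
#filesystem_store_datadir=/var/lib/glance/images/
EOF

# Uncomment the option and point it at the new mount in one step.
sed -i 's|^#filesystem_store_datadir=.*|filesystem_store_datadir=/mnt/glance/|' "$CONF"
grep '^filesystem_store_datadir' "$CONF"   # prints: filesystem_store_datadir=/mnt/glance/
```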

OpenStack – Icehouse Deployment Via Packstack

Today I decided to set up a new OpenStack environment to run some tests and provide training on it.
This blog post covers "OpenStack – Icehouse Deployment Via Packstack".

There are several ways to deploy an OpenStack environment, whether single-node or multi-node:

  1. Packstack – Quickest and easiest way to deploy a single-node or multi-node OpenStack lab on any RHEL-based distribution
  2. Devstack – Mainly used for development; requires more time than Packstack
  3. Juju – Very time-consuming setup but very stable; Ubuntu only
  4. The manual way – Most time-consuming; recommended for production environments.
    Details can be found here.

In my scenario I deployed four CentOS 6.4 64-bit VMs, each with 2x2 vCPUs, 4GB of memory, and two NICs (one for MGMT, one for iSCSI – no MPIO).
After you've completed the CentOS 6.4 installation, follow the steps below:

  1. Install the RDO repositories
    [root@TS-Training-OS-01 ~]# sudo yum install -y http://rdo.fedorapeople.org/rdo-release.rpm
  2. Install openstack-packstack
    [root@TS-Training-OS-01 ~]# yum install -y openstack-packstack
  3. Verify that packstack has successfully been installed
    [root@TS-Training-OS-01 ~]# which packstack
    /usr/bin/packstack
  4. Use packstack to deploy OpenStack. Syntax - $ packstack --install-hosts=Controller_Address,Node_addresses
    [root@TS-Training-OS-01 ~]# packstack --install-hosts=10.18.48.50,10.18.48.51,10.18.48.52,10.18.48.53
    In my scenario 10.18.48.50 will be used as the controller and .51, .52 and .53 will be used as NOVA compute nodes.
  5. Once the installation has been completed, you'll see the following output on your CLI:
     **** Installation completed successfully ******
    
    Additional information:
     * A new answerfile was created in: /root/packstack-answers-20140908-134351.txt
     * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be a problem for some OpenStack components.
     * File /root/keystonerc_admin has been created on OpenStack client host 10.18.48.50. To use the command line tools you need to source the file.
     * To access the OpenStack Dashboard browse to http://10.18.48.50/dashboard .
    Please, find your login credentials stored in the keystonerc_admin in your home directory.
     * To use Nagios, browse to http://10.18.48.50/nagios username: nagiosadmin, password: some_random_password
     * Because of the kernel update the host 10.18.48.50 requires reboot.
     * Because of the kernel update the host 10.18.48.51 requires reboot.
     * Because of the kernel update the host 10.18.48.52 requires reboot.
     * Because of the kernel update the host 10.18.48.53 requires reboot.
     * The installation log file is available at: /var/tmp/packstack/20140908-134351-aEbkHs/openstack-setup.log
     * The generated manifests are available at: /var/tmp/packstack/20140908-134351-aEbkHs/manifests
  6. Reboot all hosts to apply the kernel updates.
  7. You can now access Horizon (dashboard) via the IP-address of your controller node, in this scenario http://10.18.48.50/dashboard
  8. If you prefer the CLI, SSH to the controller node and source the keystonerc_admin file to become keystone_admin.
     [root@TS-Training-OS-01 ~]# source keystonerc_admin
     [root@TS-Training-OS-01 ~(keystone_admin)]# You are KEYSTONE_ADMIN now!!!
    
    

The initial install of OpenStack via packstack has been completed and you can start to configure it via CLI or using Horizon.
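The reboot in step 6 can be scripted from a management host. This is only a sketch, under the assumption that you have root SSH access to all four nodes; the echo makes it a dry run, so remove it to actually issue the reboots, and reboot the controller last so you keep a working session while the compute nodes go down.

```shell
# Dry-run reboot of all packstack nodes: compute nodes first, controller last.
for host in 10.18.48.53 10.18.48.52 10.18.48.51 10.18.48.50; do
  echo ssh root@"$host" reboot        # remove `echo` to really issue the reboot
done
```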