SCSI UNMAP – VMware ESXi and Nimble Storage Array

Starting with VMware ESXi 5.0, VMware introduced the SCSI UNMAP primitive (VAAI Thin Provisioning Block Reclaim) to their VAAI feature collection for thin provisioned LUNs. In ESXi 5.0, the UNMAP process was even automated; however, starting with ESXi 5.0 U1, it became a manual process. SCSI UNMAP also needs to be supported by your underlying SAN array. Nimble Storage supports SCSI UNMAP starting with Nimble OS version 1.4.3.0.


What is the problem?

When you delete a file from a thin provisioned VMFS5 datastore, the usage reported on the datastore and on the underlying Nimble Storage volume will no longer match, because the Nimble Storage volume is not aware of any space reclaimed within the VMFS5 datastore. This applies whether you delete a single file such as an ISO or an entire virtual machine.

What version of VMFS is supported?

You can run SCSI UNMAP against native VMFS5 datastores as well as VMFS3 datastores that were upgraded to VMFS5.

What needs to be done on the Nimble Storage array?

SCSI UNMAP is supported by Nimble Storage arrays starting with Nimble OS version 1.4.3.0; there is nothing to configure on the array.

How do I run SCSI UNMAP on VMware ESXi 5.x?

  1. Establish an SSH session to the ESXi host which has the datastore mounted.
  2. Run esxcli storage core path list | grep -e 'Device Display Name' -e 'Target Transport Details' to get a list of volumes including their EUI identifiers.
  3. Run the VAAI status get command to verify that SCSI UNMAP (Delete Status) is supported for the volume:
    esxcli storage core device vaai status get -d eui.e5f46fe18c8acb036c9ce900c48a7f60
    eui.e5f46fe18c8acb036c9ce900c48a7f60
    VAAI Plugin Name:
    ATS Status: supported
    Clone Status: unsupported
    Zero Status: supported
    Delete Status: supported
  4. Change to the datastore directory (datastore_name is your datastore's label):
    cd /vmfs/volumes/datastore_name
  5. Run vmkfstools to trigger SCSI UNMAPs:
    vmkfstools -y percentage
    For ESXi 5.5, use:
    esxcli storage vmfs unmap -l datastore_name
    Note: the value for the percentage has to be between 0 and 100. Generally, I recommend starting with 60. A worked example follows this list.
  6. Wait until the ESXi host returns “Done”.
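
As a worked example, here is what the full sequence might look like on ESXi 5.0/5.1 when reclaiming 60% of the free space (datastore_name is a placeholder for your datastore's label):

    cd /vmfs/volumes/datastore_name
    vmkfstools -y 60

On ESXi 5.5, the reclaim is run per datastore label instead, and the optional -n flag sets how many VMFS blocks are unmapped per iteration (the default is 200, so it can be omitted):

    esxcli storage vmfs unmap -l datastore_name -n 200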

 

Further details for ESXi 5.0 and 5.1 can be found here, and for ESXi 5.5, please click here.

Change The OpenStack Glance Image Store

Today I ran into an issue where I ran out of space on my root partition due to multiple ISOs that I had stored in OpenStack Glance. After some tests, I decided to move the Glance image store to an iSCSI volume attached to my controller.

Let’s get started with the basic iSCSI setup (no MPIO). I assume you’ve already created a volume on your storage array and set the ACL accordingly:

  1. Identify your storage system's iSCSI Discovery IP address
  2. Use iscsiadm to discover the volumes:
    [root@TS-Training-OS-01 ~]# iscsiadm -m discovery -t sendtargets -p discovery_IP
    In my case the following volume has been discovered:
    172.21.8.155:3260,2460 iqn.2007-11.com.nimblestorage:jan-openstack-glance-v2057ea2dd8c4465b.00000027.f893ac76
  3. Establish a connection to the appropriate volume:
    [root@TS-Training-OS-01 ~]# iscsiadm --mode node --targetname iqn.2007-11.com.nimblestorage:jan-openstack-glance-v2057ea2dd8c4465b.00000027.f893ac76 --portal discovery_ip:3260 --login
  4. Once the volume has been connected, use fdisk -l to identify the new disk; in my case it is /dev/sdc. Use mkfs to format the disk with an ext4 file system:
    [root@TS-Training-OS-01 ~]# mkfs.ext4 -b 4096 /dev/sdc
  5. After the device has been formatted, create a mount-point and change the permissions on it:
    [root@TS-Training-OS-01 ~]# mkdir /mnt/glance
    [root@TS-Training-OS-01 ~]# chmod 777 /mnt/glance
  6. Configure fstab to automatically mount /dev/sdc to /mnt/glance after a reboot. Add the following line to /etc/fstab:
    /dev/sdc        /mnt/glance  ext4    defaults        0       0
  7. Mount /dev/sdc to /mnt/glance by running the following command:
    [root@TS-Training-OS-01 ~]# mount /dev/sdc /mnt/glance
  8. Since we've mounted the new disk to our mount-point, we can go ahead and change the following within /etc/glance/glance-api.conf:
    # ============ Filesystem Store Options ========================
    
    # Directory that the Filesystem backend store
    # writes image data to
    #filesystem_store_datadir=/var/lib/glance/images/
    filesystem_store_datadir=/mnt/glance/
  9. Now, restart the glance-api service; any image uploaded through glance from now on will be located under /mnt/glance on your controller (a quick verification sketch follows this list).
    [root@TS-Training-OS-01 ~]# service openstack-glance-api restart
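
To verify the new image store, upload a test image and confirm it lands under /mnt/glance. This is only a sketch; the image name and source file are placeholders, and it assumes you have sourced your keystonerc file so the glance CLI can authenticate:

    [root@TS-Training-OS-01 ~]# glance image-create --name test-image --disk-format qcow2 --container-format bare --file /tmp/test.qcow2
    [root@TS-Training-OS-01 ~]# ls -lh /mnt/glance/

A side note on step 6: mounting by device name assumes /dev/sdc keeps its name across reboots, which is not guaranteed with iSCSI; referencing the filesystem UUID reported by blkid /dev/sdc in /etc/fstab is more robust.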

OpenStack – Icehouse Deployment Via Packstack

Today I decided to set-up a new OpenStack environment to run some tests and provide a training on it.
This blog post will cover “OpenStack – Icehouse Deployment Via Packstack”.

There are several ways to deploy an OpenStack environment, either single-node or multi-node:

  1. Packstack – Quickest and easiest way to deploy a single-node or multi-node OpenStack lab on any RHEL distribution
  2. Devstack – Mainly used for development, requires more time than Packstack
  3. Juju – Very time-consuming setup but very stable, Ubuntu only.
  4. The manual way – Most time-consuming, recommended for production environments.
    Details can be found here.

In my scenario I deployed four CentOS 6.4 64-bit VMs, each with 2x2 vCPUs, 4GB of memory, and two NICs (one for MGMT, one for iSCSI – no MPIO).
After you have completed the CentOS 6.4 installation, follow the steps below:

  1. Install the RDO repositories
    [root@TS-Training-OS-01 ~]# sudo yum install -y http://rdo.fedorapeople.org/rdo-release.rpm
  2. Install openstack-packstack
    [root@TS-Training-OS-01 ~]# yum install -y openstack-packstack
  3. Verify that packstack has successfully been installed
    [root@TS-Training-OS-01 ~]# which packstack
    /usr/bin/packstack
  4. Use packstack to deploy OpenStack. Syntax - $ packstack --install-hosts=Controller_Address,Node_addresses
    [root@TS-Training-OS-01 ~]# packstack --install-hosts=10.18.48.50,10.18.48.51,10.18.48.52,10.18.48.53
    In my scenario 10.18.48.50 will be used as the controller and .51, .52 and .53 will be used as NOVA compute nodes.
  5. Once the installation has been completed, you'll see the following output on your CLI:
     **** Installation completed successfully ******
    
    Additional information:
     * A new answerfile was created in: /root/packstack-answers-20140908-134351.txt
     * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be a problem for some OpenStack components.
     * File /root/keystonerc_admin has been created on OpenStack client host 10.18.48.50. To use the command line tools you need to source the file.
     * To access the OpenStack Dashboard browse to http://10.18.48.50/dashboard .
    Please, find your login credentials stored in the keystonerc_admin in your home directory.
     * To use Nagios, browse to http://10.18.48.50/nagios username: nagiosadmin, password: some_random_password
     * Because of the kernel update the host 10.18.48.50 requires reboot.
     * Because of the kernel update the host 10.18.48.51 requires reboot.
     * Because of the kernel update the host 10.18.48.52 requires reboot.
     * Because of the kernel update the host 10.18.48.53 requires reboot.
     * The installation log file is available at: /var/tmp/packstack/20140908-134351-aEbkHs/openstack-setup.log
     * The generated manifests are available at: /var/tmp/packstack/20140908-134351-aEbkHs/manifests
  6. Reboot all hosts to apply the kernel updates.
  7. You can now access Horizon (dashboard) via the IP-address of your controller node, in this scenario http://10.18.48.50/dashboard
  8. If you prefer the CLI, SSH to the controller node and source the keystonerc_admin file to become keystone_admin.
     [root@TS-Training-OS-01 ~]# source keystonerc_admin
     [root@TS-Training-OS-01 ~(keystone_admin)]# You are KEYSTONE_ADMIN now!!!
    
    

The initial install of OpenStack via packstack has been completed and you can start to configure it via CLI or using Horizon.
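
As a quick sanity check from the CLI (assuming you have sourced keystonerc_admin), you can confirm that all compute nodes have registered with the controller:

    [root@TS-Training-OS-01 ~(keystone_admin)]# nova service-list
    [root@TS-Training-OS-01 ~(keystone_admin)]# nova hypervisor-list

And if you need more control than --install-hosts offers, packstack can generate an editable answer file with packstack --gen-answer-file=answers.txt, which you then apply via packstack --answer-file=answers.txt.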

Crucial Data In Your VMware ESXi 5 Log Files

As an Escalation Engineer, part of my daily work is reviewing log files of various systems and vendors. In my first blog post, I would like to show which VMware ESXi 5 log files are most relevant for troubleshooting storage and networking related problems.

All current ESXi 5 logs are located under /var/log and as they rotate, they’ll be available under /scratch/logs

 


/var/log/vmkernel.log:

  • VMkernel related activities, such as:
    • Rescan and unmount of storage devices and datastores
    • Discovery of new storage like iSCSI and FCP LUNs
    • Networking (vmnic and vmk connectivity)

/var/log/vmkwarning.log:

  • Extracted warning and alert messages from the vmkernel.log

/var/log/hostd.log:

  • Logs related to the host management service
  • SDK connections
  • vCenter tasks and events
  • Connectivity to vpxa service, which is the vCenter agent on the ESXi server

/var/log/vobd.log:

  • VMkernel observations
  • Useful for network and performance issues

Also, if a particular VM is affected, it might be worth looking into the vmware.log, which is stored with the virtual machine. You can find the log under /vmfs/volumes/datastore_name/VM_name/vmware.log.
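
When chasing a storage problem, I usually start by filtering the live VMkernel log for SCSI-related messages. A minimal sketch (tail and grep are available in the ESXi shell):

    tail -f /var/log/vmkernel.log | grep -i scsi
    grep -i scsi /var/log/vmkwarning.log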

The location of ESXi 3.5 and 4.x log files can be found here.

Silicon Valley VMUG – Double-Take & VSAN

Today, I attended my first Silicon Valley VMUG at the Biltmore Hotel and Suites in San Jose, CA. Vision Solutions presented their software Double-Take, which provides real-time high availability. Joe Cook, Senior Technical Marketing Manager at VMware, provided an overview of VSAN and its requirements.


I took a couple of notes for both presentations and summarized the most important points below:

Double-Take Availability

  • Allows P2V, V2P, P2P, and V2V migrations across hypervisors
  • Provides hardware- and application-independent failover
  • Monitors availability and provides alerting via SNMP and email
  • Supports VMware vSphere 5.0 and 5.1, as well as Microsoft Hyper-V Server and the Hyper-V role on 2008 R2 and 2012
  • Full server migration and failover are only available for Windows; a Linux version will be available in Q4.

Double-Take Replication

  • Uses byte-level replication which continuously watches for changes and transfers them
  • Either real-time or scheduled
  • Replication can be throttled

Double-Take Move

  • Provides file and folder migration
  • Does NOT support mounted file shares; the disk needs to show up as a local drive

 

VMware Virtual SAN (VSAN) by Joe Cook

Hardware requirements:

  • Any Server on the VMware Compatibility Guide
  • At least 1 of each
    • SAS/SATA/PCIe SSD
    • SAS/NL-SAS/SATA HDD
  • 1Gb/10Gb NIC
  • SAS/SATA controllers (RAID controllers must work in “pass-through” or RAID 0 mode)
  • 4GB to 8GB (preferred) USB or SD card

Implementation requirements:

  • Minimum of 3 hosts in a cluster configuration
  • All 3 hosts must contribute storage
  • vSphere 5.5 U1 or later
  • Maximum of 32 hosts
  • Locally attached disks
    • Magnetic disks (HDD)
    • Flash-based devices (SSD)
  • 1Gb or 10Gb (preferred) Ethernet connectivity

Virtual SAN Datastore

  • Distributed datastore capacity, aggregating disk groups found across multiple hosts within the same vSphere cluster
  • Total capacity is based on magnetic disks (HDDs) only.
  • Flash based devices (SSDs) are dedicated to VSAN’s caching layer
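
To put that in numbers (a hypothetical configuration): a 3-host cluster where each host contributes one disk group of 1 x 400GB SSD and 5 x 1TB HDDs yields a Virtual SAN datastore of roughly 3 x 5 x 1TB = 15TB raw capacity, while the 3 x 400GB of flash serves the caching layer only and adds nothing to the datastore size.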

Virtual SAN Network

  • Requires a dedicated VMkernel interface for Virtual SAN traffic (a CLI sketch follows this list)
    • Used for intra-cluster communication and data replication
  • Standard and Distributed vSwitches are supported
  • NIC teaming – used for availability, not for bandwidth
  • Layer 2 Multicast must be enabled on physical switches
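
As a sketch of the VMkernel tagging step from the ESXi 5.5 CLI (vmk1 is a placeholder for the interface you dedicated to Virtual SAN traffic):

    esxcli vsan network ipv4 add -i vmk1
    esxcli vsan network list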

Virtual SAN Scalable Architecture

  • VSAN provides scale up and scale out architecture
    • HDDs are used for capacity
    • SSDs are used for performance
    • Disk Groups are used for performance and capacity
    • Nodes are used for compute capacity

Additional information

  • VSAN is a cluster level feature like DRS and HA
  • VSAN is deployed, configured, and managed through the vSphere Web Client only
  • Hands-on labs are available here