OpenStack – Icehouse Deployment Via Packstack

Today I decided to set up a new OpenStack environment to run some tests and provide training on it.
This blog post will cover “OpenStack – Icehouse Deployment Via Packstack”.

There are several ways to deploy an OpenStack environment, either single-node or multi-node:

  1. Packstack – Quickest and easiest way to deploy a single-node or multi-node OpenStack lab on any RHEL-based distribution
  2. Devstack – Mainly used for development, requires more time than Packstack
  3. Juju – Very time-consuming setup but very stable, Ubuntu only.
  4. The manual way – The most time-consuming, recommended for production environments.
    Details can be found here.

In my scenario I deployed 4 CentOS 6.4 64-bit VMs, each with 2x2 vCPUs, 4GB of memory and 2 NICs (one for MGMT, one for iSCSI – no MPIO).
After you have completed the CentOS 6.4 installation, follow the steps below:

  1. Install the RDO repositories
    [root@TS-Training-OS-01 ~]# sudo yum install -y http://rdo.fedorapeople.org/rdo-release.rpm
  2. Install openstack-packstack
    [root@TS-Training-OS-01 ~]# yum install -y openstack-packstack
  3. Verify that packstack has successfully been installed
    [root@TS-Training-OS-01 ~]# which packstack
    /usr/bin/packstack
  4. Use packstack to deploy OpenStack. Syntax - $ packstack --install-hosts=Controller_Address,Node_addresses
    [root@TS-Training-OS-01 ~]# packstack --install-hosts=10.18.48.50,10.18.48.51,10.18.48.52,10.18.48.53
    In my scenario 10.18.48.50 will be used as the controller and .51, .52 and .53 will be used as Nova compute nodes.
  5. Once the installation has been completed, you'll see the following output on your CLI:
     **** Installation completed successfully ******
    
    Additional information:
     * A new answerfile was created in: /root/packstack-answers-20140908-134351.txt
     * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be a problem for some OpenStack components.
     * File /root/keystonerc_admin has been created on OpenStack client host 10.18.48.50. To use the command line tools you need to source the file.
     * To access the OpenStack Dashboard browse to http://10.18.48.50/dashboard .
    Please, find your login credentials stored in the keystonerc_admin in your home directory.
     * To use Nagios, browse to http://10.18.48.50/nagios username: nagiosadmin, password: some_random_password
     * Because of the kernel update the host 10.18.48.50 requires reboot.
     * Because of the kernel update the host 10.18.48.51 requires reboot.
     * Because of the kernel update the host 10.18.48.52 requires reboot.
     * Because of the kernel update the host 10.18.48.53 requires reboot.
     * The installation log file is available at: /var/tmp/packstack/20140908-134351-aEbkHs/openstack-setup.log
     * The generated manifests are available at: /var/tmp/packstack/20140908-134351-aEbkHs/manifests
  6. Reboot all hosts to apply the kernel updates.
  7. You can now access Horizon (the dashboard) via the IP address of your controller node, in this scenario http://10.18.48.50/dashboard.
  8. If you prefer the CLI, SSH to the controller node and source the keystonerc_admin file to become keystone_admin.
     [root@TS-Training-OS-01 ~]# source keystonerc_admin
     [root@TS-Training-OS-01 ~(keystone_admin)]# You are KEYSTONE_ADMIN now!!!
    
    

The initial installation of OpenStack via Packstack is now complete, and you can start configuring it via the CLI or Horizon.
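
Before moving on, a quick sanity check from the CLI is worthwhile. These are standard Icehouse client commands (the exact output will of course differ in your environment); run them after sourcing keystonerc_admin as in step 8:

    [root@TS-Training-OS-01 ~(keystone_admin)]# keystone service-list     # registered OpenStack services
    [root@TS-Training-OS-01 ~(keystone_admin)]# nova service-list         # nova-compute should show up for .51, .52 and .53
    [root@TS-Training-OS-01 ~(keystone_admin)]# nova hypervisor-list      # one entry per compute node

If you want to adjust the deployment later, packstack can also be re-run against the generated answer file (packstack --answer-file=/root/packstack-answers-20140908-134351.txt) instead of --install-hosts.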

How I Got Started With Surfing

When I started this blog, I decided not to post just about virtualization but also about my personal life and the things that come with it. As some of you might already know, I am a big fan of surfing and try to watch every tournament and practice whenever possible. I started surfing in October 2013 when my friend Jeremy Sallee, a UI/UX designer, introduced me to the sport. Since then I have been out in the water almost every weekend. My first session ever was at Linda Mar Beach in Pacifica, CA with 6ft waves. I can tell you, those are not the best conditions for a noob who doesn’t know what he’s doing out there in the water.

Currently I ride an 8ft longboard with a 3-fin setup. The length of the board provides the stability of a typical longboard, and the 3-fin setup allows easier and quicker turns, like on a shortboard. After almost one year of surfing, I am starting to feel comfortable taking 5-6ft waves with my board. However, later this year, I plan to transition to a shorter, egg-shaped board.

Two weeks ago, I went out for another session at Linda Mar Beach in Pacifica, CA. The day started out perfectly with a great breakfast and then a good 2.5-hour session in the water. I caught some nice 2-5ft waves and, luckily, I recorded some of my experience with a GoPro.

 

https://vimeo.com/105464013

 

 

Crucial Data In Your VMware ESXi 5 Log Files

As an Escalation Engineer, part of my daily work is reviewing log files from various systems and vendors. In my first blog post, I would like to show which VMware ESXi 5 log files are most relevant for troubleshooting storage- and networking-related problems.

All current ESXi 5 logs are located under /var/log, and as they rotate, older logs are available under /scratch/logs.

 

vmware_esxi_logs

/var/log/vmkernel.log:

  • VMkernel related activities, such as:
    • Rescan and unmount of storage devices and datastores
    • Discovery of new storage like iSCSI and FCP LUNs
    • Networking (vmnic and vmks connectivity)

/var/log/vmkwarning.log:

  • Extracted warning and alert messages from the vmkernel.log

/var/log/hostd.log:

  • Logs related to the host management service
  • SDK connections
  • vCenter tasks and events
  • Connectivity to vpxa service, which is the vCenter agent on the ESXi server

/var/log/vobd.log:

  • VMkernel observations
  • Useful for network and performance issues

Also, if a particular VM is affected, it might be worth looking into the vmware.log, which is stored with the virtual machine. You can find the log under /vmfs/volumes/datastore_name/VM_name/vmware.log.
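
When troubleshooting live, the ESXi Shell (or an SSH session) is usually the quickest way to work with these files. A minimal sketch – the datastore and VM names are the same placeholders as above, and the search strings are just examples of what I typically look for:

    ~ # tail -f /var/log/vmkernel.log                                # follow VMkernel activity in real time
    ~ # grep -i "naa." /var/log/vmkernel.log | tail -n 20            # recent messages for a specific device/LUN
    ~ # grep -iE "warning|alert" /var/log/vmkwarning.log             # condensed warnings and alerts
    ~ # tail -n 50 /vmfs/volumes/datastore_name/VM_name/vmware.log   # per-VM log of the affected VM

All of these are plain busybox tools that ship with ESXi 5, so nothing extra needs to be installed.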

The locations of ESXi 3.5 and 4.x log files can be found here.

InfoSight – Manage Case Creation Efficiently

Nimble Storage’s InfoSight changes how storage administrators manage and monitor their arrays in today’s environment. InfoSight includes many great features for free. Just to mention a few, the Assets tab provides a basic overview of your array’s storage and cache utilization as well as its configured pro-active health mechanisms. The Capacity tab shows the current usage as well as the projected usage for the upcoming weeks.

Today, we’ll cover how to manage case creation through InfoSight’s Wellness tab.

By default, Nimble Storage pro-actively creates cases for any condition on the array that causes an issue or could potentially cause a headache for the storage administrator. However, not all pro-active cases might be important to you. If you want to get a list of all pro-active cases available on InfoSight, please follow the steps shown below.

Note: Unchecking a condition equals disabling it.

Please log in to Nimble Storage’s InfoSight and go to the Wellness tab.

InfoSight_Wellness

When you click Case Creation Options, you’ll get an overview of all case creation conditions and can either set a snooze period or disable them.

Note: The Snooze Period indicates after how many days a new case for an existing problem will be created. If the Snooze Period has been set to 1, a new case will be created every day until the actual problem has been resolved.

Manage Case Creation

 

Basically, InfoSight is a great all-in-one tool which even allows you to manage Nimble Storage’s pro-active case creation more efficiently. My next post will be about common log files on your ESXi host and how you can use them to your benefit while troubleshooting.

Silicon Valley VMUG – Double-Take & VSAN

Today, I attended my first Silicon Valley VMUG at the Biltmore Hotel and Suites in San Jose, CA. Vision Solutions presented their software Double-Take, which provides real-time high availability. Joe Cook, Senior Technical Marketing Manager at VMware, provided an overview of VSAN and its requirements.

VMUG_Silicon_Valley

I took a couple of notes for both presentations and summarized the most important points below:

Double-Take Availability

  • Allows P2V, V2P, P2P and V2V migrations across hypervisors
  • Provides hardware- and application-independent failover
  • Monitors availability and provides alerting functionality via SNMP and email
  • Supports VMware vSphere 5.0 and 5.1, as well as Microsoft Hyper-V (server and role) 2008 R2 and 2012
  • Full server migration and failover is only available for Windows; a Linux version will be available in Q4.

Double-Take Replication

  • Uses byte-level replication which continuously watches for changes and transfers them
  • Either real-time or scheduled
  • Replication can be throttled

Double-Take Move

  • Provides file and folder migration
  • Does NOT support mounted file shares; the disk needs to show up as a local drive

 

VMware Virtual SAN (VSAN) by Joe Cook

Hardware requirements:

  • Any Server on the VMware Compatibility Guide
  • At least 1 of each
    • SAS/SATA/PCIe SSD
    • SAS/NL-SAS/SATA HDD
  • 1Gb/10Gb NIC
  • SAS/SATA controllers (RAID controllers must work in “pass-through” mode or RAID 0)
  • 4GB to 8GB (preferred) USB or SD cards

Implementation requirements:

  • Minimum of 3 hosts in a cluster configuration
  • All 3 hosts must contribute storage
  • vSphere 5.5 U1 or later
  • Maximum of 32 hosts
  • Locally attached disks
    • Magnetic disks (HDD)
    • Flash-based devices (SSD)
  • 1Gb or 10Gb (preferred) Ethernet connectivity
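
Although VSAN itself is configured through the vSphere Web Client (see the notes further down), ESXi 5.5 also ships an esxcli vsan namespace that I find handy for checking a host’s view of the cluster. This is just a quick sketch, not something covered in the session:

    ~ # esxcli vsan cluster get     # shows the cluster UUID, the local node state and the member count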

Virtual SAN Datastore

  • Distributed datastore capacity, aggregating disk groups found across multiple hosts within the same vSphere cluster
  • Total capacity is based on magnetic disks (HDDs) only.
  • Flash-based devices (SSDs) are dedicated to VSAN’s caching layer

Virtual SAN Network

  • Requires a dedicated VMkernel interface for Virtual SAN traffic
    • Used for intra-cluster communication and data replication
  • Standard and Distributed vSwitches are supported
  • NIC teaming – used for availability not for bandwidth
  • Layer 2 Multicast must be enabled on physical switches
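
For reference, the dedicated VMkernel interface mentioned above can also be tagged for Virtual SAN traffic from the host itself. A rough sketch – vmk1 is an assumed interface name, and in most deployments you would simply enable “Virtual SAN traffic” on the VMkernel port in the Web Client instead:

    ~ # esxcli vsan network ipv4 add -i vmk1     # tag vmk1 for Virtual SAN traffic
    ~ # esxcli vsan network list                 # confirm which vmk interfaces carry VSAN traffic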

Virtual SAN Scalable Architecture

  • VSAN provides a scale-up and scale-out architecture
    • HDDs are used for capacity
    • SSDs are used for performance
    • Disk Groups are used for performance and capacity
    • Nodes are used for compute capacity
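
To make the disk group idea a bit more concrete: each disk group pairs one SSD (cache) with one or more HDDs (capacity). The Web Client handles this when you claim disks, but the host-level equivalent looks roughly like the sketch below, where the naa.* names are placeholders for your own SSD and HDD device identifiers:

    ~ # esxcli vsan storage list                                        # disks already claimed by VSAN on this host
    ~ # esxcli vsan storage add -s naa.SSD_DEVICE -d naa.HDD_DEVICE     # pair an SSD with an HDD into a disk group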

Additional information

  • VSAN is a cluster level feature like DRS and HA
  • VSAN will be deployed, configured and managed through the vSphere Web Client only
  • Hands-on labs are available here