VMware VCSA – SSL Certificate Verification Failed

Today, I changed the IP address of my VMware vCenter Server Appliance and was greeted by an SSL certificate verification failed error message when I tried to log in to the vSphere Web Client on the new IP address.

Apparently, the VCSA does not automatically regenerate its SSL certificate after you change the IP address and/or hostname.

To have the VCSA generate a new SSL certificate, follow the steps below:

  1. Log in to your VCSA console (https://vcsa:5480).
  2. Go to the Admin tab, set Certificate regeneration enabled to Yes, and save the setting.
    This will make sure a new SSL certificate is generated every time you reboot your VCSA instance.
  3. Finally, go to the System tab and reboot the VCSA instance to have a new certificate generated. Note: Rebooting the VCSA can take up to 10 minutes.
  4. Once the VCSA is back up and all services are started, you can log in to the vSphere Web Client. The SSL certificate error should no longer be present.
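
To confirm that a new certificate was actually issued after the reboot, you can inspect it from any machine with OpenSSL installed. A quick check, assuming vcsa is a placeholder for your appliance's hostname or new IP:

# Print the subject and validity dates of the certificate the VCSA now serves
echo | openssl s_client -connect vcsa:443 2>/dev/null | openssl x509 -noout -subject -dates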

Cannot Connect To VMware Horizon Connection Server

I've spent the last couple of days deploying a small VDI setup in my home lab and getting it up and running.
After painful hours of setting up Active Directory, DHCP, DNS, a VMware Horizon Connection Server, and a Horizon Composer Server, I thought I was all set to connect to my amazingly slow VDI desktops, sitting on 7200 RPM spindles.

I tried connecting with my VMware Horizon Client to my Connection Server and was welcomed by an unimpressive error message: Error: Unable to resolve server address: nodename nor servername provided, or not known

One would think this message means one of the following:

  • Wrong IP/DNS name provided
  • Some service is not running
  • Something is seriously screwed up

After doing some research on the error, I was able to resolve it by unchecking Use Secure Tunnel connection to desktop in the Connection Server settings.
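
If you run into the same error, it is worth checking name resolution first. With the secure tunnel enabled, the client gets redirected to the External URL configured on the Connection Server, so a plausible explanation is that this hostname does not resolve from the client's side, producing exactly this message. A quick way to test, where horizon-cs.lab.local is a placeholder for your own External URL host:

nslookup horizon-cs.lab.local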

Jumbo Frames – Do It Right

Configuring jumbo frames can be such a pain if it doesn't get done properly. Over the last couple of years, I have seen many customers with mismatched MTUs due to improperly configured jumbo frames. Done properly, jumbo frames can increase the overall network performance between your hosts and your storage array, and they are recommended if you have a 10GbE connection to your storage device. However, if they are not configured properly, jumbo frames quickly become your worst nightmare. I have seen them cause performance issues, connection drops, and even ESXi hosts losing storage devices.

Now that we know what kind of issues jumbo frames can cause, and that they are advisable with a 10GbE connection to your storage device, let's discuss some details about jumbo frames:

  • Larger than 1500 bytes
  • Many devices support up to 9216 bytes
    • Refer to your switch manual for the proper setting
  • Most people refer to jumbo frames as an MTU of 9000 bytes
  • Misconfiguration often leads to an MTU mismatch
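
In practice, you typically set the hosts and the storage array to an MTU of 9000 and the switch to its maximum supported value (often 9216); the extra headroom on the switch side leaves room for any additional encapsulation headers.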

The steps below offer guidance on how to set up jumbo frames properly:

Note: I recommend scheduling a maintenance window for this change!

On your Cisco Switch:

Please take a look at this Cisco page which lists the syntax for most of their switches.
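
For reference, and only as a sketch to verify against that page, enabling jumbo frames looks like this on many Catalyst and Nexus platforms:

! Catalyst (global setting, requires a reload):
system mtu jumbo 9000

! Nexus and other per-interface platforms:
interface Ethernet1/10
  mtu 9216
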
Once the switch ports have been configured properly, we can go ahead and change the networking settings on the storage device.

On Nimble OS 1.4.x:

  1. Go to Manage -> Array -> Edit Network Addresses
  2. Change the MTU of your data interfaces from 1500 to jumbo

On Nimble OS 2.x:

  1. Go to Administration -> Network Configuration -> Active Settings -> Subnets
  2. Select your data subnet and click on edit. Change the MTU of your data interfaces from 1500 to jumbo.

On ESXi 5.x:

  1. Connect to your vCenter using the vSphere Client
  2. Go to Home -> Inventory -> Hosts and Clusters
  3. Select your ESXi host and click on Configuration -> Networking
  4. Click on Properties of the vSwitch which you want to configure for jumbo frames
  5. Select the vSwitch and click on Edit.
  6. Under “Advanced Properties”, change the MTU from 1500 to 9000 and click OK.
  7. Next, select your VMkernel port and click on Edit.
  8. Under “NIC settings” you can change the MTU to 9000.
  9. Follow steps 7 & 8 for all your VMkernel ports within this vSwitch.
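
If you prefer the command line, the same change can be made from an SSH session on the host. A minimal sketch, assuming a standard vSwitch named vSwitch1 and a VMkernel port vmk1 (adjust both names to your environment):

# Raise the MTU on the standard vSwitch
esxcli network vswitch standard set -v vSwitch1 -m 9000

# Raise the MTU on each VMkernel port
esxcli network ip interface set -i vmk1 -m 9000

# Verify the new settings
esxcli network ip interface list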

After you have changed the settings on your storage device, switch, and ESXi host, log in to your ESXi host via SSH and run the following command to verify that jumbo frames are working from end to end:

vmkping -d -s 8972 -I vmkport_with_MTU_9000 storage_data_ip

If the ping succeeds, you've configured jumbo frames correctly. (The -d flag disallows fragmentation, and the 8972-byte payload plus 28 bytes of IP and ICMP headers adds up to a full 9000-byte packet.)

Silicon Valley VMUG – Double-Take & VSAN

Today, I attended my first Silicon Valley VMUG at the Biltmore Hotel and Suites in San Jose, CA. Vision Solutions presented their software Double-Take, which provides real-time high availability. Joe Cook, Senior Technical Marketing Manager at VMware, provided an overview of VSAN and its requirements.

I took a couple of notes for both presentations and summarized the most important points below:

Double-Take Availability

  • Allows P2V, V2P, P2P, and V2V migrations, including cross-hypervisor
  • Provides hardware- and application-independent failover
  • Monitors availability and provides alerting via SNMP and email
  • Supports VMware 5.0 and 5.1, as well as Microsoft Hyper-V Server and Role 2008 R2 and 2012
  • Full server migration and failover only available for Windows. Linux version will be available in Q4.

Double-Take Replication

  • Uses byte-level replication which continuously looks out for changes and transfers them
  • Either real-time or scheduled
  • Replication can be throttled

Double-Take Move

  • Provides file and folder migration
  • Does NOT support mounted file shares. Disk needs to show as a local drive

 

VMware Virtual SAN (VSAN) by Joe Cook

Hardware requirements:

  • Any Server on the VMware Compatibility Guide
  • At least 1 of each
    • SAS/SATA/PCIe SSD
    • SAS/NL-SAS/SATA HDD
  • 1Gb/10Gb NIC
  • SAS/SATA controllers (RAID controllers must work in “pass-through” mode or RAID0)
  • 4GB to 8GB (preferred) USB, SD Cards

Implementation requirements:

  • Minimum of 3 hosts in a cluster configuration
  • All 3 hosts must contribute storage
  • vSphere 5.5 U1 or later
  • Maximum of 32 hosts
  • Locally attached disks
    • Magnetic disks (HDD)
    • Flash-based devices (SSD)
  • 1Gb or 10Gb (preferred) Ethernet connectivity

Virtual SAN Datastore

  • Distributed datastore capacity, aggregating disk groups found across multiple hosts within the same vSphere cluster
  • Total capacity is based on magnetic disks (HDDs) only.
  • Flash based devices (SSDs) are dedicated to VSAN’s caching layer
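
To put the capacity point in perspective, a hypothetical three-node cluster in which each host contributes one disk group of 4 x 1TB HDDs plus one 400GB SSD would present a datastore of roughly 12TB raw capacity; the three SSDs serve only the caching layer and add nothing to the datastore size.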

Virtual SAN Network

  • Requires a dedicated VMkernel interface for Virtual SAN traffic
    • Used for intra-cluster communication and data replication
  • Standard and Distributed vSwitches are supported
  • NIC teaming – used for availability not for bandwidth
  • Layer 2 Multicast must be enabled on physical switches
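
As a side note, tagging a VMkernel interface for Virtual SAN traffic can also be done from the command line. A minimal sketch, assuming vmk2 is the dedicated interface (a placeholder name):

# Tag vmk2 to carry Virtual SAN traffic
esxcli vsan network ipv4 add -i vmk2

# Confirm which interfaces carry VSAN traffic
esxcli vsan network list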

Virtual SAN Scalable Architecture

  • VSAN provides scale up and scale out architecture
    • HDDs are used for capacity
    • SSDs are used for performance
    • Disk Groups are used for performance and capacity
    • Nodes are used for compute capacity

Additional information

  • VSAN is a cluster level feature like DRS and HA
  • VSAN is deployed, configured, and managed through the vSphere Web Client only
  • Hands-on labs are available here