Category: Monitoring

CloudPhysics Review

March 5th, 2014 at Virtualization Field Day 3

This is one company that I’ve been anxious to meet for a while now, and I am really glad that I got the chance at Virtualization Field Day 3. They have a unique offering: collective intelligence for IT data sets.

First up in the presentation was John Blumenthal (who happens to be the ex-director of storage at VMware). During his short introduction to the company, the slide deck had an interesting yet straight-to-the-point phrase, “Answers to primitive questions”, meaning answers about how the physics of things actually operate.

Progress for ROI

They believe that this progress must happen above the automation level: quality of service (QoS) and service level agreements (SLAs) need to be analyzed and ingested into the product’s analytics to determine the next course of action.

They also posed the age-old question, “Can a private cloud match the operations of a large-scale public offering?” Their answer is that all companies must be able to use the same techniques and methodologies. With CloudPhysics, this is done through an aggregation of data from all aspects of the private cloud.

How is it deployed?

The product is delivered as a SaaS model through a vApp virtual appliance and is deployed into a customer’s vCenter through standard techniques. The product is lightweight and consumes minimal resources.

How it works

The single vApp collects and scrubs the data from vCenter. Once the process is complete, the information is pushed to CloudPhysics for analysis. The information is stored in an anonymized format to meet regulatory compliance requirements such as PCI. They mentioned in the presentation that even if the information were looked at, nothing ties the data points to any particular company.
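To make that idea concrete, here is a minimal sketch of how that kind of scrubbing could work, using a salted one-way hash so the metrics stay useful while identities cannot be recovered. The field names, salt, and anonymize helper are my own assumptions for illustration, not CloudPhysics’ actual pipeline:

```python
import hashlib

# Hypothetical illustration of one-way scrubbing: a salted hash replaces
# identifying fields so the metrics remain useful but cannot be tied back
# to a specific company. NOT CloudPhysics' actual pipeline.
SALT = b"per-customer-random-salt"  # assume generated once per installation

def anonymize(value: str) -> str:
    """Replace an identifying string with a stable, irreversible token."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

record = {"vm_name": "prod-sql-01", "host": "esx01.corp.local", "iops": 1450}
scrubbed = {
    "vm_name": anonymize(record["vm_name"]),
    "host": anonymize(record["host"]),
    "iops": record["iops"],  # raw metrics are kept; only identities are masked
}
print(scrubbed)
```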

5 Minutes to Analytics Delivery

One of the interesting points they made was that you can start analyzing your collected data within 5 minutes, which is something I would like to test in the lab, since many products on the market take weeks to deliver tangible results.

CloudPhysics has a datacenter simulator that they run customer data sets through to analyze and recommend changes in the environment. This service is included in the subscription pricing.

Datacenter Simulator analysis can be done on a per-VM basis, and the cache performance analysis can determine the right cache size and the tweaks the customer needs to make to maximize the configuration.
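For a feel of what trace-driven cache analysis means, here is a toy sketch that replays a made-up block I/O trace against LRU caches of a few sizes and reports the hit ratio each would achieve. The real simulator is far more sophisticated; this is only my own illustration of the concept:

```python
from collections import OrderedDict

# Toy trace-driven simulation of per-VM cache sizing: replay an I/O trace
# against LRU caches of different sizes and report each hit ratio.
# Purely illustrative, not the CloudPhysics simulator.
def lru_hit_ratio(trace, cache_blocks):
    cache, hits = OrderedDict(), 0
    for block in trace:
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # mark as most recently used
        else:
            cache[block] = True
            if len(cache) > cache_blocks:  # evict least recently used
                cache.popitem(last=False)
    return hits / len(trace)

io_trace = [1, 2, 3, 1, 2, 4, 1, 5, 2, 1, 3, 2]  # hypothetical VM block trace
for size in (2, 4, 8):
    print(f"{size} cache blocks -> {lru_hit_ratio(io_trace, size):.0%} hit ratio")
```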

The Datastore analysis tool has two primary functions (see the sketch after this list):
1) It highlights the contention periods.
2) It determines which VMs were affected and which ones caused the contention.
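As a rough illustration of those two steps, the sketch below flags intervals where datastore latency breaches a threshold, then ranks the VMs active in those intervals by IOPS to separate likely culprits from victims. The sample data, 20 ms threshold, and “top IOPS = culprit” heuristic are all my own assumptions, not CloudPhysics’ actual method:

```python
# Step 1: flag contention periods; step 2: assign likely blame by IOPS.
samples = [
    # (minute, vm, latency_ms, iops) -- invented sample data
    (1, "web-01", 4, 120), (1, "sql-01", 5, 300),
    (2, "web-01", 35, 110), (2, "sql-01", 38, 2900),  # contention period
    (3, "web-01", 6, 130), (3, "sql-01", 7, 310),
]
THRESHOLD_MS = 20  # assumed latency threshold

contended = {t for t, _, lat, _ in samples if lat > THRESHOLD_MS}
print("contention periods (minutes):", sorted(contended))

for t in sorted(contended):
    window = [(vm, iops) for m, vm, _, iops in samples if m == t]
    culprit = max(window, key=lambda x: x[1])  # heaviest I/O producer
    victims = [vm for vm, _ in window if vm != culprit[0]]
    print(f"minute {t}: likely culprit {culprit[0]} ({culprit[1]} IOPS); "
          f"affected: {victims}")
```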

Predicting Potential Outages

The product identifies problem points through hardware analysis on the compute side, as well as through other data points that adversely affect the virtualization environment.

We were then shown a demo that was delivered by Raj Raja from product management.

Finding vSphere Operations Hazards

Applications are called “cards” and are delivered in segments such as datastore performance, datastore space, memory, etc. Custom “decks” can be created, which are simply collections of cards and metrics to review and analyze.

Another nice function is the ability to simulate what will occur if changes are made to the environment before you implement them. This could reduce the lab time needed to validate configurations for change controls.

Root cause analysis with CloudPhysics

The datastore focus is to correlate information from datastore activities (pulling in data from backups, Storage DRS, etc.) and then form a relationship structure from which performance metrics can be determined.
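A bare-bones way to picture that correlation is to line latency spikes up against known activity windows by timestamp. The events and times below are invented for illustration and say nothing about CloudPhysics’ internals:

```python
from datetime import datetime

# Match latency spikes against known datastore activities by time overlap.
# Event names and timestamps are made up for this example.
events = [
    ("backup job",   datetime(2014, 3, 1, 1, 0),  datetime(2014, 3, 1, 2, 0)),
    ("sDRS migrate", datetime(2014, 3, 1, 14, 5), datetime(2014, 3, 1, 14, 25)),
]
spikes = [datetime(2014, 3, 1, 1, 30), datetime(2014, 3, 1, 9, 0)]

for spike in spikes:
    cause = next((name for name, start, end in events if start <= spike <= end),
                 None)
    print(spike, "->", cause or "no matching activity (dig deeper)")
```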

I plan to have a follow-up conversation with them to find out more detailed information and hopefully get this stood up in the VMbulletin lab for further analysis.

Rick

Treating your Virtual Infrastructure like a Physical Datacenter

Many companies have very strict rules on who can enter the datacenter, and your VMware infrastructure should not be any different! Sure, there are various levels of access in datacenters; I’ve been in quite a few, and I sleep better at night knowing that the necessary precautions are taken to secure these facilities. Jason Boche wrote a great article back in 2009 that describes in detail how the security model works in vCenter and shows some of the pitfalls of granting too much access for what seem to be minimal rights.

What we need to understand is that the virtual infrastructure should be protected in the same manner we employ in the physical world. If the wrong person got in as an administrator, it could spell disaster for your entire infrastructure/datacenter. The following recommendations may seem too stringent for some folks, but we should not simply hand out access just to “get things done”. VMware vCenter has some really nice predefined templates that you can use to minimize the attack footprint while allowing different levels of administrators the ability to do their jobs, but always double-check what permissions they grant.

Here are a few guidelines you should follow:

  • Only give a few people full administrator rights to your environment. Treat this like the Enterprise Admins group of your Active Directory forest (if you run Windows).
  • Service accounts and scripted tasks should use an account with the bare minimum of rights needed to carry out their task. Don’t give them admin rights because it’s easy.
  • Do not allow SSH access as root, and turn off SSH when it is not needed. vCenter alerts you when SSH is enabled; never suppress that alert, since leaving SSH on widens the attack footprint of your cluster nodes.
  • vCenter best practice is to provide access via groups instead of individual user accounts.
  • Be careful granting access at the root level, since this gives users access across multiple vCenter Servers if you are running in linked mode.
  • Perform a monthly or quarterly audit to ensure security, especially if you have more than two administrators in your environment (see the scripted sketch after this list).
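To show what such an audit could look like in practice, here is a hedged pyVmomi sketch (my own, not from any VMware guide) that flags Administrator permissions granted to individual users rather than groups, and hosts where SSH is left running. The hostname and credentials are placeholders, and you should verify the role name against your own environment:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Illustrative audit sketch. Connection details are placeholders.
ctx = ssl._create_unverified_context()  # lab only; use trusted certs in prod
si = SmartConnect(host="vcenter.lab.local", user="audit@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

# 1. Flag Administrator permissions granted to individual users, not groups.
#    The built-in Administrator role's internal name is "Admin".
authz = content.authorizationManager
role_names = {r.roleId: r.name for r in authz.roleList}
for perm in authz.RetrieveAllPermissions():
    if role_names.get(perm.roleId) == "Admin" and not perm.group:
        print(f"user '{perm.principal}' holds Admin on '{perm.entity.name}' "
              f"(propagate={perm.propagate})")

# 2. Flag hosts where the SSH service (key TSM-SSH) is left running.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for svc in host.configManager.serviceSystem.serviceInfo.service:
        if svc.key == "TSM-SSH" and svc.running:
            print(f"SSH is enabled on {host.name}")

Disconnect(si)
```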

Changes in the authentication mechanism

vSphere 5.1, along with vCenter 5.1, comes with a requirement to run the environment with Single Sign-On (SSO). The implementation has a few components to it, and I’m sure many of you have already tested this out in the lab or have deployed some parts of it already. The following diagram is a depiction of the authentication process and how your credentials (tokens) are sent to the endpoint.

VMware SSO Login

With the implementation of SSO, VMware is trying to rein in and protect authentication to the core components of the virtual infrastructure, and by doing your part with these access control guidelines, you will ensure a stable and secure virtual datacenter.

Rick

Xangati at VMworld 2011

I had a chance to stop by and talk to a few folks at the Xangati booth at VMworld and was particularly interested in their ability to “replay” past events in a DVR-like fashion for quick analysis of problematic areas. This is done with their Management Dashboards product line (XMD), which has two parts:

  • Xangati VI Dashboard – Tracking for both the physical and virtual components, including application insight for performance. This is done with their Health Engine, as described here.
  • Xangati VDI Dashboard – This is the tracking mechanism for the virtual desktop infrastructure side of the environment.

The product is deployed as either a physical device or a virtual appliance in OVF format. The XMD virtual appliance uses what they call the Xangati Flow Summarizer (XFS), which pulls information from ESXi and uses NetFlow to monitor traffic on the physical network.
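NetFlow itself is an open, well-documented format, so as a hedged aside (not Xangati’s code), here is a tiny Python listener that unpacks NetFlow v5 packets to show the kind of raw flow data such a collector consumes. It follows the published v5 layout and waits for a single packet on the standard port:

```python
import socket
import struct

# Minimal NetFlow v5 listener, for illustration only. Field layout follows
# the published NetFlow v5 spec: a 24-byte header, then 48-byte records.
HEADER = struct.Struct("!HHIIIIBBH")             # version, count, uptime,
                                                 # secs, nsecs, sequence,
                                                 # engine type/id, sampling
RECORD = struct.Struct("!IIIHHIIIIHHBBBBHHBBH")  # one 48-byte flow record

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 2055))  # conventional NetFlow collector port

data, addr = sock.recvfrom(65535)  # blocks until an exporter sends a packet
version, count, *_ = HEADER.unpack_from(data, 0)
print(f"NetFlow v{version} packet from {addr[0]} with {count} flows")
for i in range(count):
    rec = RECORD.unpack_from(data, HEADER.size + i * RECORD.size)
    src, dst = rec[0], rec[1]          # source/destination IPs as integers
    pkts, octets = rec[5], rec[6]      # packet and byte counts
    print(socket.inet_ntoa(struct.pack("!I", src)), "->",
          socket.inet_ntoa(struct.pack("!I", dst)), f"{pkts} pkts / {octets} B")
```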

Steve Rodgers was able to give us a quick rundown of their product in this short video: