vCAC – Adding Domain Selection to IaaS Blueprints

So I thought I would take a break from blogging about vCOps and go through some frequently asked questions for vCloud Automation Center (vCAC). vCAC 6 is shaping up to be a great product, and with the power of integrated vCO there is almost nothing 🙂 it can’t do. Before I start going into some of the vCO integration and workflows I thought I would address questions I have been fielding lately about OOTB functionality.

The Problem?

How do I allow users to select an AD domain for a Windows blueprint to join, without creating a separate blueprint for each domain?

The Answer

There are a few different ways to solve this issue. However, I find the easiest is to leverage the CloneSpec custom property together with a Property Dictionary to make it user friendly. This allows you to use just one blueprint and have the user select the domain from a drop-down list, avoiding blueprint sprawl.
More information on Custom Properties can be found in the Custom Property Reference.

So how is it done?

 

The Solution

1. The first thing that needs to be done is to create vCenter Customization Specifications for each of the domains you want the user to request. There is nothing special about this step: simply create one customization spec per domain, and put the name of each domain in the name of the customization so it is clear to the end user.

vCenterCustomisations
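If you have a lot of specifications, a quick way to grab the exact names (you will need them verbatim in step 4 below) is via the vSphere API. Here is a minimal sketch using pyVmomi, assuming it is installed; the vCenter host and credentials are placeholders, and depending on your setup you may need to handle self-signed certificates.

```python
# List vCenter Customization Specification names via pyVmomi.
# Host, user and password below are placeholders for your environment.
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="changeme")
try:
    for spec in si.RetrieveContent().customizationSpecManager.info:
        print(spec.name)
finally:
    Disconnect(si)
```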

2. Create your blueprint in vCAC under Infrastructure. This should follow a standard vSphere Windows clone; however, there are a few things you want to do differently from normal. First, leave the Customization spec: field blank, as this will be overwritten by the user’s selection. Next, navigate to the Properties tab, create a new property called CloneSpec and select the Prompt User checkbox. Save the blueprint.

CloneSpec

3. Open the Property Dictionary under Blueprints. Create a new Property Definition named CloneSpec. Enter an appropriate display name and description, and choose whether the field should be mandatory. Save the property, then select Edit under Property Attributes.
CloneSpecPD

4. Create a new Property Attribute and select ValueList as the Type. Enter a Name (the name really doesn’t matter) and for the values enter the name of each vCenter Customization Specification, separated by commas, for example Win-DomainA.local,Win-DomainB.local (example names only; the values must match your Customization Specification names exactly). Save and close the list and the Property Dictionary.

ValueList

5. You’re done! Publish the blueprint to the catalog, open a request and bam! You can now allow users to select which domain the VM will join using the OOTB IaaS functionality.

IaaSWindowsADSelectionForm

In my next few posts I will cover some more OOTB vCAC concepts as well as using vCO to provide some DaaS (Desktop as a Service) with Horizon View.

Stay tuned…..

Chris Slater out.

Tuning vCOps for your environment – Part 5 – Capacity Management Tuning

In this next post I will finish off discussing Capacity Management and give all my recommended Capacity Management Policy settings for production server and VDI environments, as well as the rationale behind each major policy decision. This post continues on from my last post, which covered the problems with capacity management reporting when policies are not set appropriately, as well as the differences between the allocation and demand models. A link to my last post on Capacity Management Models can be found here.

For this post we will focus on Section 3 of a vCOps Policy. These capacity management recommendations can be applied to a specific policy that you have created; however, they should be generic enough to use in your Default Policy to cover your entire production environment. I will call out when an option might have a different value in a dev/test or VDI type environment, in which case a specific policy might be created for that use case.

3a – Capacity and Time Remaining

Spikes and Peaks
One of the first things to check in your policy is making sure the Spikes and Peaks checkbox is checked. By selecting this option the VM Effective Demand is adjusted to take stress into account, sizing the average VM more towards peaks than just a total average. This is an important setting to check, as this box may be unchecked in your default policy depending on upgrades from previous vCOps versions.

SpikesandPeaks

As stated above, the Spikes and Peaks checkbox adjusts the Effective Demand of the “Average VM”. The Limited Demand reflects average demand for resources, so you can compare the difference Spikes and Peaks is making to the average calculation. If the Spikes and Peaks checkbox is unchecked, these values will be the same.
LimitedvsEffectiveDemand
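To see why peak-aware sizing matters, here is a toy illustration of how a plain mean differs from a peak-weighted figure. This is illustrative only, not the actual vCOps algorithm.

```python
# Illustrative only: mean vs peak-weighted sizing of the "Average VM".
# Eight CPU demand samples (%) with two short spikes.
samples = [10, 12, 11, 55, 13, 60, 12, 11]

mean = sum(samples) / len(samples)
peak_weighted = sorted(samples)[-2]   # roughly a high-percentile figure

print(f"mean demand: {mean:.0f}%")                 # 23%
print(f"peak-weighted demand: {peak_weighted}%")   # 55%
```

A VM sized on the 23% mean would be crushed during its regular spikes; sizing towards the peak-weighted figure is what the checkbox is getting at.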

Q: So you have recommended checking the Spikes and Peaks checkbox to create a more conservative Average VM size. Are there situations where I shouldn’t check this box?
A: Yes, there are several situations where you would want to leave this checkbox unchecked, including development and VDI environments. VDI is a good example: you would not expect all desktops to be busy at the same time, so checking this box in a VDI environment can produce a conservative Average VM size that is too large, and as a result you may not reach your target ROI in your VDI environment.

Physical vs. Usable Capacity:
When deciding the “Capacity Remaining based on” setting you have two options: Physical Capacity or Usable Capacity. This option determines how much “capacity” an ESXi host provides.
In almost all circumstances I would recommend Usable Capacity.
Q: Why?
A: When working out the capacity of a host, rarely would you want to use just its physical capacity. You need to take buffers into account, such as HA, CPU and Memory buffers. Not many customers would want to run an ESXi host to 100% CPU and Memory usage, would they?
This section will also be discussed in more detail in 3b – Usable Capacity.

Demand or Allocation by Compute Resource:
This section has been discussed in detail in my last post. However see below for my summary recommendations on Demand vs. Allocation in Production Server environments.
DemandvsAllocation SummaryRecommendations

3b – Usable Capacity

Now that I have recommended using Usable Capacity, we need to define what Usable Capacity actually is. As you would imagine, usable capacity is simply physical capacity with buffers and overheads applied for a variety of reasons.

Reserving resources for HA:
The first box you want to check is Use High Availability configuration, and reduce capacity. This one is important and should nearly always be checked. It is often something people overlook when using the cluster summary check method, as they forget that they have set up vSphere HA for N-1 or N-2. As a result you cannot fill all your hosts to 100%, because you need to plan for host failure. This checkbox does that planning for you.

Applying Resource Buffers (CPU, Memory, etc..):
With a HA buffer now applied, you will also want to add some CPU and Memory buffers. These buffers are important because, as I stated earlier, you don’t want to run your hosts at 100% memory utilization, for example, or swapping will start to occur. Here are some reasons to add buffers for certain scenarios (a worked example follows the screenshot below):

  • Keeping resources below 90% utilisation (host CPU and Memory)
  • Adding a capacity buffer for unexpected projects (this always happens)
  • Adding a CPU buffer for interactive or peaky (sub-hour spikes) server/VDI workloads (this is particularly important when using the Demand-only model)

UsableCapacity
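Here is that worked example, showing how quickly the HA and resource buffers compound. The numbers are my own illustrations, not vCOps internals.

```python
# Illustrative usable-capacity maths: physical capacity with HA and a
# memory buffer applied. Numbers are examples, not vCOps internals.
physical_mem_gb = 4 * 256      # 4 hosts x 256 GB
ha_buffer = 1 / 4              # N-1 on a 4-host cluster
mem_buffer = 0.10              # keep hosts below 90% memory utilisation

usable_mem_gb = physical_mem_gb * (1 - ha_buffer) * (1 - mem_buffer)
print(f"{usable_mem_gb:.0f} GB usable of {physical_mem_gb} GB physical")
# -> 691 GB usable of 1024 GB physical
```

Roughly a third of the physical memory is gone before a single VM is placed, which is exactly why Physical Capacity gives such optimistic numbers.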

 

3c – Usage Calculation

Last but not least we have the Usage Calculation screen. The first part of this section requires us to set the working week.

The working Week:
By default the working week is set to “All hours”. This needs to be changed to reflect the business periods of your environment. In most cases you would uncheck Saturday and Sunday and use a 9-5 working day. This step is important as it helps judge the size of the “Average VM” more accurately by not letting quiet periods skew the results. Some organizations may have busier periods at night rather than during the day; if this is the case, simply set the observation window accordingly.

Allocation Overcommit Ratios:
As discussed in 3a – Capacity and Time Remaining, Allocation Overcommit Ratios are vital when using the Allocation-based model. For CPU, for example, these levels affect the target vCPU to pCPU ratios; for Memory, they affect the level of target Memory overcommit.
Q: What should they be set to?
A: Well, as stated earlier, the CPU overcommit level depends on your organizational policy and your hardware type. This can be anywhere from 1:1 to 10:1.
Memory overcommit is far more straightforward. In a production server environment this should generally be set to 0%.
That’s right, 0%. This is because in most production environments Large Pages prevent Transparent Page Sharing from providing memory de-duplication benefits. This is well explained in KB 1021095. As a result we should generally be erring on the side of performance, not consolidation, and memory overcommitment is increasingly a thing of the past.

UsageCalculation
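To make the CPU ratio concrete, here is the simple arithmetic the allocation model is doing. The cluster size, ratio and current allocation are illustrative numbers only.

```python
# vCPU headroom under the allocation model at a given overcommit ratio.
pcpus = 4 * 16           # 4 hosts x 16 physical cores
ratio = 4                # 4:1 vCPU:pCPU policy
allocated_vcpus = 180    # vCPUs already allocated to powered-on VMs

remaining = pcpus * ratio - allocated_vcpus
print(f"{remaining} vCPUs remaining at {ratio}:1")  # 76 vCPUs remaining
```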

Final Word:

For my final word on Capacity Management here is a summary of my recommendations:

  • For production environments make sure the Spikes and Peaks checkbox is checked
  • Use “Usable Capacity”, not “Physical Capacity”
  • Use a Demand and Allocation model mix that works for your environment. After making changes, check the Average VM sizing to see how the changes have affected your environment.
  • Ensure the Use High Availability configuration, and reduce capacity checkbox is checked
  • Usable Capacity buffers are important; don’t be afraid to increase the default percentages!
  • Ensure a Work Week is set
  • Set your Allocation Overcommit Ratios appropriately. In most server environments the level of Memory overcommit should be 0%.

Tuning vCOps for your environment – Part 4 – Capacity Management Models

In the next part of the DefinedBySoftware vCenter Operations Manager series we will be going through the complicated but important topic of Capacity Management of vSphere environments. This part of the series will focus on Capacity Management theory for vCOps, with the next post containing my recommended policy settings for accurate capacity management reports.

First the problem….
One of the main features of vCOps is its ability to assist with capacity management of your virtual infrastructure. This is of great benefit to the virtual infrastructure admin, as in my opinion capacity management is something that a lot of organizations do poorly. However, one of the main issues I see customers face with vCOps is that the capacity management policies are not configured appropriately for their environment; misleading capacity reports result, and the feature ends up being ignored.

So Chris, can you sum up this problem in picture format?
Yes, this can be easily depicted in the mighty slatchlab. As you can see from the picture below, I have capacity remaining for 44 more virtual machines in this cluster. So what’s wrong with that???
CapReminingIssue

There are many traditional ways of managing capacity in a vSphere environment, such as resource tracking spreadsheets, traditional external capacity management tools and my favorite, the cluster summary check.
The cluster summary check is a capacity check that some vSphere admins use when their manager asks how much capacity is left in a cluster. It goes something like this.

Manager: “Hey Chris, how much capacity is left in the slatchprod cluster?”
Admin: “Let me check”.
Inside Admin’s head: “Ok hosts are around 55% memory usage and 3% CPU usage. There are 11 powered on VMs, so about another 8 or so should be the max”
Admin: “We are at around 55% so another 8 VMs and we will be out of resources.”
Manager: “Well vCOps reports I have 44 VMs left so what is going on?”
Admin: “Let me get back to you”
ClusterCheck
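For the curious, here is the admin’s mental arithmetic written out. The flaw is baked into the assumptions: every future VM looks like the current average, and no HA or buffer headroom is taken into account.

```python
# The "cluster summary check", written out. Values from the dialogue above.
powered_on_vms = 11
mem_usage = 0.55                              # cluster memory usage

avg_vm_share = mem_usage / powered_on_vms     # each VM ~5% of the cluster
headroom = (1 - mem_usage) / avg_vm_share     # VMs until 100% memory
print(f"~{headroom:.0f} more VMs")            # ~9, i.e. "about another 8"
```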

It is obvious that the cluster summary check is flawed (for many reasons); however, you can also see that the vCOps remaining capacity estimate seems very optimistic as well. Now, in this case I adjusted the capacity policy to give more of a worst case, but it shows the importance of tuning vCOps to give the right data.

The solution:
The solution is easier said than done: change the vCOps Policy to reflect your environment, taking into account some of my basic recommendations. After doing so you get a report like the one below, showing that only 2.3 VMs worth of capacity remain, which is far more realistic in my small environment.

CapRemainingResolved

So how did I do it?
In my next post I will detail my recommended capacity management policy settings for production server environments. However before I just blurt out the answer I need to discuss how I came up with the policy. This will help you come up with your own policy for your environment.

Demand vs. Allocation Models

Before I get into all the vCOps Capacity Management settings, the Demand vs. Allocation models need to be discussed, as they have such a massive impact in determining the size of the “Average VM” used for capacity planning. Below is a screenshot from a vCOps Policy where all this discussion is relevant (3a – Capacity and Time Remaining). The demand vs. allocation options give a sea of choices, so let’s go through which boxes are the right ones to check.
vCOpsSettingDemandvsAllocation

Before we go into the individual infrastructure items (CPU, Memory, Disk I/O, etc.), let’s discuss Demand vs. Allocation overall. Thanks to Ben Todd for the great slide below.

AllocationvsDemand

The image above gives a great list of pros and cons for allocation vs. demand, and although using both is appropriate in some cases, in others it may not be. Let’s use Container CPU as an example (Containers is the most relevant column, as it covers the ESXi Cluster, which is the object most commonly selected for capacity management).

Demand:
CPU Demand is a derived metric made up of multiple sub-metrics (in this case CPU Usage, CPU Ready, etc.). It is used to estimate the amount of CPU an object actually wants to consume. Although demand and usage are often identical, it is possible for demand to exceed usage, which would indicate resource contention. Demand is a useful way to manage capacity, as a virtual machine will rarely use all the CPU it has been configured with, which is the basic principle of overcommitment. You will also find that demand usually matches the Usage % metric observed inside vCenter.
ClusterCPUDemand
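As a hedged sketch of the idea (illustrative only, not the exact vCOps or vCenter formula), demand can be thought of as what the VM got plus what it was denied while waiting to be scheduled:

```python
# Illustrative CPU demand: usage plus the MHz the vCPU would have consumed
# during the time it spent ready-but-unscheduled. Not the exact formula.
def cpu_demand_mhz(used_mhz, ready_ms, interval_ms, core_mhz):
    contention_mhz = (ready_ms / interval_ms) * core_mhz
    return used_mhz + contention_mhz

# 1800 MHz used, 2s of CPU Ready in a 20s sample on a 2.6 GHz core:
print(cpu_demand_mhz(1800, 2000, 20000, 2600))  # 2060.0
```

When there is no contention the second term is zero and demand collapses to usage, which is why the two are often identical.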

Allocation:
Q: So if Demand is so great for CPU, why use Allocation at all?
A: There may be situations where you want to control the vCPU to pCPU ratio on your clusters.
So if you haven’t guessed already, the Allocation model for CPU affects the number of vCPUs that can be allocated per pCPU. It is important to note that the vCPU to pCPU ratio is set in section 3c – Usage Calculation. Failure to set this correctly can lead to over-optimistic or overly conservative capacity estimates. There may be situations where you would want to manage CPU capacity by allocation and this model would be preferred, for example a Business Critical Applications cluster where you want to ensure a 1:1 ratio of vCPU to pCPU for performance.
AllocationOvercommitRatios
Q: So what should my CPU Allocation Overcommitment Ratio be set to?
A: Well, that depends on your organizational policy, CPU type and speed, types of applications, etc.
In short it is often hard to set this value for production environments. So if you are in doubt about what the ratio should be, ensure that the CPU Container Allocation model is unchecked and simply rely on CPU Demand. Now you may be thinking “But what about workload spikes and a safety buffer?” That will be discussed in the next post, so relax.

What about Memory?
So I have discussed the CPU models, but what about Memory, should that be using Demand as well?
The short answer is: not usually.
Memory Demand is based on a variety of metrics; however, the main metric is Active Memory. Active Memory is often far lower than Consumed Memory. This is due to a variety of factors, and a great explanation of Active Memory can be found here. This can mostly be solved by ‘right-sizing’ VMs, however this is easier said than done. Therefore, when capacity planning by Memory Demand, the result might be over-optimistic and not suitable for production environments in a world of Large Memory Pages, where Transparent Page Sharing only takes effect at 94% host memory utilization. I will release another blog post on how we use Memory Demand for VM right-sizing after applying some additional tuning.
So, for my recommendation: ensure that for Memory you use the Allocation model and set the overallocation appropriately, as would be done for CPU.
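At 0% memory overcommit the allocation sum is refreshingly simple; here is a minimal sketch with illustrative numbers, reusing the usable-capacity figure from the buffer example in the previous post:

```python
# Memory headroom under the allocation model at 0% overcommit:
# usable physical memory minus *configured* (not active) VM memory.
usable_mem_gb = 691            # after HA and buffers
configured_vm_mem_gb = 520     # sum of configured memory, powered-on VMs

print(f"{usable_mem_gb - configured_vm_mem_gb} GB allocatable remaining")
# -> 171 GB allocatable remaining
```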

Ok, that’s great, but what about Disk Space, Disk I/O and Network I/O?
Simply put, I would say disable these altogether; however, use your judgement.
Disk Space usually does not work as a capacity management metric because Datastores and LUNs are created on demand by your SAN administrator (unless you pre-present all your storage in advance). As such vCOps doesn’t know how much capacity the actual SAN has left, and this resource will often be the most constraining if left enabled.

Disk I/O and Network I/O can be left enabled; however, I rarely find these are constraining factors when determining how many VMs to place on a cluster. Once again these are resources for which performance or capacity is externally managed, and they are usually not the main focus of vSphere cluster capacity management.

That’s all for now, folks. In my next post I will go through all my capacity management policy recommendations (with the exclusion of the Demand vs. Allocation model, as this was just covered).

Tuning vCOps for your environment – Part 3 – Creating and Applying Policies

In my last post, Part 2 – Badge Tuning, I discussed the importance of tuning badges via policies to accurately reflect different workloads in your environment. In this post I will discuss the importance of Intelligent Operations Groups and creating your own policies, and finally the role of the Default Policy.

Disclaimer:
Now, a few people might be stating “this is 101 stuff, what’s the go?” However, the reason I am blogging on this is that I see a lot of customers not taking advantage of this functionality, and as a result having an environment which is noisy and useless.

The Default Policy (The King of Policies):
De-fault: my two favorite words in the English language. The Default Policy is exactly that: it is the policy applied by default to all vCOps objects where another policy is not explicitly applied. As a result, OOTB the Default Policy applies to all vCOps objects until you specify a different one, and even in environments where numerous policies are created for different environments, functions and use cases, the Default Policy generally ends up staying the dominant one.
Because of this, the Default Policy is probably the most important policy to get right, after which other policies can be cloned from it.

There are three questions relating to the Default Policy that I get asked a lot, so I will bring them up here.
Q: Why even have other policies when you can just tune the heck out of the default one?
A: Easy. It is almost impossible to have one policy that fits your whole environment. For example, a Production Alerting and Capacity Management policy is going to be completely different to a Dev/Test policy.

Q: How do I know what Policy is applying to an object?
A: Simply select the parent object in the navigation window and select Environment -> Members.
AppliedPolicies

Q: What should I set my Default Policy to?
A: As a bonus I will provide my most commonly used Default Policy recommendations in my final post of the vCOps series.

Creating Intelligent Operations Groups:
Before we go crazy creating policies for different reasons, one of the first steps is to create groups for the policies to apply to. Policies can only be applied to groups, so even if you want to apply a policy to an entire cluster it needs to be in a group. It is also useful to know that all vCenter folders are automatically created as groups, so these can be leveraged if you’re great at organizing your VMs inside vCenter. However, my complaint with folders in vCenter is that a VM can only be in one folder, whereas the same is not true for groups.

I’m not going to go into the step-by-step procedure for creating groups as this is well covered in the vCOps documentation (Pg 76). However, I will provide my tips for creating groups:

  • Dynamic Groups are preferred, as they auto-update as VMs are added to your environment. However, they usually rely on naming standards or something else the objects have in common (see the sketch after this list). vCenter Infrastructure Navigator (VIN) can also be used to automatically discover applications and populate dynamic groups accordingly.
  • Use the preview button (shown in the picture) to check that your rule logic is sound before creating the group.
  • If you have objects in multiple groups, record in the name or description which one has the policy applied. An object can only have one policy applied.
  • Groups also provide a great way of creating mini-dashboards for functions or applications without the need for the Custom UI.

DynamicGroups
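Here is the kind of naming-standard matching a dynamic group rule relies on; this is illustrative only, and the VM names are made up.

```python
import re

# Hypothetical naming standard: production SQL VMs are "PROD-SQL<nn>".
vms = ["PROD-SQL01", "PROD-SQL02", "DEV-SQL01", "PROD-WEB01"]
members = [vm for vm in vms if re.match(r"PROD-SQL\d+$", vm)]
print(members)  # ['PROD-SQL01', 'PROD-SQL02']
```

If your VM names carry no such pattern, dynamic groups have nothing to key on, which is where VIN-discovered applications come in handy.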

Creating and Applying Polices:
Again, I’m not going to go into the step-by-step procedure for policy creation as this is well covered in the vCOps documentation (Pg 86). However, again, here are my tips on the topic:

  • Clone the policy off your golden Default Policy (Default Policy recommendations will be made at the end of the series) and change only the badges or settings that need to be changed. Don’t start from scratch every time.
  • Don’t create policies for the sake of it, e.g. giving every cluster its own policy. This creates unnecessary overhead if all policies need to be updated for a common change.
  • Label, or record in an external document, what changes have been made from the Default Policy. Descriptions like “Exchange Policy” don’t assist others in knowing what you have changed.

Q: If I have a VM in two groups and each has a policy assigned, how do I know which policy will take effect?
A: When two or more policies are applied to an object, the policy order determines which policy is applied. The policy order is set by dragging the policies around in the Manage Policies window, as shown below.

PolicyOrder

Final Word:

  • Use policies where appropriate, applied to groups that you create. This is the heart of tuning vCOps.
  • Understand which policies are applying to which objects.
  • Get the Default Policy right! (Help will come for that later.)

In my next Post I will begin discussing the important subject of Capacity Management Models and Tuning vCOps to give capacity assessments based on your environment and your designs.

Tuning vCOps for your environment – Part 2 – Badge Tuning

Welcome to Part 2 on Tuning vCOps for your Environment – Badge Tuning. For those who missed my first post on Alerts, it can be found here.

For Part 2 we are going to focus on adjusting badge thresholds in vCOps and why it is necessary for a noise-free vCOps environment.
In case you are a little rusty, vCOps is essentially broken down into 3 major badges and 8 minor badges. The minor badges are used to make up the score of the major badges.

They are broken down as follows:

  • Health
    • Workload
    • Anomalies
    • Faults
  • Risk
    • Time Remaining
    • Capacity Remaining
    • Stress
  • Efficiency
    • Reclaimable Waste
    • Density

MajorMinorBadges

By default the various major and minor badges change state at certain levels. E.g. the Workload badge for VMs will be Green at 0-80, Yellow at 80-89, Amber at 90-94 and Red at 95-100. It is important to note that as the badge progresses through the various alert levels, a badge alert is raised. E.g. a VM with 100% Workload due to high CPU will generate 3 badge alerts. Now, if this is abnormal behavior it is probably a good thing that alerts are generated, because it probably warrants your attention. However, what if this is normal behavior and you don’t care? Well, then it is just noise that distracts you from other potential issues that may be occurring.
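Those default VM Workload boundaries can be expressed as a simple lookup; adjusting a policy effectively moves these cut-offs. A sketch using the thresholds quoted above:

```python
# Default VM Workload badge thresholds from the example above.
def workload_badge(score):
    if score >= 95: return "red"
    if score >= 90: return "amber"
    if score >= 80: return "yellow"
    return "green"

for s in (75, 85, 92, 100):
    print(s, workload_badge(s))  # green, yellow, amber, red
```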

So Chris, how do I deal with this tromboner?
Well, let me give you a classic scenario and then the steps needed to resolve the situation.

Scenario:
Every Datastore object is generating a Workload alert and a yellow or amber Workload badge, as per the example screenshots below.

DiskSpaceAlert
DatastoreOps

Why is this occurring?:
This is a common situation for Datastores, as the Workload badge for Datastores is composed of the derived attributes Disk Space and Disk I/O. The reason this is so common is that most designers fill their VMFS Datastores to 80% or 90% utilization before marking the Datastore as full.

As the Disk Space metric in my example is at 89%, it has tripped the Yellow (Warning) badge for Workload (as Workload will be set by the most constraining resource). This in turn has generated an alert which I now have to pay attention to.
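In other words, something like this is happening (illustrative only):

```python
# Workload follows the most constrained sub-metric, so a nearly full
# datastore trips the badge even when I/O is idle. Illustrative numbers.
disk_space_pct = 89
disk_io_pct = 12

workload = max(disk_space_pct, disk_io_pct)
print(workload)  # 89 -> Yellow under the default thresholds
```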

What should we not do?:
You may be thinking “I know, just disable the alert like you explained in Part 1”. However, in this case that is not appropriate. The reason, as I explained in Part 1, is that disabling the alert does not affect the badge state. As such, simply disabling the alert would keep Workload as a critical badge, and therefore keep affecting the health score of the Datastore and of its parent objects. My Health heatmap for my Cluster, vCenter or World would still contain a sea of red.

So why not simply apply the built-in vCOps policy “Ignore these Objects” to all Datastores?
Although this would work to a degree, it would also remove all the other badges on the Datastores that we still care about. Anomalies, for example, are incredibly useful, as are Faults. Applying this policy would disable those unnecessarily.

What should we actually do?:
The answer is – Create a Datastore Group -> Create an All Datastores Policy with the Workload Badge adjusted -> Apply the new Policy to the group.

I will cover creating and applying policies in detail in my next post; however, in essence a new policy should be created and applied to reflect the environment.

The new policy can have the Warning and Immediate thresholds disabled altogether (simply left-click on the slider box), with the Critical threshold still enabled at 95%. This will ensure that if someone over-provisions a Datastore beyond normal policy, it is alerted on.
NewDatastorePolicyBadges

After this has been applied you can see the result ->
FixedDatastoreWorkload

The Workload Badge is now Green and the Alert has disappeared automatically.

Final Word:
Although this example was a Datastore, there may be dozens of other objects in your environment that need a similar adjustment, e.g. VMs that constantly run at 100% CPU, mail servers with very high disk I/O, etc. In these cases they probably warrant a specific policy which disables (or better yet, adjusts) badges as your environment dictates.

In my next post we will discuss Policy and Intelligent Operations Group creation to simplify and automate applying policies for these sorts of scenarios.

Tuning vCOps for your environment – Part 1 – Alert Sprawl

As my first post to DefinedBySoftware.com I thought I would post a multi-part series on tuning and using vCenter Operations Manager aka vCOps.

vCOps, in my personal opinion, is a fantastic health and capacity management tool for monitoring virtualised environments. vCOps collects and analyses information from multiple data sources and uses advanced analytics algorithms to learn and recognise the normal behaviour of every resource it monitors. It also provides capacity planning and reporting, as well as right-sizing recommendations for undersized and oversized VMs.

Although this tool may sound too good to be true, in my experience it is often let down by limited understanding and by not being tuned for a customer’s own environment. As such, the goal of these next few posts will be to provide some quick tips and advanced vCOps tuning knowledge that can help with some common problems such as alert spam and overoptimistic capacity reports.

Alert Management ->

One piece of feedback I often receive about vCOps is that after deployment too many alerts are active in both the vSphere UI and the Custom UI. Such a high number of alerts can be daunting, and as such they are often all ignored. Badge alerts usually make up the majority of alerts, for example Workload, Capacity Remaining and Time Remaining. One of the main reasons for this is that badge alerts cannot simply be cleared; the badge level state needs to be adjusted in the appropriate policy. We will discuss this in a later post, as policy tuning, and how a policy is then applied to the appropriate groups and objects, is the heart of vCOps tuning.

For now, let’s discuss some quick wins that can be made to reduce the number of active alerts.
Alerts are generated when a badge changes from a healthy state (green) to a lower state (yellow, amber or red) or a fault is generated. One thing that many people will notice is that OOTB many alerts are generated for Time and Capacity Remaining. These Risk alerts (capacity management) often fill the Alerts window with hundreds or thousands of warnings of a particular resource running out (we will cover capacity management and tuning in a later post).
One of my main quick wins is to disable the Time Remaining and Capacity Remaining alerts altogether in the Default Policy. The rationale behind this is simple: capacity management is a task I perform daily or weekly as a scheduled activity, and I do not need to be alerted on it. Leave the alerts for things I need to focus on now, not in a month’s time.

As such, I have provided below my recommendations for the Configure Alerts section of a vCOps policy, both default and custom made. As you can see, Time Remaining and Capacity Remaining have been unchecked. It is also important to note that unchecking an alert does not prevent the badges from degrading to lower states, and therefore lowering your Capacity or Time Remaining score on objects. It simply stops that object from generating an alert related to that badge (we will discuss badge tuning in a later post).

Default Policy Alert Recommendations

  Badge                Infrastructure Objects   VMs         Groups
  Workload             Checked                  Checked     Checked
  Anomalies            Checked                  Checked     Checked
  Time Remaining       Unchecked                Unchecked   Unchecked
  Capacity Remaining   Unchecked                Unchecked   Unchecked
  Stress               Checked                  Checked     Checked
  Waste                Unchecked                Unchecked   Unchecked
  Density              Unchecked                Unchecked   Unchecked
  Faults               Checked                  Checked     Checked

You will notice above that Workload, Anomalies and Faults have been left checked, as these minor badges directly affect the operational health of an object and should be alerted on; it means attention should be paid to the object now. I have also left Stress enabled; however, I see this as optional depending on how well tuned the Stress policies are for your environment.

Stay tuned for future posts on Policy configuration, Capacity Management configuration, Intelligent Group creation and much more.

Java Version 7 Update 51 and the vCO Client

You might have noticed that Oracle recently released Java Version 7 Update 51. Like most recent Java updates, this one is classified Critical and includes patches to close some 36 security vulnerabilities as well as some changes in functionality. As nearly all of these vulnerabilities are remotely exploitable, it’s highly recommended that you apply this update. But beware……

If you have already applied this update you might have discovered that it breaks the vCenter Orchestrator Client. On attempting to run the vCO Client you are presented with this not very helpful message:

vcoerror

 

While you could roll back to an earlier version of Java to get around this, it’s not an ideal solution as Java vulnerabilities are commonly targeted by malware authors and other miscreants. Running on the latest version of Java is always a key recommendation in terms of security so I thought I would dig a bit deeper into this problem to find a better solution than running an insecure version of Java.

Clicking on the Details button of the above dialog provided the following information:

vcoerrordetail

 

From that we know that Java is unhappy about a Permissions manifest attribute not appearing where it would like to see it. So what changed between Java versions that caused it to refuse to run the app following the update? The release notes for Java Version 7 Update 51 at http://www.oracle.com/technetwork/java/javase/7u51-relnotes-2085002.html mention the following change:

  • Require Permissions Attribute for High Security Setting

This sounds promising. Java has defaulted to a High security setting for all applets and Web Start applications since Update 11, so by default this is the security context that will be used for the vCO Client. With Update 51, Oracle has also mandated that all applets running in the High security context have a Permissions attribute. As the vCO Client was released prior to Java Update 51, it doesn’t have this attribute, so Java refuses to let it run.

So now that we’ve found the problem how do we fix it? As I said earlier you could roll back to the earlier version of Java but that would leave you with a whole lot of Java vulnerabilities that you don’t want or need. Another option would be to set the default Java security to Medium instead of High. This is also not ideal as it would result in applets from all web sites running in the Medium security context. As you more than likely don’t control these sites or the content they contain and push down to your browser, it’s best to maintain the default High security setting.

The fix is actually pretty simple. Oracle provides you with a method by which you can specify sites that are not subject to the enhanced Permissions attribute requirement. To exclude your vCO server from these enhanced checks, perform the following steps:

  • Open the Java Control Panel and go to the Security tab. At the bottom of the dialog you will see the current Exception Site List. Click the Edit Site List button.
    JavaControlPanel

 

  • You should now see the Exception Site List dialog. Click the Add button.
    JavaExceptionList

 

  • In the exception entry dialog, enter the URL for your vCO Server. Note that the Java Exception list is protocol, address and port sensitive so you must specify https://<fqdn>:8281 for a default vCO installation.
    JavaAddExceptionSite
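Incidentally, the Java Control Panel is just editing a plain text file of URLs (exception.sites). If you need to push this out to many admin workstations, something like the sketch below could do it. The path shown is the usual Windows location for Java 7, the FQDN is a placeholder, and it assumes Java has already created its Deployment\security folder, so verify all of this for your environment.

```python
# Append the vCO URL to the Java Exception Site List file directly.
# Path and FQDN are assumptions; verify them for your environment.
import os

site = "https://vco.lab.local:8281\n"    # placeholder vCO FQDN
path = os.path.expandvars(r"%USERPROFILE%\AppData\LocalLow\Sun\Java"
                          r"\Deployment\security\exception.sites")

with open(path, "a") as f:
    f.write(site)
```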

 

That’s it. Your vCO Client should now be working correctly and securely.