
Tuesday, 24 May 2016

VMware Photon Platform, Notes from the Field - Entry Two

Time to Build The Cluster Services

Is it a bird, a plane, or the Photon Platform? In the early days of the Photon Platform announcements I had it pegged, in my own mind, as a purpose-built alternative to 'vSphere Integrated Containers' (VIC). Keep in mind that VIC's sole purpose in life is to enable containers in vSphere, with a cool concept of a 1:1 alignment of VM to container. This means a container can have all the flexibility inherent in a container with all the control and security of a VM. Alright, come on down Photon Platform: just bigger, better, more focused, right? Well, not really....

So one thing that strikes you with the Photon Platform is the flexibility of choice it provides. From a scheduling cluster manager's perspective you can utilise Swarm, Kubernetes or Mesos out of the box. On the other side, this is ESXi after all (marketing slides aside that keep alluding to the slimmed-down hypervisor), so it will happily run virtual machines and containers side by side (well, containers within VMs anyway).

Being able to support both is important, as not all workloads are created equal in this new age. Let's be honest, not everyone working in this new 'Cloud Native' world has grown a beard, rides a bicycle and wears skinny jeans; there is a variety of personalities that need to be catered for. What is common is that we don't all need the assurance and crutches that come with the more traditional platforms. If I fall over it is OK, I will just get myself back up. In the traditional world I would have crutches, cushions and people watching me, ready to catch me if I stumble. This is because in that world I am seen as needing to be treated like a VIP (i.e. Very Important Process), requiring all the care of advanced services to keep me going (and the costs associated with those services). Sometimes a more traditional approach may be preferable to containerising (is that a word?) the workload, so virtual machines live on even if they are seen as cattle (I'll let Duncan Epping define that for you).



This mixed capability is emphasised by the fact that all the management VMs are hosted on the hypervisors flagged as Management hosts (you designate whether a host is 'Management', so it participates in the control plane, 'Cloud', which only provides hosting capacity, or both). Anyway, myself for one, I'm a big fan, as including the ability to host both containers and VMs provides that flexible, cost and scale focused platform I want without having to look at alternatives such as OpenStack. Add to this the fact that it also includes support for the three main container and workload cluster scheduling services out of the box, and you're covering a lot of bases for possible consumers of Photon Platform.

My personal choice is to go forward with Mesos at this stage, as it gives me the open choices I want when paired with Marathon. I have played with all three on Photon though, as all it takes is to import the three images and enable the three cluster types (a rough sketch of that below). I am not here to tell you how to do this or how to create a cluster, as there are plenty of blogs that do that; I just want to give my view and some lessons I learnt on the way that may help someone out there.
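For reference, enabling a cluster type boils down to two operations against the Controller: upload the matching PhotonOS image, then flag the deployment as supporting that cluster type. A minimal sketch with the photon CLI is below; the image filename and the IDs are placeholders from my lab, and the flags are from my memory of the v0.8-era CLI, so check 'photon --help' against your build.

    # Upload the Kubernetes-flavoured PhotonOS image (filename is a placeholder)
    photon image create photon-kubernetes-vm-disk1.vmdk -n photon-kubernetes-vm.vmdk -i EAGER

    # Find the deployment, then flag it as supporting that cluster type
    photon deployment list
    photon deployment enable-cluster-type <deployment-id> -k KUBERNETES -i <image-id>

    # Repeat for the other two images with -k SWARM and -k MESOS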

It is important to note that when you create clusters, they align to their traditional model and not some hybrid like in VIC. What I mean by that is that if you create a Docker Swarm cluster, the slave nodes will be running the containers in a shared multi-node format. The nodes use PhotonOS as their operating system, but we don't have the 1:1 container-to-VM alignment of VIC. This is important to consider, as your sizing of those nodes should reflect your needs. There are default 'Flavors' (remember, Flavors define the resource configuration profiles) aligned to the different cluster types, but they can be overridden at the API or CLI level when creating a cluster (sketch below).
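As an illustration, overriding the default sizing looks something like the following: define a custom VM flavor, then reference it at creation time. The flavor name and cost values are invented for the example, and the '--vm_flavor' flag is my recollection of the v0.8 CLI rather than gospel.

    # A deliberately small flavor for lab-sized nodes (cost values are examples)
    photon flavor create --name cluster-small --kind vm --cost "vm.cpu 1 COUNT, vm.memory 2 GB"

    # Reference it at creation time instead of the cluster type's default flavor
    # (network parameters such as --dns/--gateway/--etcd1 omitted; see the IP note further down)
    photon cluster create -n swarm-lab -k SWARM -s 2 --vm_flavor cluster-small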

This system is built as a multi-tenant service from the ground up, so when you create workloads they need to be placed within a Project assigned to a Tenant. As you would expect, a Tenant gets an allotment of resources (vCPUs, memory, disk capacity, VMs, etc.) which are then subdivided into Resource Tickets (think Gold, Silver, Bronze classes or whatever). A Project is then assigned a Resource Ticket, out of which it carves a further resource reservation / limit. All the way down this logical structure you can add access controls via the integration with VMware Lightwave.
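To make that hierarchy concrete, here is roughly how it falls out of the CLI. The tenant, ticket and project names plus the limit values are all made up for the example, and the flag spellings should be checked against your CLI version.

    # Tenant, with a Resource Ticket carved from the overall platform capacity
    photon tenant create engineering
    photon tenant set engineering
    photon resource-ticket create --name gold --limits "vm.memory 100 GB, vm 500 COUNT"

    # A Project then takes its own reservation / limit out of the ticket
    photon project create --resource-ticket gold --name web-tier --limits "vm.memory 40 GB, vm 200 COUNT"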



So importantly, as a tenant I can create my own cluster service to host workloads, or just create VMs directly from virtual appliances or virtual disks imported into the system. I can then control my VM workloads via the CLI and API of the Photon Platform Controller VM, as you would have normally via vCenter, or use the cluster manager of choice to control the workloads within them.

As a final statement on the installation of the clusters, what I can say is this. I have built all three flavours using simple sandbox methods as well as going through the full manual process (more than once, as I like punishment), and there is a highly compelling value proposition here. To be able to create these services as you require them, on a production-class platform, so simply, is huge. There will be challenges with the way it is done now, such as how the bundled services stay close to release parity (hence the compelling part of it being open source), but this is a new-era platform that brings together the traditional on-premises controls and the new cloud native era's service requirements in the one stack. This is great stuff for a version 0.8 release!

Some Notes On Cluster Enablement:


Be Wary Of Resource Stinginess
My nature is to give less rather than more when allocating resources. This sent me on a dance through the requirements of the various clusters, as I did not have enough resources aligned to my Project. Just be aware of the sizing needs (they vary, of course, depending on your own configuration requirements) and, if you are resource constrained, use custom 'Flavors'. I prefer to bang my head on the wall, so I got very used to the 'photon cluster create' operation being closely followed by the 'photon cluster delete' operation to remove the failure and start again (side note: it didn't like me trying to recreate deleted clusters with the same name; I never pinned this down, but got into the habit of using unique names on each attempt, as in the sketch below) :)
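The loop I kept repeating looked roughly like this (cluster names and the ID placeholder are mine, and the create flags are trimmed for brevity):

    # Attempt one fails part-way through (an undersized Project in my case)
    photon cluster create -n mesos-01 -k MESOS -s 3 ...
    photon cluster list

    # Clean up the failed cluster, then retry under a NEW name
    photon cluster delete <cluster-id>
    photon cluster create -n mesos-02 -k MESOS -s 3 ...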

Cluster Creation IP Address Requirements
Within the instructions it is emphasised that DHCP needs to be disabled / not present on the management network, which is where the clusters are installed. Ironically, if DHCP is disabled the cluster deployment fails with an 'Unable to Obtain an IP Address' error. To progress, I re-enabled DHCP on the management network (the one the Photon Controller is installed into). The more accurate requirement is to ensure the static IP addresses you provide (i.e. for the 'zookeeper' servers in Mesos and the 'etcd' servers in Kubernetes and Swarm) are not also served within the active DHCP scopes.
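In other words, the static addresses handed to cluster create just need to sit outside the DHCP pool. Something like the following worked in my lab; all addresses are examples, and the flag names are again from the v0.8-era CLI, so verify them against your build.

    # DHCP serves .100-.200 on this subnet, so the static addresses sit below the pool
    photon cluster create -n mesos-02 -k MESOS -s 3 \
      --dns 10.0.0.2 --gateway 10.0.0.1 --netmask 255.255.255.0 \
      --zookeeper1 10.0.0.20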

Those Damn Certificates
Maybe you're lucky and your company provides an open network where all traffic is created equal. That is not the case in mine; we trust no-one and inject ourselves into every certificate coming into the network. Any problems with this are normally mitigated by adding our own certificates into the trusted root, but that is difficult for environments that are created dynamically, as these clusters are.

When you create a cluster, PhotonOS-based VMs are created corresponding to your configuration's requirements (i.e. how many slaves, how many provide the interconnect service) and are then configured by templates contained on the Controller. Where certificate trust becomes critical is that the services themselves run as containers within those VMs, so the images need to be downloaded as part of the initial execution. With our interception in the way, the end result is that the cluster creation operations fail with 'Time Exceeded' errors.

Anyway, it's an easy fix! I edited the template files contained in the controller directory '/usr/lib/esxcloud/deployer/scripts/clusters' (they have descriptive names such as 'kubernetes-master-user-data-template') to inject our certificates into the VMs. This at least made that problem go away, and you could of course do the same to apply any other customisations that may be required. It would be nice to see an option in the future to inject certificates as part of the process. I also provided this same feedback for 'vSphere Integrated Containers', as I had the same problem there!
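For the record, the shape of the edit was along these lines. The templates are cloud-init style user-data, so a write_files entry can drop the corporate root CA into the VM before the container images are pulled. Treat this as a sketch: the exact template format (and whether a write_files section already exists to merge into) will depend on your Photon build, and the certificate path and content below are placeholders.

    # On the Photon Controller VM
    cd /usr/lib/esxcloud/deployer/scripts/clusters
    cp kubernetes-master-user-data-template kubernetes-master-user-data-template.bak

    # Merge an entry like this into the template's user-data (do not blindly
    # append if a write_files section already exists -- merge into it instead):
    #   write_files:
    #     - path: /etc/ssl/certs/corp-root-ca.pem
    #       permissions: "0644"
    #       content: |
    #         -----BEGIN CERTIFICATE-----
    #         ...your corporate root CA...
    #         -----END CERTIFICATE-----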




Tuesday, 3 June 2014

It's Not Just About Backup!

Why Have Different Levels of Protection?

I like the saying that comes out when people refer to an old classic car: 'they don't make them like that anymore'. The thing is, the statement should be followed with 'thank goodness'. Can you imagine putting up with the level of reliability, quality, safety and comfort you would have got in years gone by from a car you purchased from a dealer today?

Same goes with technology. I still wake up some nights and shudder when remembering the old tape backup routine. This used to go one of two ways. The first was the nightly scheduled tape shuffle, which thankfully I was not tasked with. The other was the pre-rollout backup we would run before pushing any new service out. This used to comprise us kicking the job off, then trotting down the road to a golf driving range to knock a few buckets of balls into no man's land to kill the time (usually hours).

So looking at that, we had two different use cases but only one method at our disposal: the inglorious, forever unreliable tape backup. Fast forward 15 years (maybe more, but I keep that quiet) and we have many different forms of protection at our disposal, including storage replication, snapshotting and tape backup (but hopefully VTL, not those horrible tapes).

All these options serve a purpose and work together nicely. If we were to classify the different use cases into buckets they could be:

Business Continuity (DR) = Remote Replication, constant protection driven by RPO and RTO requirements

Operational Protection = Local Snapshotting (or continuous protection as is provided with RecoverPoint) typically done by set interval (storage level) or self service (VM level)

Long Term Retention = Traditional Backup style typically run once a day

Each of these has its purpose and the specific business requirements that drive its implementation. Interestingly, I see a lot of the BCM and Long Term Retention being positioned, but little of the Operational Protection being catered for. Back to my scenario at the start: if I could have had snapshotting at my disposal (or, if I was real lucky, EMC RecoverPoint Continuous Data Protection), my golf practice (aka time wasting) would not have happened, as we could have snapshotted the services we were updating and started the rollout straight away.

Self service snapshotting is easily available these days thanks to two things:


  1. Virtualisation, with VM-level snapshotting (checkpointing in a Hyper-V world); see the quick sketch after this list
  2. Automation tools for storage (EMC's Storage Integration Suite is a good example)
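As a taste of how low the bar is these days, here is that pre-rollout protection point against vSphere. I'm using the open-source govc CLI purely as an illustration, and the VM and snapshot names are made up.

    # Take a named snapshot before the rollout...
    govc snapshot.create -vm app01 pre-rollout

    # ...and roll straight back to it if the update goes sideways
    govc snapshot.revert -vm app01 pre-rollout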


So that is all cool for those known events, but what about the unknown ones, such as data corruption? That is what scheduled snapshots protect you against, providing a much more granular way to protect your systems beyond the typical 24-hour cycle of Long Term Retention. You may only retain a short set of snapshots, say 1-7 days' worth, but they can provide good peace of mind for those services deemed worthy.

I do realise that replication can also provide a way of rolling back, but typically it is 'to the last change' or committed IO operation, so a corruption could easily be on the remote site as well as the source. Also, replication would require that traffic to come back down the wire, which adds time to the recovery / rollback process.

Another benefit of Operational Protection is that it can provide an easy / quick way for copies of datasets, such as those within a database, to be presented to an alternate location, for example from production to a test/dev instance.

Anyway, I got that off my chest so I feel better. Operational Protection = Good!

Just as a last note on this, I did not include High Availability (HA), as I am more looking at where the old functions we used tape for have evolved. There is some really cool stuff that can be done with stretched high availability that spans physical locations, as is supported with vSphere Metro Storage Clusters and Microsoft's Failover Clustering with products such as EMC's VPLEX, but that is a big enough topic on its own.