First, some overview ramblings:
OK, first of all let me say this: I am excited about the changing landscape in the industry. The shift to containerisation excites me in two ways. Firstly, as an automation-focused engineer, it seems like the natural progression / evolution of the current platform (incoming alert: Unikernels). Secondly, the technology landscape itself is exciting, with lots of new tools (or is that toys?) to play with!
That said, I have been fortunate to be working with VMware (shout out to Roman Tarnavski @romant) for a while now on their rapidly evolving 'Cloud Native' program. With a couple of recent announcements, their two primary offerings, 'vSphere Integrated Containers' (VIC) and 'Photon Platform', are both now open source on GitHub, so everyone can have a go. A key point of differentiation between the two is that VIC leverages your existing VMware vSphere platform to host container-centric workloads on a 1:1 basis (one container per Virtual Machine), while Photon Platform's Photon Controller still leverages ESXi (today, anyway) but has its own control plane.
Photon Controller is a replacement for vCenter focused on the operational requirements of container workloads, not virtual machines. Why, you may ask? An easy way to think about it is that the rate of change and scale usually associated with a container-aligned environment, versus the long service life and different scale models associated with traditional Virtual Machines, drives a different management model.
If you are all in for containers, or alternatively have a substantial container footprint today (or will soon), you have some considerations when assessing VIC versus Photon Platform:
- Will you be limited by the sizing maximums of vSphere? Think vCenter: is 10,000 VMs enough when 1 VM equals 1 container, and what about the change rate of those VMs?
- You may also be questioning the commercials around vSphere in a container world: do you really need those advanced availability services (e.g. HA/DRS)? I would say no. The idea of containers is pretty simple: let them scale out, and put availability controls north of the service, such as with NLB (I know that is an over-simplification), or south with stateful data services (which should be scale-out themselves, but that is for another day).
Neither of these statements covers the "why VMware anyway?" question. Isn't this new world order a shift away from the vendors we typically associate with infrastructure? Doesn't the consumer in this new world not care about infrastructure? All I would say is that there is a world of difference between developing in a sandbox environment and having your workload run on a production-class platform, with all the controls, monitoring and reporting that organisations have in place today for traditional workloads. I also do a lot of testing in the Cloud and directly on my laptop; that is the beauty of containers, you can just shift them around. But to go to production, there are certain requirements that a lot of organisations need to comply with. This may be due to regulatory demands, data sovereignty requirements, or cost controls, to name a few.
The idea of either offering is to provide application developers, operators, <insert whoever else here> the same user experience, with the same toolsets, but aimed at a different platform. That platform can then have the same operators (VMware Admins) look after it in the same / similar way that they do their vSphere environment. Everyone wins :)
Anyway, next I will look at the install and configuration stage. There is some great content on GitHub and by other bloggers such as William Lam, but I am finding some differences between my own experiences and what others have documented.
Some quick notes:
I kept getting file upload errors when the installer tried to upload the Photon VIB to the management ESXi host (see these issues in the log /var/log/esxcloud/deployer/deployer.log). I still have not isolated where the issue is, and have not been able to track it down in the source code yet. What I did do was simplify my naming conventions, and the issue went away. I noticed all the blog samples were pushing into default, bare ESXi hosts, whereas I had aligned mine to my own pseudo naming standards. In the end I simplified my environment by taking the hyphens out of my port group and datastore names, and was then able to install.
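If you hit the same thing, the deployer log is the first place to look. A minimal sketch of how I went digging (the grep patterns here are just examples to narrow the output, not exact log strings):

    # On the Photon Controller management VM: follow the deployer log
    # while the installation runs
    tail -f /var/log/esxcloud/deployer/deployer.log

    # After a failed run, search the log for the VIB upload failures
    grep -i -E 'vib|upload|error' /var/log/esxcloud/deployer/deployer.log | less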
Uploading Images: HTTP 413 Error
To enable management clusters (schedulers) you need to upload the corresponding disk images to the Controller (Mesos, Kubernetes or Swarm). I found that, regardless of the image, I was getting an NGINX 413 error, 'Request Entity Too Large'. You don't have to be a Google ninja to identify the error quickly, with the commonly reported resolution being to set the 'client_max_body_size' value in the 'nginx.conf' file. On the controller VM the NGINX service runs as a docker container called 'ManagementUi', so it was easy to pull the configuration file down with a 'docker cp' command to have a look, and lo and behold, the setting is not there!
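For reference, this is roughly how I pulled the config out for inspection (the path to nginx.conf inside the container is an assumption based on the standard NGINX layout; adjust if yours differs):

    # Find the NGINX container on the controller VM
    docker ps | grep ManagementUi

    # Copy its config out of the container for a look
    # (/etc/nginx/nginx.conf is the usual location)
    docker cp ManagementUi:/etc/nginx/nginx.conf ./nginx.conf
    grep client_max_body_size ./nginx.conf   # no match: the setting is absent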
OK, now that I have chased that rat down the drainpipe, I can say that I was on the wrong track. NGINX looks after the User Interface, but not the API. Looking at the docker containers, there is one running HAProxy, which is the load balancer service, called, you guessed it, 'LoadBalancer'. Jumping into this VM and looking at the file 'haproxy.cfg', you can see that the API is served on ports 28080 and 9000. Switching the photon CLI target to either of these ports let the images upload successfully (e.g. photon target set http://10.63.251.150:9000).
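Roughly what that looked like end to end. The haproxy.cfg path inside the container, the image filename and the 'photon image create' arguments are assumptions from my environment (your target IP will certainly differ):

    # Inspect the load balancer config to find the API frontends
    docker cp LoadBalancer:/etc/haproxy/haproxy.cfg ./haproxy.cfg
    grep -E 'frontend|bind' ./haproxy.cfg    # shows listeners on 28080 and 9000

    # Point the photon CLI straight at the API port and retry the upload
    photon target set http://10.63.251.150:9000
    photon image create kubernetes-disk.vmdk -n kubernetes-disk -i EAGER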
I took the long way, but got there in the end. Time to start playing!!!