
Friday, 25 March 2016

Unbox, Power On, Watch the RackHD Magic Begin!

What am I on about, you are probably asking? I have had great fun over the last week getting down and dirty with one of the headline EMC {code} projects, RackHD. This is a very cool solution for taking care of the low-level activities of bare metal infrastructure. Think configuration management for all those physical pieces in a rack in the data center: a way to manage everything from the firmware through to the personas of these devices. Very good stuff, and it is Open Source!

I guess first up I should give some context around what I am referring to with EMC {code}. EMC can be perceived as a player in the more traditional areas of IT infrastructure; EMC {code} is one example (of many) of how far that perception is from the current-day EMC that I work for. EMC {code} is the landing place for developer enablement and open source projects at EMC.

This is your one stop shop for open source projects supported by EMC, plus community projects by EMC staff, partners and customers. It also includes training content and projects aimed at enabling the developer community across the wide spectrum of those new Mode 2, Platform 3, Cloud Native, SDDC technologies (you get the idea). I strongly encourage anyone to have a look and join in. There are some very smart people in this community, and they are very accessible via tools such as GitHub and Slack.


Anyway, back to RackHD: what is it really? We know there are a number of configuration management solutions out there today, such as Puppet, Ansible, Salt, Chef, etc., but these tend to have a common trait: they look after the nodes/hosts/clients through remote agentless access (SSH, WBEM, etc.) or via agents installed on the target device. This of course requires that the device is ready to accept remote requests or have agents installed. In other words, they take control and configuration management of the platform once it is operational. A few, such as Puppet with Razor, have the ability to control the physical world, but not as an all-inclusive service with multiple-action workflow smarts.



Looking at RackHD you have a solution that provides:


  • Bare metal configuration management across the physical infrastructure stack. So not just the compute but all the bits that go in a rack (hence the name), including:
    • The compute
    • The network
    • The storage
    • The enclosures that may contain the nodes
    • The racks themselves and PDUs (remember, just plug the thing in)
  • A strong but intuitive RESTful API (see the sketch after this list)
  • Aligns to the 'Infrastructure as Code' model. Allows node definitions, associated workflows and SKUs to be fed in as JSON files via the API (or UI if that's your preference)
  • A fully self-contained service providing all the mechanisms required to control a physical environment, such as DHCP, PXE, TFTP, HTTP, etc.
  • A scale-out architecture that can grow with your environment's needs
  • Full support for dynamic discovery and physical control through interactions with hardware via physical interaction standards like IPMI, SNMP, BMC, DMI
  • Ongoing low-level configuration tracking and management through Pollers
  • That one stop shop for physical telemetry data and alerts
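
To make the API point concrete, here is a minimal sketch of poking a RackHD instance with curl. It assumes RackHD is listening on localhost:8080 and uses the 'current' API paths from the 1.1-era builds; the workflow graph name is just an illustrative example, so check the RackHD documentation for the endpoints and graphs in your build.

# List the nodes RackHD has discovered
curl http://localhost:8080/api/current/nodes

# List the pollers collecting telemetry from those nodes
curl http://localhost:8080/api/current/pollers

# Kick off a workflow against a discovered node (graph name is an example)
curl -X POST -H "Content-Type: application/json" \
     -d '{"name": "Graph.InstallCentOS"}' \
     http://localhost:8080/api/current/nodes/<node-id>/workflows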
This stuff is cool. Watching a node be discovered, have a profile assigned and then have provisioning actions kicked off never gets old, whether it is zero-touch provisioning for network switches or building out a Docker cluster via Kickstart scripts, Ansible modules and Docker-Machine (did I mention that there is a Docker-Machine driver?). It is great to watch.

The process of discovery and workflow execution

The best way to learn about tools like this is to start playing with them, and luckily the EMC {code} team have made that very easy for all of us with a fully functional Vagrant demo setup that runs on VirtualBox off your laptop. I highly recommend giving this a go as I definitely have had some fun with it. The guys have also written a docker-machine driver that can be tested against RackHD within Vagrant. Get it from GitHub now and in under an hour you could have your first workload up and going!
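
For the impatient, the quick start looks roughly like this. The repository URL and directory layout here are assumptions from memory, so follow the README in the RackHD GitHub organisation for the authoritative steps.

# Grab the RackHD sources, which include the Vagrant demo environment
git clone https://github.com/RackHD/RackHD.git
cd RackHD/example

# Bring up the demo RackHD server (requires Vagrant and VirtualBox installed)
vagrant up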

If you want to see this in action, Kendrick Coleman did a great demo video on YouTube.




http://bit.ly/rackhd-docker

Sunday, 21 February 2016

Docker-machine and VMware vSphere Clusters, Lessons Learned

I have been doing a bit of work with various flavours of containers lately, which comes with the highs and lows of being a technologist. One interesting thing is that with the recent updates to docker-machine, my setup for binding and deploying to our vSphere environment no longer worked.

This issue presented itself when attempting to provision a new instance with the following error:

Running pre-create checks...

Error with pre-create check: "default host resolves to multiple instances, please specify"

Digging deeper, I found that where I was using the CLI argument / environment variable 'VSPHERE_COMPUTE_IP' to specify an ESXi host to bind to, you now need to use 'VSPHERE_HOSTSYSTEM' (or as a CLI argument, '--vmwarevsphere-hostsystem'). As this is new, documentation is fairly light, as are real examples. What the documentation does provide, though, is two syntax examples as follows:

for a cluster:             VSPHERE_HOSTSYSTEM=<Cluster Name>/
for a stand-alone host:    VSPHERE_HOSTSYSTEM=<Cluster Name>/*

Using those two syntax examples in the lab, I first attempted VSPHERE_HOSTSYSTEM="UCS8#General Use/", but this resulted in the same error message. It was also notable that the examples were pushing to single-node clusters, whereas the test cluster I was targeting has 4 nodes.

Anyway, to cut straight to it, the documentation is a little incomplete. You do still need to target a specific host in the cluster; docker-machine will not pick one for you. As such, in a cluster with multiple hosts the actual syntax is:

VSPHERE_HOSTSYSTEM=<Cluster Name>/<Host Name>

So for my setup it looks like this:

VSPHERE_HOSTSYSTEM="UCS8#General Use/esxrack01.vce.asc" 

I guess while I am here another hint does not hurt. The other piece of lovely syntax is the optional variable for targeting a specific resource pool: VSPHERE_POOL. The syntax for this is:

VSPHERE_POOL="/<DataCentre>/host/<Cluster>/Resources/<Resource Pool>/..."

So again in our lab setup the variable looks like this:

VSPHERE_POOL="/ASC/host/UCS8#General Use/Resources/Containers" 

Docker-Machine and vSphere together are a great team and well worth a play, so I hope this helps.

I am also fortunate to be doing a bit of work with VMware's vSphere Integrated Containers (VIC), which takes these issues away and provides a very strong alternative to other scheduling/clustering services such as Docker Swarm within your vSphere environment, but more on that in another post later.