So, the registry is all up and running on Linux, information being passed in and out and everything looking very good. One small issue: that computer isn't always on, it isn't directly attached to the internet in any way, and I don't spend that much time on it. If you want to use something to save you time, effort and bandwidth, all three of those would mark that option as a no go.
So, I decided to start building the same on my Mac. As Mac users know, Docker isn't quite as easy there. Originally you would start Docker using boot2docker: a small virtual machine instance would be spun up and Docker would talk to and build on that. As with the Linux version, there have been some dramatic changes very recently, the first of which is a tool called Kitematic. This is a GUI that kicks off boot2docker and allows you to select an image, search the hub or attach to your own repos. It is very nice and certainly a great way to get into standing things up quickly in a Docker container. It's tagged as beta, and there are a few issues I have come across around reconnecting to the boot2docker instance or killing it off, but other than that it's very polished and I would recommend it to most. Having said that, it's just not for me. I spend so much time in the terminal, and anywhere production or development wise I am going to run all my Docker commands directly from the command line, so a GUI feels very unnatural and uncomfortable. I might get used to it but I don't really want to. I think that might have been the feedback from many people, as even more recently they have released the Docker Quickstart Terminal. So far I have only managed to get it working correctly twice, so not really something that can be relied upon.
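For reference, the classic boot2docker flow from the terminal looked roughly like this (a sketch from memory, so exact output will vary by version):

```shell
# One-off: create the boot2docker VM, then boot it
boot2docker init
boot2docker up

# Point the local docker client at the daemon inside the VM
# (this exports DOCKER_HOST and the TLS cert variables)
eval "$(boot2docker shellinit)"

# From here the normal docker CLI just works against the VM
docker ps
```

This is essentially what both Kitematic and the Quickstart Terminal are automating under the hood.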
So, once again lots broken and a balance to be had between trying to fix things or working around them. Looking at this as an opportunity, another option was chosen. There are a lot of new "container" type operating systems that have been created and released fairly recently, including Ubuntu Snappy, RancherOS, CoreOS and Atomic. My personal belief is that CoreOS is likely to have the most benefit for my future, so that's what I decided to focus on. A look back later at some of the other options mentioned will make for a good comparison of the pros and cons of each.
Installing CoreOS in VirtualBox is pretty easy to do: download the ISO and start up the virtual machine. It should auto log you in. Don't get too excited, it's not installed just yet; you just have a live version of CoreOS running.
The instructions to install can be found here but basically follow the single line command of
coreos-install -d /dev/sda -C stable -c ~/cloud-config.yaml
where the cloud-config.yaml contains, at the very least, the #cloud-config header and an SSH key you can log in with:
#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGdByTgSVHq.......
Follow the link above for all the other commands that you can add into that file.
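To give a flavour of what else can go in there, here is a slightly fuller sketch. The field names are from the CoreOS cloud-config docs as I remember them, and the hostname and key are just placeholders:

```yaml
#cloud-config

hostname: coreos-dev

ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQ... user@host

coreos:
  units:
    # Make sure the docker daemon is started on boot
    - name: docker.service
      command: start
```

The units list is the interesting part, as it lets you drop in arbitrary systemd units at install time.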
That got me up and running, Docker running as well. Now onto the next problem.
Published at: 2015-10-25 22:10:15
And finally, after a little detour due to time and energy, we are back to blogging about technical things. As the title has already said, this is part 1 about Docker, but it's not going to stop there.
I read a blog recently, while pondering my next steps in working life and trying not to get swamped by all of the new things on my list of things to learn; it's all about the place you want to be and all of the mini wins in between. So, Docker is just the start of phase 1, phase 1 being me finally building the library application that I have been trying to build for years.
Why would I need Docker then? Well, just small steps in learning. I want to run the database that I will need in a container, to have quick access despite being at the end of a slow internet connection or being on the train. To make that access possible I decided that I needed a registry. This is something I have played with previously; it wasn't the best experience I have ever had, and I even contributed a few patches back, but it was never really reliable and didn't really work in the way that was expected, so we abandoned it. Shortly after, Docker got a world of financing, and the registry was one thing that got a lot of love, and by a lot, I mean a shit ton. A new API version and lots of new features around security.
I jumped onto my Ubuntu desktop and pulled the latest registry, got it up and running with a basic configuration. Happy with that, I set it up to offload the storage onto the local filesystem; that started up fine and behaved well with a basic query. That's where things started going a bit astray...
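For anyone following along, standing up the v2 registry with local filesystem storage is pretty much a one-liner (the host path here is just an example):

```shell
# Run the v2 registry, exposing port 5000 and bind-mounting
# a host directory so images survive container restarts
docker run -d -p 5000:5000 \
  -v /srv/registry-data:/var/lib/registry \
  --name registry registry:2

# Quick smoke test: tag an existing local image and push it
docker tag ubuntu:14.04 localhost:5000/ubuntu:14.04
docker push localhost:5000/ubuntu:14.04
```

`/var/lib/registry` is where the registry image keeps its data by default, which is why the volume is mounted there.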
I was noting all the steps I was going through, thinking what a great kick off it would be for the whole of this series of posts, but it was getting deeper and deeper and not getting any closer to a resolution. So, finally, I ran a delete and manual reinstall of Docker. Boom, everything worked fine...
So what was the issue? Simple: the version. Once again Docker has changed its own name, as far as the core of Docker goes. Despite checking many times that Docker was updated, it turns out that docker is no longer docker but docker-engine (that's now the third incarnation :)). So the moral of the story is: yes, the new registry image works really well, certainly none of the issues I previously had. More importantly, Docker is still very, very fast moving; if you duck away for a few months, double check everything is still right, possibly even consider clearing out and reinstalling as if you had never used it previously.
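On Ubuntu, at the time of writing, the clear-out looks something like this; package names have changed across the incarnations, so check what is actually installed first:

```shell
# See which incarnation of docker is currently installed
dpkg -l | grep -i docker

# Remove the older package names (docker.io and lxc-docker
# were the previous two incarnations), then install the new one
sudo apt-get remove docker.io lxc-docker
sudo apt-get update
sudo apt-get install docker-engine
```

Had I done that first, rather than trusting that `apt-get upgrade` had kept "docker" current, the whole detour would have been avoided.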
After all that and getting it up and running, for the amount of time that I actually get to spend on my desktop, there was the realisation it might be pretty pointless, so I moved to the MacBook, something else that has just had a lot of new updates... they fix one thing, move back in another area. More on that in part 2!!
Published at: 2015-10-24 21:15:09
I did say that I would avoid work books; I should clarify, I am going to directly avoid computer books. While that last statement isn't entirely true, I'm just not writing about them.... This book might be work related, but there is no reason that everything I gained from it can't be used on a daily basis.
So why this? Drive was very interesting, looking at how teams and employees can be motivated, and should be motivated, so that they will have a happy and productive job. This is something personal, to help me try and improve myself and help deliver a happy and productive team.
This, as the title sort of gives away, is a combination of what HBR consider the 10 best articles that they have published on the subject of emotional intelligence. What is emotional intelligence? Knowing and understanding your own emotions, and how that translates into managing relationships in an empathetic manner.
This was a great choice as a follow on from Drive. Drive is focused on how great teams need to function and how to deliver good work yourself; this focuses on how you manage your soft skills to get the best from those around you. It comes at it from many different angles: things that appear to work, how to get more information from others as to how you affect people, and, once you know where your weak points are, how to make sure you put measures in place to help you improve them.
At a crossroads in where I want to go, it was a great read for me. It showed where I might get assistance to confirm what I believe my weak points to be, and then, a few articles later, how to make sure I put measures in place. The articles in between confirm how emotional intelligence affects teams and companies and how to get the best from it.
A very enlightening read, and if you are in any way leading a team, well worth your time.
Published at: 2015-10-15 13:20:38
A very interesting day at Container Camp London, with lots of great talks and a few good conversations with vendors during the breaks.
As I sit back and reflect on what was covered and what kept recurring, a few things immediately stand out.
There is starting to be a realisation that it's nuts to spin up a VM just to run containers on top of it
There are lots and lots of tools out there, all with different ideas, all with overlaps, and no one is really sure if they are doing the right thing
LXD should be given more consideration (that's possibly more just my opinion)
Talks came from both large well-known companies, think Google, Docker, Joyent, and smaller, very container-specific companies. Each talk was around 30 minutes long and covered a wide variety of topics, something I think that ties in with the huge amount of choice that is out there to do all the different things that you can do.
The opening talk was given by Bryan Cantrill of Joyent. Particularly memorable as he spun up a container running DOS 6 and started Doom. All very cool and a great way to show off to a bunch of geeks.
A few of the talks were given over to management tooling, all of them different, all with a core focus on slightly different layers, but all with duplication and overlap of each other and of the other toolsets used around containerisation. Rancher, a SaaS service that allows you to see, build and deploy across all the big cloud providers as well as custom locations, works with Fleet, Kubernetes and Mesos, and looks really interesting. Their talk was given by Shannon Williams, VP of Sales and Marketing, but you might know him better as a co-founder of Cloud.com, which was sold to Citrix.
Kubernetes was demoed by Mandy Waite of Google. A very interesting look at how Google thinks of containers and, really, the size their developers work with: when your example starts with 5 and most people nod, and then gets flipped to 10000 because, well, that's what they normally need, it really gives an idea of the scale to which they need to go, and why, even though they were very much at the cutting edge of containers when the rest of us started to think about them, they got really excited at all the new found love. Later on, Alissa Bonas of Red Hat showed off a tool called ManageIQ. As I have said, there is a lot of overlap between tools, and there are certainly areas where this tool and Rancher overlap, but it had different functionality that was very interesting, such as giving information about the underlying machine that containers were running on. Something I can see being very useful if you have random container issues that you can tie back to a single VM. What is amazing about all of the tools that were demoed is that they are all open source, they are all looking for people to help with their development, and yet they are all so polished. You might expect that from a Google core product, but both Rancher and ManageIQ are very, very slick looking products and well worth investigating.
There was an excellent talk given by Arjan Schaaf on container performance in the cloud. Arjan had dedicated a large amount of time to comparing both Azure and AWS, trying to match machine sizes for network bandwidth, and then comparing the tools that make containers sing (Weave, Project Calico, Flannel UDP and Flannel VXLAN) and how well they performed on bandwidth, latency and CPU. Some very interesting results, some of which can be found on his blog.
We then moved onto Miek Gieben and a talk around DNS (well, he is one of the people that brought us the speed in 188.8.131.52) and a few other things that they do at Improbable.io. One of the cool things that came from his talk was the mention of a tool called Dinit, allowing you to run and control multiple processes within one container; yep, you're not supposed to, but it is possible....
Two very interesting talks around security, from two different sides: Ben Hall on his company's Scrapbook project and what giving people free access to a container can do, and then Diogo Mónica on the tools Docker are providing so you can be sure that the container you are running is what you expect. Scrapbook gives root access to a container, so a certain level of mischief was expected from a percentage of people: the usual things such as deleting stuff, poking around to see what was running and so on. The talk was more around some of the bigger issues that were found. By default, every container running on the same system can be found in the /etc/hosts file (to disable this, start the Docker daemon with --icc=false). You can kill off the host if you run shutdown in a container started with the --net=host flag. There are no CPU restrictions by default, and docker logs can grow very quickly; what a simple way to create a denial of service attack... kill all the CPU or fill the disk with logs (saying that, I hope everyone runs their logs in a separate partition?). Bandwidth can't be restricted. Useful tools include docker diff, letting you see what they have done to a container, and Sysdig, another great tool that was also spoken about, which is really useful for monitoring what's happening within a container.
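A few of those holes can already be closed with flags that exist today; a sketch of the sort of thing I mean (the numbers are arbitrary examples, and the log rotation options need a reasonably recent Docker):

```shell
# Daemon side: turn off inter-container communication
# (on Ubuntu this line would go in /etc/default/docker)
DOCKER_OPTS="--icc=false"

# Container side: cap CPU share and memory so one container
# cannot starve the host, and rotate the json-file logs so
# they cannot fill the disk
docker run -d \
  --cpu-shares=512 \
  --memory=256m \
  --log-opt max-size=10m --log-opt max-file=3 \
  ubuntu:14.04 sleep infinity
```

None of this is on by default though, which was rather the point of the talk.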
The second talk around container security, by Diogo, who works for Docker, was on how the Docker team have taken TUF (The Update Framework), which grew out of work on the Tor project's updater, and created a tool called Notary. While they are using it to sign containers, it also has the potential to be used for securely signing any type of package, possibly anything at all. Working with keys at multiple levels, including an offline key, it provides multiple layers of signing protection. On top of Notary they have implemented Docker Content Trust. More on this can be found here. For now it's disabled by default, but they are hoping to make it the default very soon. From the demo, it certainly looks like something worth investing time in now, and it should, hopefully, help remove some of the concerns around whether an image really is from where it says it is. One thing to remember: it will not protect you from what is running in the container, so if you run a container called I_will_own_your_network and it says it's from dirtyhacker, with Content Trust you can be sure that it's dirtyhacker that owns your network...
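Trying it out today is just an environment variable on the client side (the image name below is whatever you want to pull; treat it as an example):

```shell
# Opt in to Docker Content Trust for this shell session;
# pulls and pushes will then refuse unsigned or tampered tags
export DOCKER_CONTENT_TRUST=1

# This pull only succeeds if the tag has valid Notary signatures
docker pull someuser/someimage:latest
```

Unset the variable and the client drops back to today's trust-nothing behaviour, which is why they want it on by default eventually.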
Then there was this, or you can obviously get Sysdig Cloud if you want to pay and have really pretty dashboards.
So a very interesting day with lots of learning and I am very glad that I was allowed to go.
Published at: 2015-09-15 13:38:52
"The surprising truth about what motivates us" — a pretty impressive tagline and certainly something that this book delivers on.
Daniel H. Pink is a man very much at the forefront of modern psychology around people's thinking, motivations and what is needed to make people happy in the workplace.
This is a book that I had been thinking of getting for a few months. After attending a tech day with my current company, and it being recommended in one of the talks, I decided I needed to get it then and there. I would say that it has certainly been my best work development purchase of the year, and I have bought a huge amount of learning materials so far this year.
The book is set out in three very specific sections: a new operating system, the three elements, and the Type I toolkit.
The first section goes into the history of how companies have come to their methods of motivation and why that might not be the best way for everyone these days. It covers some of the tests, studies and discoveries that have happened over the years, and how jobs and types of work have changed, yet many companies still believe in the same methods of motivation and reward for getting the best out of people.
The second section, The Three Elements, is all about what is needed in many modern roles to make sure that employees are happy, productive and delivering good work on time. It is a surprising shift from what was, and still is, the best way of motivating some of the more manual, repetitive jobs of more industrial work.
The final section covers tools and tricks, mentioned throughout the book, that can be used to create the Type I person the book is aiming for. Most of these are quite interesting, although they do need some time and dedication to work through. It's something that I am working on, but already, with a few minor changes, I can see how things can improve.
Without doubt this book gets an instant 5 out of 5. Some of that rating is because it fits the working environment I am in; it might not be for everyone, but if you're doing anything above menial tasks, there should certainly be something useful here. Unlike most self-help books, this one is focused on a lot more science than most, so even if you just have a passing interest, grab a copy.
Published at: 2015-06-01 21:01:29