
Wednesday, March 8, 2017

Cloud Container Builder

In my previous article I mentioned that for container images in production, you might want a two-step build process to get a smaller image size and faster startup times. It’s also beneficial to host the image within the same provider, to avoid depending on third parties when pulling the image layers.
In this article, I will give an introduction to Google Cloud Container Builder, which satisfies both of these requirements.

Google Cloud Container Builder

To access the service from your terminal using gcloud (which is part of the Cloud SDK), you first need to enable the API. Find the Google Cloud Container Builder API in the API Manager and click Enable. Also, don’t forget to authenticate with gcloud auth login.

Specifying the build steps

There are multiple ways to specify what to build; in this example we use a YAML file in our project. See the example cloudbuild.yaml that I’ve created for the apn repository.
The example configuration has only two steps, both of which run pre-built images provided by Google. The first step builds our binary using Go, while the second runs Docker to build our final image. Our Dockerfile uses the COPY instruction to add the binary created in the first step.
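To make the idea concrete, here is a sketch of what such a two-step configuration could look like. This is illustrative rather than the exact contents of the linked repository; the builder image names are the ones Google provides, but the binary name and image path are assumptions:

```yaml
# cloudbuild.yaml (sketch): build the binary, then package it
steps:
# Step 1: compile a static Go binary with Google's Go builder image
- name: 'gcr.io/cloud-builders/go'
  args: ['build', '-o', 'main', '.']
  env: ['CGO_ENABLED=0']
# Step 2: run Docker to build the final, minimal image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/apn', '.']
# Push the resulting image to the project's container registry
images: ['gcr.io/$PROJECT_ID/apn']
```

The matching Dockerfile then only needs to copy in the binary:

```
FROM scratch
COPY main /main
ENTRYPOINT ["/main"]
```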
If you have cloned the example repository, it’s possible to manually submit this build configuration by running:
gcloud container builds submit . --config cloudbuild.yaml
This will start the build process as soon as possible and tag the image in your container registry.


Build triggers

Having a way to submit a cloudbuild configuration and see its progress live in the terminal is great, but you can also define triggers that will automatically run the build process when there are new commits.
Triggers require additional authentication and can be configured under Build triggers in the console.


Last words

Being able to provide any image to run as a build step gives you a lot of flexibility to create your own type of pipelines. You could potentially run tests and automatically deploy upon success, essentially creating your own lightweight CI/CD system.
After switching to the two-step build process, the image for the example project shrank from 347 MB to 6 MB.
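Sketching that idea in the same configuration format, a test step could run before the image is built, assuming the same builder images as above; if any step fails, the build stops and nothing is tagged:

```yaml
steps:
# Run the tests first; a failure aborts the build here
- name: 'gcr.io/cloud-builders/go'
  args: ['test', './...']
# Then build the binary and the final image as before
- name: 'gcr.io/cloud-builders/go'
  args: ['build', '-o', 'main', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/apn', '.']
images: ['gcr.io/$PROJECT_ID/apn']
```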

Saturday, December 17, 2016

The Battle Of The Clouds

They come in many flavors, all of which claim to be a “cloud” provider. But what is the cloud, really? Is it hosting everything on a Linux server like yesterday, only virtual? What is a virtual server? What benefit do I get from putting the word virtual in front? That depends.

Virtual Private Server, VPS

This is the easiest to grasp when coming from hosting and managing your own server. It has a configured hard drive, some amount of RAM and a CPU. You usually manage it the same way you would a traditional dedicated server; the difference is that the cloud provider can squeeze more customers into the same hardware. This makes it possible to get a cheap VPS with the same capabilities as a dedicated server.
The features beyond that really depend on the provider. Some might let you add and delete virtual hard drives, or change the amount of RAM and CPU allocated within the actual hardware it runs on. Some use virtualization as a way to hide the fact that you’re not really getting all that CPU, but sharing it with other customers behind the scenes.

Infrastructure as a service, IaaS

Now it gets a little more interesting. Here we’re not only virtualizing the server, we’re also creating virtual networks and interfaces, adding metadata services for introspection about ourselves and the environment we run in, and creating groups of multiple servers (now called instances) that often collaborate and scale together dynamically, often with a load balancer in front. All of this is accompanied by APIs that can be used to control it, making for a great opportunity to automate. But similar to using a single VPS, we’re still focusing on building a network of computers, the infrastructure, not on what will run on it.
A great example of IaaS is Google Compute Engine.
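To get a feel for the automation side, here is a hedged sketch of how such an API can be driven from the terminal with gcloud. The resource names and zone here are made up, and the exact flags can differ between SDK versions:

```
# Create a template describing the machine shape of our instances
gcloud compute instance-templates create web-template \
    --machine-type n1-standard-1

# Create a managed group of three identical instances from it
gcloud compute instance-groups managed create web-group \
    --template web-template --size 3 --zone us-central1-b

# Scaling up is a single call; new instances join automatically
gcloud compute instance-groups managed resize web-group \
    --size 5 --zone us-central1-b
```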

Containers as a service, CaaS

As a fairly new concept, CaaS falls somewhere in between IaaS and PaaS (see below). Here we’re starting to abstract away the details of the infrastructure. We’re now thinking more in terms of services and application configuration, neatly packaged into containers. We think less about any particular server instance that might currently be running them; instances (sometimes whole groups of them) come and go at any given moment, while services stay.
See Google Container Engine for more information about automated container management.

Platform as a service, PaaS

Here we have almost stopped caring about what runs our service, and we can spend all our attention on the core business logic we’re trying to develop. All the details of distributed logging, HTTPS load balancing, auto scaling depending on load and so on are already solved for us.
This level of abstraction is great for getting a scalable product out the door quickly, without having to worry about the operational tasks. It usually has a steeper price tag than the lower levels explained above, but we also have to take into account all the time that is saved. It’s not uncommon to have to put at least 50% of the time into operations tasks when managing the full stack by yourself.
You can try App Engine for this type of platform.
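As an illustration of how thin the configuration can be at this level, a minimal App Engine app.yaml for a Go service looks roughly like this (a sketch; the exact fields depend on the runtime you pick):

```yaml
runtime: go
api_version: go1

handlers:
# Route every request to the application; scaling, load balancing
# and logging are handled by the platform
- url: /.*
  script: _go_app
```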

Software as a service, SaaS

This includes fully managed products such as monitoring and debugging dashboards, authentication solutions, invoicing services and analytics. We can benefit from these services instead of having to develop all of it from scratch or host it ourselves, and keep to our core business domain instead. It’s the ultimate time saver, but it also brings the highest cost in the form of usage fees and subscriptions.
See for example Stackdriver on how to outsource the monitoring of your services.

Conclusion

There are many levels of abstraction to choose from, and many more abbreviations than explained here. What a cloud provider offers can vary greatly, as can the implementation details behind it. In general, when selecting a solution to build on, take into account what stage the development is in, the complexity of the problem and the experience of the developers. Try to focus on the problem at hand and benefit from existing solutions as much as possible.
As an example, I like to start by building on App Engine, moving parts to Container Engine when more flexibility is required, and finally leveraging the ultimate flexibility of Compute Engine for any remaining requirements or optimization tasks.

I believe a common misconception when moving to the cloud is trying to solve the same problems of yesterday in the new environment, instead of making the service cloud aware. Take a traditional WordPress installation as an example: it’s easy to start thinking that you need to share the disk volume between many instances, or that PHP sessions require a sticky route on the load balancer. That type of setup is not built with a dynamic cloud or a distributed service in mind.

The state does not necessarily have to be shared directly through the disk volume. We could use a separate database (possibly managed by the provider) for saving blog entries, object storage for uploaded media, and memcache for sessions. Now we’re only left with the rendering of the pages, the “WordPress API”, which is much easier to scale and does not rely on its local disk for sharing state, nor on the complexity of synchronizing that volume between multiple instances. We could take it even further and move the sessions and rendering up to the client, only fetching the actual content from an exposed API. But that’s a different blog post.
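To make the stateless idea concrete, here is a small Python sketch of my own, not WordPress code. The MemcacheStub class is a made-up stand-in for a real shared cache such as memcached; the point is that the request handler keeps no state of its own, so any instance behind the load balancer can serve any request:

```python
# Sketch: externalizing state so web instances stay interchangeable.

class MemcacheStub:
    """In-memory stand-in for a shared cache such as memcached."""
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)


def handle_request(session_store, content_store, session_id, post_id):
    """A stateless handler: all state lives in external services."""
    user = session_store.get(session_id) or "anonymous"
    body = content_store.get(post_id) or "not found"
    return f"<h1>{body}</h1><p>viewed by {user}</p>"


# Two "instances" sharing the same external stores behave identically,
# so the load balancer is free to route a request to either one.
sessions = MemcacheStub()
posts = MemcacheStub()
sessions.set("s1", "alice")
posts.set("p1", "Hello, cloud")

page_a = handle_request(sessions, posts, "s1", "p1")  # instance A
page_b = handle_request(sessions, posts, "s1", "p1")  # instance B
assert page_a == page_b
```

In a real deployment the stub would be replaced by a memcache or database client, but the handler itself would not change, which is exactly what makes it easy to scale.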

You can try all of these different solutions for free with the USD $300 trial at Google Cloud. Happy hacking!