Wednesday, March 8, 2017

Cloud Container Builder

In my previous article I mentioned that for container images in production, you might want to use a 2-step build process to get a smaller image size and improved startup times. It's also beneficial to host the image with the same provider to avoid depending on third parties when pulling the image layers.
In this article, I will give an introduction to Google Cloud Container Builder, which satisfies these requirements.

Google Cloud Container Builder

To access the service from your terminal using gcloud (which is part of the Cloud SDK), you first need to enable the API: find the Google Cloud Container Builder API in the API Manager and click Enable. Also, don't forget to authenticate with gcloud auth login.

Specifying the build steps

The build specification can be written in multiple ways; in this example we use a YAML file in the project. See the example cloudbuild.yaml that I've created for the apn repository.
The example configuration has only two steps, both of which run pre-built images provided by Google. The first step builds our binary using Go, while the second runs Docker to build our final image. Our Dockerfile uses the COPY instruction to add the binary created in the first step.
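While the real file lives in the repository, a minimal configuration along these lines might look like this (the builder image arguments and flags are illustrative, not copied from the repo):

steps:
# Build a statically linked binary using Google's pre-built Go builder.
- name: gcr.io/cloud-builders/go
  args: ['build', '-o', 'apn', '.']
  env: ['CGO_ENABLED=0']
# Build the final image; the Dockerfile COPYs the binary from the previous step.
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/apn', '.']
# Push the resulting image to your Container Registry.
images: ['gcr.io/$PROJECT_ID/apn']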
If you have cloned the example repository, you can submit this build configuration manually by running:
gcloud container builds submit . --config cloudbuild.yaml
This will start the build process as soon as possible and tag the resulting image in your Container Registry.


Build triggers

Having a way to submit a cloudbuild configuration and see its progress live in the terminal is great, but you can also define triggers that automatically run the build process when there are new commits.
Triggers require additional authentication and can be configured under Build triggers in the console.


Last words

Being able to run any image as a build step gives you a lot of flexibility to create your own kind of pipeline. You could potentially run tests and automatically deploy on success, essentially creating your own lightweight CI/CD system.
After switching to the 2-step build process, the image for the example project shrank from 347MB to 6MB.

Tuesday, February 21, 2017

Apple Push Notifications with Kubernetes and Pub/Sub

This is how I implemented server-side Apple Push Notifications by writing a service in Go and deploying it on Container Engine (Kubernetes) with Cloud Pub/Sub as the worker queue.
The architecture can be used to scale production workloads for sending notifications, but it also serves as a guide for distributing similar workloads in general.

Apple Push Notifications

Apple provides a modern and simple API for sending notifications to registered device tokens. How to generate these tokens (among other things) can be found in the Remote Notification Programming Guide.
To authenticate with Apple, you need to choose between token-based and certificate-based connection trust. Both methods require your client to support HTTP/2. Each notification is then delivered as a JSON payload to the APNs endpoint.
In this article, we use certificate-based trust. Luckily for us, HTTP/2 is already supported by the Go standard library.
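As a minimal sketch of what this looks like in Go (the file paths and angle-bracket values are placeholders, and the real apn service is structured differently):

package main

import (
	"bytes"
	"crypto/tls"
	"fmt"
	"net/http"

	"golang.org/x/net/http2"
)

func main() {
	// Load the Apple developer certificate for connection trust (paths are placeholders).
	cert, err := tls.LoadX509KeyPair("cert.pem", "key.pem")
	if err != nil {
		panic(err)
	}
	tr := &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}},
	}
	// A custom Transport doesn't negotiate HTTP/2 automatically; opt in explicitly.
	if err := http2.ConfigureTransport(tr); err != nil {
		panic(err)
	}
	client := &http.Client{Transport: tr}

	// Each notification is a JSON payload POSTed to the device token's endpoint.
	payload := bytes.NewBufferString(`{"aps":{"alert":"Hello"}}`)
	req, err := http.NewRequest("POST", "https://api.push.apple.com/3/device/<device-token>", payload)
	if err != nil {
		panic(err)
	}
	req.Header.Set("apns-topic", "<your-apple-bundle-id>")

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}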

Google Cloud Pub/Sub

Cloud Pub/Sub is a fully managed service for scalable real-time messaging. It allows you to decouple different systems by putting a message queue in between. The Pub/Sub service guarantees at-least-once delivery of published messages to one or more subscribers.
We can then have our service subscribe to a topic that push notification messages are published to. Messages don't necessarily have to be published from servers on Google's network; as long as the publisher has credentials, it can publish from anywhere on the internet.
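As a sketch, publishing with the cloud.google.com/go/pubsub client could look roughly like this (the attribute carrying the device token is an assumption, not the service's actual message format):

package main

import (
	"context"
	"log"

	"cloud.google.com/go/pubsub"
)

func main() {
	ctx := context.Background()
	// Credentials are picked up from the environment (e.g. a service account key).
	client, err := pubsub.NewClient(ctx, "<your-gcp-project>")
	if err != nil {
		log.Fatal(err)
	}
	topic := client.Topic("notifications")
	// Publish the notification payload; the attribute layout is hypothetical.
	res := topic.Publish(ctx, &pubsub.Message{
		Data:       []byte(`{"aps":{"alert":"Hello"}}`),
		Attributes: map[string]string{"token": "<device-token>"},
	})
	// Get blocks until the server acknowledges the message.
	id, err := res.Get(ctx)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("published message %s", id)
}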

Deploying with Google Container Engine a.k.a. Kubernetes

Go is a great language for creating small (and large) services and allows us to create highly efficient subscribers that consume messages from the Pub/Sub topic. This is also the language that Kubernetes and Docker are developed in.
See the full source on GitHub for this subscriber deployment. I've set up a build process for the repository through Docker Hub, so you don't have to build the container yourself; it's publicly available as joonix/apn.
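Conceptually, the consuming side boils down to something like this (simplified; sendNotification is a hypothetical stand-in for the HTTP/2 call to Apple, and the repository has the real implementation):

package main

import (
	"context"
	"log"

	"cloud.google.com/go/pubsub"
)

// sendNotification is a hypothetical helper that forwards the payload to Apple.
func sendNotification(msg *pubsub.Message) error { return nil }

func main() {
	ctx := context.Background()
	client, err := pubsub.NewClient(ctx, "<your-gcp-project>")
	if err != nil {
		log.Fatal(err)
	}
	// Receive pulls messages concurrently and invokes the callback for each one.
	err = client.Subscription("notifications").Receive(ctx, func(ctx context.Context, msg *pubsub.Message) {
		if err := sendNotification(msg); err != nil {
			msg.Nack() // leave the message for redelivery
			return
		}
		msg.Ack()
	})
	if err != nil {
		log.Fatal(err)
	}
}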
If you don’t already have a kubernetes cluster, you may create one using the gcloud command.
gcloud cluster create yourclustername --scope=cloud-platform
It is important that the cluster is allowed access to the correct scopes when creating it or it won’t be able to access the Pub/Sub service. You may also configure the access keys manually, but that is outside of the scope of this article (pun intended).
The next step is to provide our Apple developer certificate. This is required to authenticate with the push notification API when sending notifications. We do this by storing the contents in what are called secret objects in the Kubernetes environment. This way, we won't risk checking the sensitive information into our git repository or other means of file sharing. How to retrieve these certificates in the first place is part of the documentation provided by Apple.
kubectl create secret tls apple-developer --cert cert.pem --key key.pem
The remaining configuration is less sensitive and can be set by using a config map. We need to set the Google Cloud project name for Pub/Sub as well as the bundle ID associated with our developer certificate.
kubectl create configmap apn --from-literal=project=<your-gcp-project> --from-literal=bundle=<your-apple-bundle-id>
Finally, we can deploy our service. I've created an example deployment that can be used with kubectl. It will read the above configuration and certificates and provide them as parameters and files for our apn service. No additional modifications should be required; simply let kubectl load the deployment into the cluster:
kubectl create -f deployment.yaml
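For reference, the relevant parts of such a deployment look roughly like this (the environment variable names and mount path are assumptions; the deployment.yaml in the repository is authoritative):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: apn
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: apn
    spec:
      containers:
      - name: apn
        image: joonix/apn
        env:
        # Values come from the config map created earlier.
        - name: PROJECT
          valueFrom:
            configMapKeyRef:
              name: apn
              key: project
        - name: BUNDLE
          valueFrom:
            configMapKeyRef:
              name: apn
              key: bundle
        volumeMounts:
        # The TLS secret is mounted as cert/key files for the service.
        - name: apple-developer
          mountPath: /etc/apple-developer
          readOnly: true
      volumes:
      - name: apple-developer
        secret:
          secretName: apple-developer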

Testing it out

To test that your service is able to consume messages from the Pub/Sub topic and send them as notifications through Apple, you can use the integration test example.
As the test needs access to the real thing, you will need to authenticate.
gcloud auth application-default login
This makes API credentials available in a way similar to when running on Container Engine; no additional credentials need to be provided to the application.
Run the test with your device token and project information:
go test -tags integration -token="<token retrieved from your app>" -project="<your-gcp-project>" -topic="notifications" -run TestPubsubNotificationIntegration
A message will be published on the specified topic which can hopefully be consumed by the apn service. If everything goes well, you should see the notification on the target device.

Last words

For Docker Hub compatibility reasons, I've based the container on the golang Docker image. As this image includes a full Go installation and compilation dependencies, it becomes much larger than necessary.
In a production environment where you might care about image size and startup times, you could instead use a 2-step build process: the first step compiles the binary, and the second builds a much smaller container that contains only this binary. The Dockerfile in the second step could use FROM scratch or the popular FROM alpine. Alpine provides other useful packages that might be necessary, such as root certificates for TLS, without adding as much overhead as other popular distributions would.
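A second-step Dockerfile along those lines could be as small as this (the binary name is illustrative):

# Alpine adds root certificates on top of an otherwise minimal base.
FROM alpine
RUN apk add --no-cache ca-certificates
# Copy in the statically linked binary produced by the first step.
COPY apn /apn
ENTRYPOINT ["/apn"]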

Monday, January 23, 2017

Client Side Application

It is safe to assume that there are millions if not billions of servers out there spending precious CPU cycles rendering web pages, only to be consumed by clients capable of far more than parsing a pre-rendered HTML document, maybe even more capable than the backend instance that rendered it.
Well, what's the alternative, you say?

Single Page Application, SPA

A SPA typically consists of a single HTML document that loads JavaScript to do the heavy lifting of presenting structured data. This is not a new concept; what is new are the many frameworks and mature tools that make it convenient and practical to implement.
Modern JavaScript deployments are usually packed into a single file using Node.js for compilation. This happens before any clients are served, meaning no additional compilation or rendering is required of the backend server. It also opens up the possibility of transpiling modern ECMAScript syntax into code that is still compatible with older browsers.
A great example is the default React environment. It allows the developer to write JavaScript based on ECMAScript 6, as well as an optional syntactic sugar called JSX. This is all compiled into one file using Webpack.
The component-based way of developing in React is attractive and makes it easy to separate concerns. There's also no difference between creating a self-contained application and an application that consumes external APIs for serving the result. Thus it's scalable in a way that makes it possible to start out without a dynamic backend and add one further down the road if needed.


Static Site Generator

An approach different from letting JavaScript route the content is a static site generator. It generates an actual HTML file for each page, just like in the old days. A templating language is used to specify metadata and logic for how these files should be generated and populated.
An example in this area is Hugo. The documents are generated before any client requests are served, much the same way as the JavaScript bundle example above. But that's where the similarities stop. In Hugo, you write your content in Markdown with additional metadata in front matter, plus templating using Go's html/template, as sketched below.
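A post source file might look like this (the front matter keys are illustrative):

---
title: "Hello World"
date: 2017-01-23
---

Regular **Markdown** content goes here and is rendered through the site's Go templates.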
This approach makes it possible to skip JavaScript entirely for security or simplicity reasons, yet there's nothing stopping you from including a few JavaScript components, for example a commenting system for your blog posts such as Disqus. Hugo already comes prepared for this and has many examples in its extras documentation.


Benefits

Moving the complexity up to the client makes it easy to have a very scalable and secure backend, while also keeping it cheap and simple.

Security

By pre-compiling content like this, there is no runtime interpreter to inject arbitrary code into, no database to SQL-inject and no dynamic processing to overload with a DoS attack. If you run JavaScript, you still have to guard against XSS attacks, but other than that there is very little left to attack. In comparison, a search for the keyword WordPress yields 152 disclosed vulnerability entries at the time of writing, accounting only for backend vulnerabilities.

Scalability

Since your application is static as far as the web server is concerned, you can easily host it with any hosting provider. For example, Google Cloud Storage allows you to upload your files and associate them with your custom domain. Another example is GitHub Pages, which is completely free for open source projects.
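As a sketch, publishing a generated site to a (domain-verified) Cloud Storage bucket could look roughly like this; the bucket name and paths are placeholders:

gsutil mb gs://www.example.com
gsutil -m rsync -r public gs://www.example.com
gsutil web set -m index.html -e 404.html gs://www.example.com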

Flexibility

As mentioned previously, if you need dynamic functionality you can always mix and match 3rd-party APIs that are called by the client. If you need to develop functionality of your own that relies on content that cannot easily be included client side, you can do so by exposing your own API, which your application then consumes the same way it would any 3rd-party API.
A maintenance-free environment for developing your own API is App Engine. App Engine lets you develop your backend in a sandboxed environment that makes it inherently secure while also ensuring scalability. In the unlikely event of a failure, you can still present clients with relevant information that is part of the client-side script, or fall back to other API endpoints.
Having a flexible architecture allows you to focus on a responsive and cacheable presentation while including business functionality as isolated building blocks, possibly from multiple different endpoints.

Saturday, December 17, 2016

The Battle Of The Clouds

They come in many flavors, all of which claim to be "cloud" providers. But really, what is the cloud? Is it hosting everything on a Linux server like yesterday, but virtual? What is a virtual server? What benefit do I get from putting the word virtual in front? That depends.

Virtual Private Server, VPS

This is the easiest to grasp when coming from hosting and managing your own server. It has a configured hard drive, some amount of RAM and a CPU. You usually manage it the same way as a traditional dedicated server; the difference is that the cloud provider can squeeze more customers into the same hardware. This makes it possible to get a cheap VPS with the same capabilities as a dedicated server.
The features beyond that really depend on the provider. Some might let you add and delete virtual hard drives or change the amount of RAM and CPU allocated within the actual hardware it is running on. Some use virtualization as a way to hide the fact that you're not really getting all that CPU but are sharing it with other customers behind the scenes.

Infrastructure as a service, IaaS

Now it gets a little more interesting. Here we're not only virtualizing the server: we're also building virtual networks and interfaces, adding metadata services for introspection about ourselves and the environment we run in, and creating groups of multiple servers (now called instances) that collaborate and scale together dynamically, often with a load balancer in front. All of this is accompanied by APIs that can control it, making for a great opportunity to automate. But similar to using a single VPS, we're still focused on building a network of computers, the infrastructure, not on what will run on it.
A great example of IaaS is Google Compute Engine.

Containers as a service, CaaS

As a fairly new concept, CaaS falls somewhere between IaaS and PaaS (see below). Here we start to abstract away the details of the infrastructure. We now think more in terms of services and application configuration, neatly packaged into containers, and less about any particular server instance that might currently be running them: instances (sometimes whole groups of them) come and go at any given moment, while services stay.
See Google Container Engine for more information about automated container management.

Platform as a service, PaaS

Here we have almost stopped caring about what runs our service and can spend all our attention on the core business logic we're trying to develop. All the details of distributed logging, HTTPS load balancing, autoscaling based on load and so on are already solved for us.
This level of abstraction is great for getting a scalable product out the door quickly without having to worry about operational tasks. It usually has a steeper price tag than the lower levels explained above, but we also have to take into account all the time that is saved. It's not uncommon to have to put at least 50% of your time into operations tasks when managing the full stack yourself.
You can try App Engine for this type of platform.

Software as a service, SaaS

This includes fully managed products such as monitoring and debugging dashboards, authentication solutions, invoicing services and analytics. We can benefit from these services instead of developing and hosting all of it from scratch ourselves. We can stick to our core business domain instead; it's the ultimate time saver, but it also brings the highest cost in the form of usage fees and subscriptions.
See for example Stackdriver on how to outsource the monitoring of your services.

Conclusion

There are many levels of abstraction to choose from, and many more abbreviations than explained here. What a cloud provider offers can vary greatly, and so can the implementation details behind it. But in general, when selecting what solution to build on, take into account what stage the development is in, the complexity of the problem and the experience of the developers. Try to focus on the problem at hand and benefit from existing solutions as much as possible.
As an example, I like to start by building on App Engine, moving parts to Container Engine when more flexibility is required, and finally leveraging the ultimate flexibility of Compute Engine for any remaining requirements or optimization tasks.

I believe a common misconception when moving to the cloud is trying to solve the same problems of yesterday in the new environment instead of making the application cloud aware. Take a traditional WordPress service as an example: it's easy to start thinking about a need to share the disk volume between many instances, or that PHP sessions require sticky routing on the load balancer. That type of setup is not built with a dynamic cloud or distributed service in mind.

The state does not necessarily have to be shared directly through the disk volume. We could use a separate database (possibly managed by the provider) for saving blog entries, object storage for uploaded media and memcache for sessions. That leaves only the rendering of the pages, the "WordPress API", which is much easier to scale and relies neither on a local disk for sharing state nor on the complexity of synchronizing that volume between multiple instances. We could take it even further and move sessions and rendering up to the client, fetching only the actual content from an exposed API. But that's a different blog post.

You can try all of these different solutions for free with the USD $300 trial at Google Cloud, happy hacking!