
Category: Docker

Private DNS for Native Windows Docker Container

Docker Windows containers have a number of shortcomings, particularly around networking. One showstopper is that a Windows container does not use the DNS settings of its host server. The expected behaviour for (Linux) Docker containers is that the Docker engine creates a virtual DNS for its containers. This Docker DNS resolves containers by name (for Docker Swarm / Docker Compose) or delegates to the host's DNS configuration, and there are options to override this behaviour if necessary.

Native Windows containers don’t do this. Docker for Windows will resolve container names from the Swarm and will then use the default external DNS (Google DNS on 8.8.8.8) to resolve external addresses. It will not use the host machine’s DNS settings, nor can the behaviour be overridden with the --dns flag. This is a serious problem if your container depends on services within a private / corporate network.
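To illustrate, even passing a private DNS server explicitly at run time has no effect; the server address and hostname below are placeholders:

# On a Linux host this would force lookups through 10.0.0.53;
# in a native Windows container the flag is silently ignored
docker run --rm --dns 10.0.0.53 microsoft/windowsservercore nslookup intranet.example.com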

This appears to be an issue with the Docker Windows images (nanoserver / windowsservercore) rather than with the engine. Microsoft might get round to fixing it but, given its half-hearted support for Docker, it might not.

Deploying to Google Kubernetes Engine

Previously we looked at building a Spring Cloud Data Flow on Kubernetes. As a follow-up, we’re now looking at deploying to Google Kubernetes Engine. The great thing about Kubernetes is that you use exactly the same commands to manage a cluster on your laptop as on a server or cloud compute platform. Google has first-class support for Kubernetes on the Google Kubernetes Engine, so deploying the Primer application was very straightforward.
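For a flavour of how little is involved, spinning up a cluster and pointing kubectl at it comes down to something like this (cluster name and size are illustrative; the gcloud project and zone are assumed to be configured already):

# Create a small GKE cluster
gcloud container clusters create primer-cluster --num-nodes=3
# Fetch credentials so kubectl targets the new cluster
gcloud container clusters get-credentials primer-cluster
# From here, the same kubectl commands used locally just work
kubectl get nodes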

Spring Cloud Data Flow on Kubernetes

Spring Cloud Data Flow is a powerful tool for composing and deploying message-driven data pipelines. It allows us to compose simple Spring Cloud Stream applications into complex processing pipelines. It also takes care of deploying these pipelines into Kubernetes or Cloud Foundry.

It’s powerful but has a lot of moving parts, so even getting a simple pipeline running can be daunting. This article introduces the Primer demo for SCDF and describes how to deploy it into Kubernetes on a local development machine.
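As a taste of what composing a pipeline looks like, here is the stock ticktock example in the Data Flow shell; the Primer pipeline is composed in the same way from its own applications:

dataflow:> stream create --name ticktock --definition "time | log" --deploy

The "time | log" definition wires two Spring Cloud Stream applications together through the message broker, and the --deploy flag has Data Flow deploy each of them into the target platform.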

SSH into a Docker Container

Just sometimes, it’s useful to SSH into a Docker container. While docker exec or docker attach are usually sufficient to run commands in a container, sometimes you specifically need SSH: for example, to connect directly from a remote machine, or when an application needs to run commands on your container. Most Docker images don’t come with the SSHd service installed, so it is not possible to SSH to them. This post demonstrates how to add the SSHd service to an existing image and run it so that you can connect.
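As a sketch of the approach, a Dockerfile along these lines adds SSHd to an Ubuntu-based image (the base image and password are placeholders):

FROM ubuntu:16.04
# Install the SSH daemon and create its runtime directory
RUN apt-get update && apt-get install -y openssh-server && mkdir /var/run/sshd
# Set a throwaway root password and permit root logins over SSH
RUN echo 'root:changeme' | chpasswd && \
    sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
# Run SSHd in the foreground so the container stays up
CMD ["/usr/sbin/sshd", "-D"]

Build it, run it with the SSH port published (docker run -d -p 2222:22 ...) and you can then ssh root@localhost -p 2222 into the running container.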

Activate the Docker Maven plugin when Docker is present

The wonderful docker-maven-plugin from Spotify is a great way to build Docker images from Maven. Bound to the right Maven phases, it gives a one-step build of a project artifact and its Docker image. For example, if you bind the Docker Maven plugin’s build goal to the Maven package phase, it will create your Docker image when you run a standard

mvn clean install

command. That’s neat, but the drawback is that the build will fail entirely if Docker is not available on the build machine. This goes against the Maven ideal of portable builds: we don’t want a build that works on my machine but not on yours.

We can work around this problem by making the Docker build optional, enabling it only when Docker is available.
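One way to do this (a sketch, not necessarily the exact mechanism the article uses) is to move the plugin into a Maven profile that activates only when the Docker socket exists on the build machine, which covers Linux hosts:

<profiles>
  <profile>
    <id>docker</id>
    <activation>
      <!-- Assumption: the Docker socket is present only when Docker is installed (Linux) -->
      <file>
        <exists>/var/run/docker.sock</exists>
      </file>
    </activation>
    <build>
      <plugins>
        <plugin>
          <groupId>com.spotify</groupId>
          <artifactId>docker-maven-plugin</artifactId>
          <configuration>
            <!-- Illustrative image name -->
            <imageName>myorg/${project.artifactId}</imageName>
          </configuration>
          <executions>
            <execution>
              <phase>package</phase>
              <goals>
                <goal>build</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

On a machine without Docker the profile never activates and mvn clean install completes as a plain Maven build.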

Building, tagging and pushing Docker images with Maven

A standard use case for Docker is to build a container to run a pre-built application so that the containerized app can be run on any Docker-enabled host. The application and the container are sometimes developed and built separately: first the application is built, then a container is defined and built to include it. However, it can be better to promote the Docker container to a first-class build artifact. That is, the build process always builds the deployed component and its container at the same time. This saves a manual build step and ensures that the Docker container is always up to date with the latest application build. It also lets us develop and test against the Dockerized application directly: every build results in a new deployable container.

There are a number of ways to do this. This article looks at hooking the Docker tasks into the Maven build process.
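For orientation, these are the manual steps that binding the plugin’s build, tag and push goals into the lifecycle replaces (image names and versions are illustrative):

mvn clean package
docker build -t myorg/spanners-app:1.0.0 .
docker tag myorg/spanners-app:1.0.0 myorg/spanners-app:latest
docker push myorg/spanners-app:1.0.0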

WebSocket push notifications with Node.js

The Node.js website describes it as having “an event-driven, non-blocking I/O model that makes it lightweight and efficient”. Sounds lovely, but what’s it actually for?

Modulus’s excellent blog post, An Absolute Beginner’s Guide to Node.js, provides some rather tasty examples. After covering the trivial examples (Hello world! and simple file I/O), it gets to the meat of what we’re about: an HTTP server. The simple example demonstrates a trivial HTTP server in Node.js in 5 lines of code. Not 5 lines of code compiled to an executable or deployed into an existing web server; 5 lines of code that can be run from a simple command. It then goes on to describe the frameworks and libraries that let you do really useful stuff.
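That trivial server looks like this; it’s the canonical hello-world example, near identical to the one in the Modulus post:

// server.js - a complete HTTP server; run with: node server.js
var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(8080);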

This looks like just the thing for implementing a new feature in the Spanners demo app: push notifications to all logged-in users when a spanner is changed.

Docker Part 4: Composing an Environment Stack

This series of articles on Docker has so far covered a number of examples of creating and running individual Docker containers. We’ve also seen an example of how multiple Docker containers can be linked together using the --link command line flag.

Best practice for containerization suggests that each container does exactly one job. A full environment stack for a complex application may comprise many components (databases, web applications, web services and microservices), each requiring its own container. Setting up the full working environment stack may require several docker run commands, run in the right order, with just the right flags and switches set.

An obvious way to manage this is with a startup script. A neater solution is Docker Compose, which allows a multi-container application to be defined in a single file and started with a single command.
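A minimal docker-compose.yml for a two-container stack might look like this (service and image names are illustrative):

version: '2'
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
  web:
    image: myorg/spanners-web
    ports:
      - "8080:8080"
    depends_on:
      - db

A single docker-compose up then brings up the database and the web application together, in the right order.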