How to set up Rancher for container orchestration in VDC

Architecture overview
Create VDC networks for container orchestration
Create an Ubuntu Linux virtual machine in VDC, and install Docker
Install Docker
Start Rancher server in a Docker container
Quick start: Rancher server with a non-persistent database
Rancher server with persistent storage for the database
Connect to the Rancher server web interface
Turn on access control to Rancher
Activate the Interoute VDC machine driver in Rancher (and edit the API Proxy Whitelist)
Create the first Rancher host
Find out more



This document is a guide to setting up the basic networks and virtual machines in VDC to run a Rancher server, and the basic configuration to make Rancher usable.

Rancher is a very useful tool for managing Docker containers and orchestration systems. When combined with the advanced networking features of VDC, you have a powerful and efficient container deployment platform at your disposal. VDC is also very cost-effective for distributed computing because you only pay for the compute and storage resources consumed, and there are no charges for network setups or for data traffic over networks.

Architecture overview

In this document, the assumption is that you want to progress from a simple use of Docker, running containers on a small number of virtual machine hosts, to a more complex 'orchestrated' deployment of containers, running on a large and possibly elastic set of hosts, where you want to manage host machines in a more systematic and automated way, using an orchestration tool.

The Rancher setup requires a server virtual machine (which could also be configured across multiple virtual machines behind a load balancer, but we won't do that here) and a collection of 'Rancher host' virtual machines which are used to run Docker containers. Rancher uses a container orchestration system to manage the hosts; by default it uses its own implementation, called Cattle, but it also offers the Swarm, Kubernetes and Mesos systems.

You should not normally need to access or manage host machines directly via VDC because Rancher has all of the functions to deploy/delete and manage the VDC virtual machines. (But you may need to use VDC to clean up host machines that do not deploy or delete correctly.)

We are going to work with a default case of host virtual machines being deployed on private networks, with Internet egress enabled so that Docker engines on the hosts can access the public Docker repository (or any other repository that you have available). The private networks can be set up in different VDC zones of your choice, while the 'mesh-networked MPLS' feature of Interoute's backbone network means that these networks can be automatically inter-connected without any configuration, routing or tunneling work to be performed.

The benefit of the VDC architecture is that your Docker containers will run on truly private, high-performance shared networks in multiple VDC zones and you will not be relying on virtually private overlay networks running on top of Internet connections, whose performance will be variable and unpredictable.


The following are assumed:

Create VDC networks for container orchestration

The first step is to create networks to be used by the Rancher server and hosts. The following commands use Cloudmonkey and the VDC API, but creation can also be done with the VDC GUI. For detailed information about creating networks in VDC, see VDC API: How to create a network.

The following example commands use Stockholm as the VDC zone for the Rancher server, with hosts in the Stockholm and Frankfurt zones.

First, create a 'Local with Internet Gateway' network in VDC Stockholm. This provides the Internet access to the Rancher server.

(local) > createLocalNetwork displaytext='rancher-gateway-STO' zonename='Stockholm (ESX)' cidr= gateway=

After the virtual machine is deployed you will need to add port-forwarding rules on this network.

Next, a Private Direct Connect network in VDC Stockholm. You need to have a Direct Connect Group already created for this; see the detailed instructions for that.

(local) > createPrivateDirectConnect zonename='Stockholm (ESX)' gatewayServices=true cidr= gateway= displaytext='dockernet-STO' dcgid=39999

'gatewayServices=true' is required to create a private network with added Internet egress. Replace the value of 'dcgid' above with the id for your own DCG.


The choice of CIDR will depend on what private networks you have already created; CIDRs should be unique for each Private Direct Connect network in your VDC account. The gateway IP address must be '.254' for the (Internet) gateway services to work.

The assumption in the template configurations for Rancher hosts is that you will use private networks with CIDRs in the '192.168' class. If you use the '10' class CIDRs, you must not use '10.42' and '10.43' as these are used by Rancher and Kubernetes. Also, the template sets up routing using a '/16' rule, so you will need to use CIDRs in a single range '10.XX' for all of the 'dockernet' networks used by Rancher.
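The CIDR constraints above can be checked before creating any networks. The following is a minimal shell sketch; the `check_cidr` and `gateway_for` helpers are illustrative only, not part of VDC or CloudMonkey:

```shell
#!/bin/sh
# Sanity-check a proposed CIDR against the constraints described above:
#  - 10.42.x.x and 10.43.x.x are reserved by Rancher and Kubernetes
#  - the gateway must be the .254 address for Internet egress to work
check_cidr() {
  case "$1" in
    10.42.*|10.43.*)
      echo "REJECT: $1 collides with Rancher/Kubernetes internal ranges"
      return 1 ;;
    *)
      echo "OK: $1"
      return 0 ;;
  esac
}

# Derive the required gateway (the .254 host) from a /24 CIDR such as 192.168.1.0/24
gateway_for() {
  prefix=$(echo "$1" | cut -d. -f1-3)
  echo "$prefix.254"
}

check_cidr "192.168.1.0/24"          # prints: OK: 192.168.1.0/24
check_cidr "10.42.0.0/24" || true    # prints: REJECT: ...
gateway_for "192.168.1.0/24"         # prints: 192.168.1.254
```

The same checks can be applied to each 'dockernet' CIDR you plan to use, before running the createPrivateDirectConnect commands.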


Finally, create a second Private Direct Connect network in VDC Frankfurt:

(local) > createPrivateDirectConnect zonename='Frankfurt (ESX)' gatewayServices=true cidr= gateway= displaytext='dockernet-FRA' dcgid=39999

Note that, by using the same DCG for these two private networks, they will have full inter-connection between the VDC zones (using the Interoute high-performance backbone network, with a standard throughput of 3 Gbps). No further configuration is required for this.

Create an Ubuntu Linux virtual machine in VDC, and install Docker

The Rancher server can run on many types of Linux virtual machine. The choice is not crucial because the server actually runs inside a Docker container; Ubuntu and CentOS are common choices. The following example uses Ubuntu 16.04.

Since Rancher is a web-based application exposed to the Internet, running inside a Docker container, it is important that operating system and Docker updates (security updates at the very least) are regularly applied to the virtual machine.

This command displays the UUIDs of the newly-created networks which are needed for the virtual machine deployment:

(local) > list networks filter=id,displaytext,zonename
|           displaytext            |                  id                  |      zonename     |
|       rancher-gateway-STO        | b52c8831-c997-4a68-b06d-3bbbbf9dfd6a |  Stockholm (ESX)  |
|          dockernet-STO           | 87311b3c-a055-48c5-84fb-b4196552084a |  Stockholm (ESX)  |

One more piece of network information is needed: the UUID of the public IP address.

(local) > listPublicIpAddresses filter=id,ipaddress,zonename,associatednetworkid
|   ipaddress    |         associatednetworkid          |                  id                  |      zonename     |
|  213.XX.XX.85  | b52c8831-c997-4a68-b06d-3bbbbf9dfd6a | fc012a1f-94fb-4e2a-8c34-96320df19741 |  Stockholm (ESX)  |

Check the 'associatednetworkid' to match the UUID for 'rancher-gateway-STO' above. 'ipaddress' shows the public IP address to be used later to connect to the Rancher server.
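If you script these steps, the UUIDs can be pulled out of CloudMonkey's table output rather than copied by hand. A sketch, assuming the table layout shown above (the sample output is embedded here to make the example self-contained; column order may differ in your CloudMonkey version):

```shell
#!/bin/sh
# Sample output mirroring the 'list networks' table shown above.
networks_output='|       rancher-gateway-STO        | b52c8831-c997-4a68-b06d-3bbbbf9dfd6a |  Stockholm (ESX)  |
|          dockernet-STO           | 87311b3c-a055-48c5-84fb-b4196552084a |  Stockholm (ESX)  |'

# Look up the network id for a given display text.
network_id() {
  # Fields are |-separated, so $2 is the display text and $3 the id;
  # trim the padding spaces before comparing.
  echo "$networks_output" | awk -F'|' -v name="$1" '
    { gsub(/^ +| +$/, "", $2); gsub(/^ +| +$/, "", $3) }
    $2 == name { print $3 }'
}

network_id "rancher-gateway-STO"   # prints: b52c8831-c997-4a68-b06d-3bbbbf9dfd6a
network_id "dockernet-STO"         # prints: 87311b3c-a055-48c5-84fb-b4196552084a
```

In a live session you would pipe the real `list networks filter=id,displaytext,zonename` output into the same awk filter.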

The following command creates an Ubuntu 16.04 virtual machine in VDC Stockholm, with 2 CPUs and 4 GB of RAM. The template used is 'ubuntu16-ranchernode' which contains some special setup for the network routing. If you use a different template then manual routing configuration may be required:

(local) > deploy virtualmachine networkids=b52c8831-c997-4a68-b06d-3bbbbf9dfd6a,87311b3c-a055-48c5-84fb-b4196552084a displayname=rancher-server-STO name=rancher-server-STO zoneid=e564f8cf-efda-4119-b404-b6d00cf434b3 templateid=4354ce79-3977-4245-bea0-fa619af54101 serviceofferingid=804d5c83-5019-4a6f-8341-55eae6e289dc

From the output, make a note of the UUID of the new virtual machine, and the root password (which cannot be found out later). If you want to avoid passwords and use SSH key pair authentication, see VDC API: How to use SSH key pairs.

As a final configuration step, port-forwarding rules must be created for the new VM on the 'rancher-gateway-STO' network: SSH access via port 22, and web access to the Rancher server via port 8080:

(local) > create portforwardingrule protocol=tcp virtualmachineid=UUID ipaddressid=fc012a1f-94fb-4e2a-8c34-96320df19741 privateport=22 publicport=22 openfirewall=true
(local) > create portforwardingrule protocol=tcp virtualmachineid=UUID ipaddressid=fc012a1f-94fb-4e2a-8c34-96320df19741 privateport=8080 publicport=8080 openfirewall=true
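If you prefer to generate the two rules from a loop, the commands can be built and printed for review before pasting them into a CloudMonkey session. VM_ID below is a placeholder for the UUID of 'rancher-server-STO':

```shell
#!/bin/sh
# Print the port-forwarding commands (SSH on 22, Rancher UI on 8080) for review.
VM_ID="UUID"
IP_ID="fc012a1f-94fb-4e2a-8c34-96320df19741"

pf_rule() {
  # $1 = port number; forwards the public port to the same private port
  echo "create portforwardingrule protocol=tcp virtualmachineid=$VM_ID ipaddressid=$IP_ID privateport=$1 publicport=$1 openfirewall=true"
}

for port in 22 8080; do
  pf_rule "$port"
done
```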

Install Docker

Connect to the Rancher server VM 'rancher-server-STO' with SSH, using the public IP address (it is xxx'd out here for privacy reasons):

$ ssh ubuntu@213.XX.XX.85

Install the 'curl' program:

ubuntu@rancher-server-STO:~$ sudo apt-get install curl

And install Docker:

ubuntu@rancher-server-STO:~$ curl https://get.docker.com | sh

The install process takes a while; check the output messages for any errors.

The following command is useful to avoid needing to use 'sudo' to run Docker commands:

ubuntu@rancher-server-STO:~$ sudo usermod -aG docker ubuntu

You have to log out and back in for this to take effect.
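Whether the new group membership is active in the current session can be checked as follows. The `in_docker_group` helper is illustrative, not part of Docker; `newgrp docker` is a standard alternative to logging out and back in:

```shell
#!/bin/sh
# Check whether the docker group is active in the current session
# (group changes only take effect after logging out and back in).
in_docker_group() {
  # $1 = a space-separated groups list, as printed by `id -nG`
  echo "$1" | tr ' ' '\n' | grep -qx docker
}

if in_docker_group "$(id -nG)"; then
  echo "docker group active; no sudo needed"
else
  echo "docker group not active yet: log out and back in, or run 'newgrp docker'"
fi
```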

If you want to check that Docker is working correctly, try:

ubuntu@rancher-server-STO:~$ docker run hello-world

Note: the Rancher server will run with the latest Docker version, and host VMs can also run this version, unless you want to use Kubernetes orchestration, which requires Docker version 1.12.x.

Start Rancher server in a Docker container

Quick start: Rancher server with a non-persistent database

The following single command is useful to test that Rancher will work; however, the database content will only exist while the container exists, so this setup is only suitable for testing purposes.

By default, Docker will pull the image 'rancher/server:latest', which is not guaranteed to be stable. Therefore start the Rancher server with the 'stable' version like this:

ubuntu@rancher-server-STO:~$ docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:stable

And you can check that Rancher is running:

ubuntu@rancher-server-STO:~$ docker ps
CONTAINER ID        IMAGE                   COMMAND                  CREATED             STATUS              PORTS                              NAMES
1d735e040da7        rancher/server:stable   "/usr/bin/entry /u..."   35 seconds ago      Up 33 seconds       3306/tcp, 0.0.0.0:8080->8080/tcp   cranky_mayer

Rancher server with persistent storage for the database

Rancher uses a MySQL database to store all of its configuration information. A simple way to create a persistent database which is independent of the Rancher server container is to use a Docker volume as follows.

Create a new Docker volume:

ubuntu@rancher-server-STO:~$ docker volume create mysql_vol

And direct the Rancher server to use this volume as the location for the database:

ubuntu@rancher-server-STO:~$ docker run -d -v mysql_vol:/var/lib/mysql --restart=unless-stopped -p 8080:8080 rancher/server:stable

You can create a variety of resilient or HA (high availability) database architectures for Rancher. See the Rancher documentation for details.

Connect to the Rancher server web interface

From this point onwards, you should be able to do everything via the Rancher user interface. Start a web browser and enter this URL, using the public IP address of 'rancher-server-STO':

http://213.XX.XX.85:8080

You should see the welcome screen of Rancher:
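The server can take a minute or two to initialise after the container starts, so the page may not load immediately. A small polling sketch (the `wait_for_rancher` helper is illustrative; substitute your own public IP address in the URL):

```shell
#!/bin/sh
# Poll a URL until it answers, with a 5-second pause between attempts.
wait_for_rancher() {
  url="$1"; tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -sf -o /dev/null "$url"; then
      echo "up"; return 0
    fi
    i=$((i + 1))
    sleep 5
  done
  echo "timed out"; return 1
}

# Example (hypothetical address):
# wait_for_rancher http://213.XX.XX.85:8080
```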

Turn on access control to Rancher

The Rancher server starts with no access control, so you should turn it on immediately.

For the simplest access control, create a login name ('admin' for example) with a secure password. Other options are authentication via Github accounts or another compatible authentication service.

Click Admin in the top menu and select Access Control. Click the Local option in the top row of icons:

Click the Enable LocalAuth button to turn on the access control.

Activate the Interoute VDC machine driver in Rancher (and edit the API Proxy Whitelist)

The 'machine driver' for Interoute VDC is already included in Rancher; however, it needs to be activated.

Click Admin in the top menu and select Machine Drivers. Click the Activate button for 'Interoutevdc':

You also need to edit the setting 'api.proxy.whitelist'.

Click Admin in the top menu and select Settings. In the 'Advanced Settings' section, click the text I understand that I can break things by changing advanced settings. In the list of settings find 'api.proxy.whitelist' and click the yellow Edit (pencil) button:

Append a comma followed by the hostname of the VDC API endpoint, and click Save.

Create the first Rancher host

Host virtual machines can be deployed into VDC from the Rancher interface, which (after the above activation) has a script to send commands to the VDC API.

Click Infrastructure in the top menu and select Hosts. Click Add Host.

For the first time of adding a host, you need to set the 'Host Registration URL'. We want the hosts and server to communicate via the private networks, so click the Something else option and enter a URL which uses the private network IP address of 'rancher-server-STO', of the form http://<private-IP>:8080.

Click Save.

Now you will see the 'Add Hosts' screen. Click InterouteVDC in the row of icons at the top to select Interoute VDC as the cloud platform for the new host.

Insert the API Key and Secret Key for your VDC account, and select the VDC Region for the deployment:

Click Authenticate.

Next, select the Availability Zone. In the example we have got host networks in Stockholm and Frankfurt, so these would be the possible choices. For any other VDC zone that you want to use, you need to do the network creation as shown above.

Click Continue.

Select the Network, which needs to be the private network for the selected Availability Zone, 'dockernet-STO' for the example:

Click Continue.

At the next step, select the Template Type Public, and click Continue.

Select the Template to be used from the list. (Only a few are currently available; the list will be expanded.)

Click Continue.

Set the Service Offering for the virtual machine: the number of CPUs and the amount of RAM:

Click Continue.

(Optional) Select a Disk Offering. This is an optional Data Disk for the virtual machine which can be used to create persistent storage volumes for Docker to use. Click Continue.

Under the Instance heading, type in the Name for the new host(s). Optional: use the Quantity slider to select how many hosts to deploy of the selected configuration (note: it is possible to quickly 'clone' deployed hosts later if more hosts are required in a VDC zone).

(Optional) At this point, you can modify the Advanced Options, which determine the version of Docker to be installed in the host virtual machine, and various settings for the Docker engine.

When ready, click the Create button to begin the deployment of the virtual machine(s).

The Hosts screen in Rancher will show the current status of each host. Here is how it will look initially:

After several minutes, the deployment should complete and if all is well the Hosts screen will display:

Rancher will start several 'system stacks' in each host for managing the host and the network connections. Untick Show System to hide those containers from view.

If there are errors in creating the hosts, they will be reported in red status messages. Note that it is possible for a virtual machine to be deployed in VDC without Rancher detecting it. In case of errors, you should always check in VDC whether an 'orphan' virtual machine has been deployed, and delete it before continuing in Rancher. Also check for an SSH keypair named after the failed host, and delete it.
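This cleanup can be prepared as CloudMonkey commands. The sketch below only prints the commands for review before you run anything destructive; it assumes the standard CloudStack-style 'destroy virtualmachine' and 'deleteSSHKeyPair' calls, and the hostname and VM UUID are placeholders:

```shell
#!/bin/sh
# Print the CloudMonkey commands needed to clean up an orphaned host:
# the virtual machine itself and the SSH keypair named after the failed host.
cleanup_cmds() {
  host="$1"; vm_id="$2"
  echo "destroy virtualmachine id=$vm_id expunge=true"
  echo "deleteSSHKeyPair name=$host"
}

cleanup_cmds "rancher-host-1" "VM-UUID"
```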

Find out more