A Globally-Distributed Cloud Platform for the New Era of Distributed Computing

A globally-distributed cloud platform for the new era of distributed computing, or "My path to greatness without making it more complicated (and expensive) than it needs to be"
An Interoute White Paper by Matthew Finnie (Interoute CTO)

Introduction
What more can I do?—Expanding the solution space
How far can I go in scale with the cloud?—Will I need to leave one day?
Keep making it easier
Interoute VDC – welcome to a platform engineered for the ambitious
Make it easier… to use Docker containers
One Integrated Digital Platform for any aspiration


Introduction

If you are creating a software product today, you are on the "cusp of greatness", with almost the perfect platform to support your dreams and ideas. The public cloud as we know it offers developers and businesses the ability to match capacity precisely to their aspirations, and to expand without building a global "field of dreams" up front. It means that as you make money, your costs are kept neatly in line.

As if to emphasise the point we recently received the following request from a prospective customer:

"We are looking to launch a specialist web based app that requires a scale-able solution. The web app will have 'spiky' web traffic and because of the type of app, it is very difficult to predict what specifications we need for hosting."

"Classic cloud," you might say. Yes and No. Yes, mostly, but as we discover and become addicted to the simplicity and ease of using this kind of infrastructure to create platforms we get greedy and start thinking:

  • What more can I do with the cloud?

  • How far can I go in scale with the cloud?

  • I want all of the above. Can it be even easier than it already is?

What more can I do?—Expanding the solution space

Unless you are rock solid on your IT consumption and demand, the cloud-buying model is too good to ignore. We are now, rightly, addicted to the flexibility and immediacy of infrastructure that is on demand, can be turned on and off in an instant, and lets us serve global markets at the click of a mouse. You can have an idea and crack on immediately. No waiting for a server to be dragged out of the store, or worse, shipped across continents. But we want to do more.

Consider our typical customer above. It turns out that for many applications the "specialist" characteristics mean the classic public cloud model (which I would also call the "somewhere in the cloud" model) doesn't always work. And this applies to a whole set of enterprise-type applications as well as to the recognised "specialist" sectors, such as financial services (fintech), healthcare, high-performance computing and rich media services.

"Specialist" requirements may be things like needing data to remain within country boundaries, or data not to be on a shared platform, or network access to be natively private, or latencies to be sub 10 milliseconds to allow for higher throughput.

Latency, in truth, is the scourge of many cloud-based applications. It bites whether you are putting a proven and reliable application "in the cloud" to take it to a wider audience, or building a cool new database backend that turns out to shard badly because of the transfer times between your cloud computing zones.
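
To see why, consider a back-of-the-envelope calculation (the round-trip times below are hypothetical, chosen only to illustrate the point): any operation that must be acknowledged synchronously across zones completes at most once per network round trip, so inter-zone latency puts a hard ceiling on per-session throughput.

    # Back-of-envelope sketch (hypothetical RTT values): a synchronously
    # acknowledged operation completes at most once per round trip, so the
    # RTT between zones caps the throughput of each client session.
    for rtt_ms in (100.0, 10.0, 1.0):
        max_ops_per_sec = 1000.0 / rtt_ms  # one operation per round trip
        print(f"RTT {rtt_ms:5.1f} ms -> at most {max_ops_per_sec:6.0f} ops/sec per session")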

So we want to do more and expand into new areas, and that changes the required characteristics of the cloud platform. It is about more than delivering static content (as a CDN does); it requires bringing processing closer to markets and users, or to other elements of the platform. Ideally, the cloud also needs not to be public by default. You need a private-by-default option (such as the one offered by Interoute). Why can't you have private cloud security and control with public cloud elasticity and flexibility?

How far can I go in scale with the cloud?—Will I need to leave one day?

Everyone developing an application should believe they are on a path to greatness that will ultimately scale to a global audience. Success brings scale-predictable loading, and at that point you will start to believe you could do things more cheaply by buying network, data centre capacity or colocation separately. Leaving the comfort of the public cloud could also become necessary if your cloud supplier turns into your competitor in the SaaS space. (Aside: as someone who tried competing and was bought out by a large partner/supplier, my advice is don't believe the cooperation mantra; it is a simple who's-bigger fight.) Of course, you may seek to be bought by said supplier, which can be an admirable end to your development journey.

The other reasons often cited for going beyond the (public) cloud are about control; that is, your need to be able to do more.

The key point about growth and IT infrastructure is that you want to retain flexibility and keep costs under control as you benefit from the success of your application. In simple terms, it is down to the provider to scale with you. At Interoute, for example, we see our job as a provider as keeping you growing for as long as we can.

This necessitates choosing a supplier who has scale and capacity in reserve; that is, buy from someone a lot bigger than you, and preferably not a potential competitor. That journey is a familiar one at Interoute: some of our largest customers started out as cloud customers and ultimately progressed to buying ground services like dark fibre and data centres (entire buildings), signing up for terms of 10+ years.

The key thing I want to say is that there is a journey. Bear in mind that the scale at which it makes sense to own your infrastructure is set by companies with many billions of dollars of revenue, and in one case over a billion subscribers. So there is a lot of room before you have to put the hard hat on and start lighting fibre across thousands of kilometres of European countryside and cities.

Keep making it easier

The desire to expand the breadth of applications that can be migrated to the cloud must not come at the expense of simplicity. The two have to go hand in hand. The "price" of achieving cloud simplicity shouldn't be service complexity or operational instability. The evolution of platform APIs, and the abstraction and integration of functions, should keep your IT efficient and reduce your costs.

Developments at the application layer like Docker, Hadoop and NoSQL databases offer impressive flexibility and resilience, speeding up application development at scale. Application-level resilience, as in NoSQL, Hadoop, or nowadays even Microsoft SQL Server, calls for distributed pools of processing. But this simplicity across processing pools is not always matched by efficiency at the "inter-process communication" layer, the network that joins the pools together.

Interoute VDC – welcome to a platform engineered for the ambitious

The challenge of classic cloud services is that, unlike the applications they serve, the services seem stuck at an earlier stage of integration and evolution. Yes, the sophistication of virtual machine creation has changed beyond recognition, for the better. But classic cloud is still basically a massive scaling-up of the traditional construct of Internet-facing data centres: maximise the scale and generalise the offering to serve as broad a market as possible. Classic cloud is tied to a model that separates the massive pools of compute from the "dumb network" that joins the pools to each other, and joins the pools to users and end-users.

While you are abstracting at the application layer, the classic cloud infrastructure has to provide overlay networks on top of its dumb networks to make the whole thing function. Typically, people are stuck using firewalls to separate domains overlaid on the Internet, a practice that is slow, complex and expensive.

The Interoute approach has been different. To make distributed computing possible, we made the real network "smart". Then we eliminated the need for you to know anything about how it works: you simply trust that it will work for you, and you get it at zero cost.

The idea is simple. In classic cloud, all networking is implicitly public. You create overlay networks, using technologies like STT, GRE or IPsec, to build relationships between the different locations, and you use firewalls as your routing control.

To increase the simplicity of building relationships between processing pools, we architected the Interoute cloud to be connected. The classic data centre model is "public by default": processing is massively aggregated through a "gateway" switch infrastructure out to the open Internet. At Interoute, we built the hardware supporting our cloud into the network... literally. We build and allocate 5 to 10 TB of RAM per 20 to 40 Gbit/s of backbone capacity. The Interoute global backbone runs MPLS natively, which allows us to create a global private domain for any connected VLAN asset. Our ISP backbone is one MPLS domain globally. (For the technically minded, this means that we have built an entire global backbone according to RFC 2547, the BGP/MPLS VPN architecture.)

So what does that mean for our cloud users? We give you a global network of computing in which you can create virtual machines in any of 13 global "Virtual Data Centre" locations. The network is private by default, so you can create your own infrastructure on your own private backbone, with Internet breakout available in every location. This year we will add three more locations, making a total of 16 Virtual Data Centres by the end of 2015.

Make it easier… to use Docker containers

Unlike public cloud, Interoute's connected cloud network is private by default, low in latency and very high in capacity, and this makes building those new resilient applications simple. The idea is to expand the solution space for our customers, increase the scale, and keep it simple.

I realise this advantage will not be readily apparent to many application developers, so let me explain it for the case of Docker.

We recently experimented with Docker and the NoSQL database MongoDB. The idea was to see how fast we could deploy a global MongoDB cluster from start to finish, and how resilient we could make it, using as many zones as possible.

Docker has three wonderful attributes that make it really easy to build cloud applications. Firstly, it is lightweight: each (Linux) container is about 10% of the size of a typical Linux OS (300 to 500 MB instead of 5 GB plus), so creation times are short (fractions of a second). Secondly, containers are portable: they do not depend on the specifics of the underlying OS, so you build the application configuration you need once and simply instantiate it into containers anywhere. Thirdly, Docker inherits the network addressing of the underlying VM infrastructure where it is running.

The third point is the interesting one so far as what is possible with Interoute VDC is concerned. Docker can give you simplicity, efficiency and portability, but the networking can only be correspondingly simple and efficient when the underlying network infrastructure is simple and efficient.
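
To make that third point concrete, here is a minimal sketch (using the present-day Docker SDK for Python, not the commands from our demonstration) of launching a MongoDB container with host networking, so that mongod binds directly to the VM's own private address:

    # Minimal sketch: a container started with host networking has no
    # separate network namespace, so it "inherits" the VM's addressing.
    # (The image tag and replica set name are illustrative.)
    import docker

    client = docker.from_env()
    container = client.containers.run(
        "mongo",                    # official MongoDB image
        command="--replSet rs0",    # arguments passed through to mongod
        network_mode="host",        # share the VM's network stack
        detach=True,
    )
    print(container.short_id)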

For our demonstration, we created an infrastructure of 9 nodes spread across three zones (Los Angeles, London and Berlin), three nodes in each.

The Interoute Virtual Data Centre offers multiple ways to create your CoreOS platform, from simple point-and-click, to a custom template creator, to fully API-driven deployment.

The spin-up time for the 9-node MongoDB cluster (three nodes per zone) was an impressive 1 minute and 30 seconds (that covers deploying the three CoreOS VMs from scratch and then launching the MongoDB containers on top). But the real magic is apparent when you look at what you have got in network terms:

As far as the containers are concerned, they have local IP addressing ("RFC 1918", as the network admins would say). What happens below that in terms of routing, security and throughput is abstracted away; the containers don't know about it, and nor do you (unless you want to). And remember, this is a real, physical, private network, not an encapsulated and emulated overlay of the Internet.
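
To make that concrete, here is a sketch of how such a 9-node, three-zone replica set can be initiated over those private addresses with PyMongo. The RFC 1918 addresses and the one-subnet-per-zone layout are hypothetical examples, not the demonstration's actual values; note that MongoDB allows at most seven voting members, so the remaining members are added as non-voting.

    # Sketch: initiate a 9-node MongoDB replica set spanning three zones,
    # addressed entirely with private (RFC 1918) IPs. The addresses are
    # hypothetical; traffic stays on the private backbone, with no public
    # endpoints, tunnels or firewall rules required.
    from pymongo import MongoClient

    members = []
    for zone in (1, 2, 3):              # e.g. Los Angeles, London, Berlin
        for n in range(3):              # three mongod nodes per zone
            _id = len(members)
            member = {"_id": _id, "host": f"10.0.{zone}.{10 + n}:27017"}
            if _id >= 7:                # MongoDB caps voting members at 7
                member.update(votes=0, priority=0)
            members.append(member)

    client = MongoClient("10.0.1.10", 27017)  # any one of the new nodes
    client.admin.command("replSetInitiate", {"_id": "rs0", "members": members})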

To complete a realistic production environment, you would want to add Internet breakout at one or several zones. Those breakouts go directly into the Internet at the geographical location, which ensures lower latency for your users and end-users. And when you need your application to reach other geographical territories, you extend your private backbone to the nearest VDC zones simply by adding virtual machines in those locations. Your core application architecture has global reach built in, and can be scaled as and when it needs to be.

So with Interoute VDC you have got:

  • No IP addressing headaches

  • No routing to be set up by the customer

  • No complex security required: the networks are private

  • Optimal performance, thanks to connection via a backbone network with huge bandwidth and low latency

  • No charges for bandwidth

The "net(t) effect" is that your application development just got a whole lot simpler and the application development space a whole load broader.

One Integrated Digital Platform for any aspiration

At Interoute, we think we have created a superior alternative to classic cloud, due to a unique combination of network assets, integrated cloud computing services, and our approach to the market. Interoute's core network (owned and operated by us) provides a backbone that we sell to most of the major global cloud providers in Europe, and we serve enterprise customers across 125 countries. We have integrated into this global network a cloud platform, Interoute VDC, that allows you to build a global public cloud, to create a private cloud (on the fly), or to consolidate your legacy data centre infrastructures. Or all three. All with the familiar pay-as-you-go, instant access, instant spin up/down attributes of classic cloud.

Because we have built VDC into a global network, it is everywhere. Unlike the classic cloud providers, we don't put all of our compute capacity into northern Europe. Our data centres are in places classic cloud doesn't reach: Madrid, Milan, Berlin, Frankfurt, three locations in the UK, Paris, Amsterdam and Geneva. Globally, we operate in Los Angeles, New York and Hong Kong. With Singapore, Zurich and Chicago just around the corner, we give you a flexible cloud platform in more places, with better performance, than anyone.

When you feel ready to join the ranks of the largest companies in the world, Interoute has the scale and resources to cope. We don't do many things, but what we do (computing, network and communication) we do from the cloud all the way to the ground.