20 January 2017
Managing Servers at Scale
A common proverb in cloud-era “devops” is that we should manage servers like cattle, rather than treat them as pets. The approach is to build systems that are made to be similar, deployed and configured automatically, and administered at scale through defined processes. No longer should pieces of hardware or virtual machines be “special”, thus reducing the complexity of future maintenance, upgrades or expansion.
This is all well and good for a large-scale cloud provider with thousands of identical servers, but what about smaller operators? At Faelix, we use exactly the same kinds of automation to manage our much smaller network of routers, switches and servers. This makes our job of keeping the infrastructure running far easier. Our CRM as a service is entirely managed in this way. But can we go further?
Many of our customers take a “part managed” or “fully managed” service from us: we look after the servers and — to some degree — the services running upon them. To that customer, their virtual machine is one of a kind: a “pet”. How do we square this uniqueness with an automated farm of servers?
We think moose make a great addition to the pets-vs-cattle analogy, and help to explain how we look after our customers’ managed servers. Moose are usually independent or even solitary creatures, but do come together in groups. Moose can be farmed for milk, but require special attention. They are almost as fast as horses and almost as agile as cats; they are curious, powered by renewable energy (plants and vegetables), and supplement their diet with salt and pond weed. And we think they are beautiful animals.
Introducing MOOSE
We’ve built something using SaltStack and Debian Linux to help us deploy and administer, at scale, the servers we provide for our part- and fully-managed hosting customers.
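As a rough illustration of the kind of thing this enables (the state below is a simplified, hypothetical sketch, not our actual configuration): one shared Salt state can describe what every managed web server has in common, while anything customer-specific is pulled from per-server pillar data.

    # webserver.sls -- a hypothetical shared state applied to every managed server
    nginx:
      pkg.installed: []        # make sure the package is present
      service.running:
        - enable: True         # start on boot
        - require:
          - pkg: nginx

    # customer-specific details come from pillar data, so the state itself stays generic
    /etc/nginx/sites-enabled/customer.conf:
      file.managed:
        - source: salt://webserver/files/vhost.conf.jinja
        - template: jinja
        - context:
            server_name: {{ pillar.get('vhost_domain', 'www.example.net') }}

Applying that state to one server or to a hundred is then the same operation: for example, running salt '*' state.apply webserver from the master.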