Pardon me if I seem a bit cynical, but "hybrid cloud" is one of those IT terms that makes a normal evolutionary measure sound like the cool, cutting-edge thing to do. Cloud vendors such as Amazon, Google and Microsoft would like you to shut down your in-house data center and move all your infrastructure to their cloud – a so-called "hyper-converged" data center strategy. (BTW, be wary of any IT jargon that starts with "hyper" – if the industry could find a term even more hyperbolic than "hyper," I'm sure it would use it.) When I was growing up, my mom bought food at the market, and later the supermarket; now I buy food at a hypermarket.
Shifting Infrastructure
The hyper-converged data center strategy is a good approach for companies that are just starting out and don't see any need to buy their own IT infrastructure in the first place. Services like iCloud, Dropbox, Amazon and countless other SaaS and web-based offerings were cloud-only from day one, but that's not how the rest of the world has evolved. Since the 1950s and the era of IBM mainframes, companies have designed and operated their own IT infrastructure – and moving to the cloud isn't a process that happens overnight. (It seems like all businesses are moving to the cloud, but are they really? Find out in How Much Are Companies Really Using Cloud?)
However, as much as Amazon, Microsoft and others want this to happen, many factors have kept companies from making the transition, including the cost of rewriting applications to run in a new environment and the simple "if it ain't broke, don't fix it" mentality. Accounting is another: when a company buys storage servers from EMC, IBM or whoever, it expects them to last five years or more and depreciates them over a similar period. Migrating that storage to the cloud would mean writing off equipment that is still on the books. Even if the cloud is cheaper on an ongoing basis, the hardware is a sunk cost, and it's hard to justify taking a one-time financial hit.
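To make the write-off problem concrete, here's a back-of-the-envelope sketch in Python. The purchase price, depreciation period and years in service are invented for illustration, and straight-line depreciation is assumed:

```python
# Back-of-the-envelope sketch of the write-off problem.
# All figures are made up; real prices and schedules vary widely.

purchase_price = 500_000       # on-prem storage array, dollars
depreciation_years = 5         # straight-line depreciation period
years_in_service = 2           # how long the array has been running

annual_depreciation = purchase_price / depreciation_years
book_value = purchase_price - years_in_service * annual_depreciation

print(f"Annual depreciation: ${annual_depreciation:,.0f}")
print(f"Book value after {years_in_service} years: ${book_value:,.0f}")
# Migrating now means recognizing that remaining $300,000 as a
# one-time loss, even if the cloud is cheaper month to month.
```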
So, inevitably, what happens is that workloads that still work fine stay on the existing systems, while new systems, or expansions of existing ones, move to the cloud. And voila, you have a hybrid cloud, with some processes on local storage and some in the cloud. For example, one of our customers is a Hollywood movie studio with warehouses full of LTO tapes and a large robotic tape library. Because that storage is slow, high-maintenance and astronomically expensive, they want to get away from the technology and eventually migrate everything to the cloud. Given the vast amount of data involved, however, they are keeping that system alive but not expanding it – sending everything new to the cloud and, by definition, operating a typical hybrid cloud model.
Why Hybrid?
Putting aside the economics and inertia driving many of these decisions, there are some good technical reasons to maintain a hybrid cloud. Some processes require specialized or highly tuned equipment to run properly. An example is video editing, where massive files have to be manipulated in real time. The bandwidth required between the storage and the processing is so high that there's no good way to make it work in the cloud.
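To put a rough number on that bandwidth, here's an illustrative calculation for a single uncompressed 4K stream. The resolution, bit depth and frame rate are my own assumptions, not figures from any particular workflow:

```python
# Rough illustration of the bandwidth one uncompressed 4K stream needs.
# Assumed format: 4K DCI, 3 color channels at 10 bits each, 24 fps.

width, height = 4096, 2160
bits_per_pixel = 3 * 10        # 3 channels x 10 bits
frames_per_second = 24

bits_per_second = width * height * bits_per_pixel * frames_per_second
print(f"One stream: {bits_per_second / 1e9:.1f} Gbit/s")  # ~6.4 Gbit/s

# An editor compositing several streams at once can saturate even a
# 10-40 Gbit/s link, which is why this work stays close to local storage.
```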
Another definition of "hybrid cloud" is spreading work among several cloud vendors. Amazon and the others do their best to lock you into their environment by offering dozens or hundreds of cloud-based services, such that everything you need comes from one vendor. The way these vendors price their products, however, is incredibly punitive. For example, Amazon's S3 cloud storage costs roughly 2.3 cents/GB/month, but if you want to retrieve that data over the internet, you can pay up to 9 cents/GB to take it out. Clearly, they want you to leave it there and do everything within Amazon's cloud.
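Using those rates, a quick sketch shows just how lopsided the pricing is; the 100 TB data set is hypothetical:

```python
# Quick sketch of the storage-vs-egress asymmetry at the rates above.

storage_rate = 0.023     # $/GB/month for S3 storage
egress_rate = 0.09       # $/GB to retrieve over the internet (worst case)

data_gb = 100_000        # hypothetical 100 TB data set

monthly_storage = data_gb * storage_rate
one_full_retrieval = data_gb * egress_rate

print(f"Storing 100 TB: ${monthly_storage:,.0f}/month")
print(f"Reading it all back out once: ${one_full_retrieval:,.0f}")
# Pulling the data out once costs nearly four months of storage fees,
# a strong nudge to keep everything inside one vendor's cloud.
```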
In the long run, I don't think such strategies work. IBM was famous for trying to lock customers into an expensive all-IBM environment. Then companies emerged that started to pick off parts of the IBM edifice. Notably, EMC started making IBM-compatible disk drives. Their pitch was simple: if you can unplug the IBM drive, plug in the EMC drive and have it work exactly the same, why pay 30% more for the IBM drive? Today, cloud vendors are popping up everywhere, picking off pieces of Amazon's fortress. Companies like Packet.net offer high-performance, low-cost compute in the cloud; Fastly and Limelight offer content delivery networks that are faster and cheaper than Amazon's; and Wasabi offers cloud storage that is one-fifth the price of, and six times faster than, Amazon's S3. (For more on hybrid IT, see Hybrid IT: What It Is and Why Your Enterprise Needs to Adopt It as a Strategy.)
So, customers are saying, "I'll run some of my processes in Amazon's cloud, but I'll use Limelight for content delivery, and I'll store all my data in Wasabi." That's the kind of hybrid cloud solution I predict will dominate over the coming decade. Customers are not going to put up with being locked into one vendor's ecosystem when there are better, faster and cheaper alternatives for various functions.
In short, the hybrid cloud is here to stay. But the term "hybrid cloud" is likely to fade, because the hybrid approach is such a natural part of the evolution of IT infrastructure. Every data center has a mix of hardware from Dell, Cisco, Juniper, NetApp and so on, and nobody bothers to call that a "hybrid hardware environment." So the question becomes: will we really need the term "hybrid cloud" once it is simply the norm?