The ideal hybrid cloud is supposed to be a seamless distributed architecture in which data and applications automatically find the perfect collection of resources in order to deliver optimal performance to the user.
The reality, of course, is quite different. Most enterprises are still wrestling with configuration and migration issues just to stand up a basic, working hybrid architecture, let alone the complex automation stack that supports dynamic workload balancing.
This may be part of the reason why public cloud adoption is growing so much faster than private cloud development. At the end of the day, it is just easier to port entire data environments to third-party infrastructure. (To learn about the different types of cloud services, see Public, Private and Hybrid Clouds: What’s the Difference?)
Challenging, but Not Impossible
But while the challenges to a successful hybrid architecture are significant, they are by no means insurmountable. And since most enterprises have a vested interest in keeping some of their data close to home, we can expect a sustained effort to improve the performance of hybrid architectures for some time to come.
One of the easiest ways to do this is through proper application management, says Apigee CTO Anant Jhingran. Shuttling data to and from geographically distributed data centers is bound to introduce delay no matter how advanced the network is. So simply choosing a cloud service provider that is close to the local data center goes a long way toward reducing latency.
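To put rough numbers on that, even something as simple as comparing connection round-trip times from the local data center to a few candidate regions can inform provider selection. The sketch below is illustrative only and uses just the Python standard library; the region names and hostnames are placeholders, not real provider endpoints.

```python
# Minimal sketch: compare round-trip latency from the local data center to
# candidate cloud regions. The endpoints below are placeholders, not real
# provider URLs; swap in the regions you are actually evaluating.
import socket
import time

CANDIDATE_REGIONS = {            # hypothetical provider endpoints
    "provider-a-us-east": ("a.us-east.example.com", 443),
    "provider-b-us-west": ("b.us-west.example.com", 443),
}

def tcp_rtt(host, port, samples=5):
    """Average time to open a TCP connection, a rough proxy for network latency."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=3):
                times.append(time.perf_counter() - start)
        except OSError:
            continue  # unreachable samples are simply skipped
    return sum(times) / len(times) if times else float("inf")

if __name__ == "__main__":
    for region, (host, port) in CANDIDATE_REGIONS.items():
        print(f"{region}: {tcp_rtt(host, port) * 1000:.1f} ms")
```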
But there are other ways to speed things up as well. A common problem is the habit of running backend applications and their related application programming interfaces (APIs) in the enterprise data center while pushing management and analytics services to the cloud. While less costly than hosting everything in-house, this tends to slow things down as API calls continuously shuttle back and forth between the two environments. A smarter approach is to employ a lightweight, federated gateway that keeps the API runtime in the data center while asynchronously pushing analytics data to the provider. (For more on the cloud, see Grounding the Cloud: What You Need to Know About Cloud Services.)
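For illustration, here is a minimal sketch of that pattern in Python: API requests are handled entirely in the local data center, while analytics records are queued and shipped to a cloud collector in the background so the request path never waits on the WAN. The collector URL and payload fields are assumptions for the example, not part of any particular gateway product.

```python
# Minimal sketch of the "federated gateway" idea: serve API traffic locally and
# push analytics records to a cloud collector asynchronously. The collector URL
# and payload shape are hypothetical placeholders.
import json
import queue
import threading
import urllib.request

ANALYTICS_COLLECTOR = "https://analytics.example-cloud.com/ingest"  # assumed endpoint
_events = queue.Queue()

def _ship_analytics():
    """Background worker: drain the queue and POST events to the cloud collector."""
    while True:
        event = _events.get()
        try:
            req = urllib.request.Request(
                ANALYTICS_COLLECTOR,
                data=json.dumps(event).encode(),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req, timeout=5)
        except OSError:
            pass  # in production you would retry or buffer to disk
        finally:
            _events.task_done()

threading.Thread(target=_ship_analytics, daemon=True).start()

def handle_api_request(path, payload):
    """Runs entirely in the local data center; only a non-blocking enqueue touches the cloud."""
    result = {"path": path, "status": "ok"}               # stand-in for the real backend call
    _events.put({"path": path, "bytes_in": len(payload)})  # analytics shipped asynchronously
    return result
```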
Probably the biggest drag on hybrid cloud performance, however, is the application itself. The fact is, legacy applications are simply not designed for the loosely coupled, virtual infrastructure that supports the cloud. This is why firms like Docker, Cisco, HPE and Microsoft have teamed up to help the enterprise update its app portfolio for the cloud. The Modernize Traditional Applications (MTA) program features application management tools, hybrid infrastructure and professional services designed to help organizations containerize their apps and deploy them to the cloud under Docker's Enterprise Edition software. Most importantly, it does not require modification of source code, giving IT teams portability, security and other benefits regardless of whether they run Windows or Linux apps.
The Right App for the Right Architecture
More than likely, however, certain processes will do well in the cloud and others will not, so the enterprise will have to go through a fair amount of trial and error to arrive at the optimal data ecosystem. The best way to accomplish this is through a methodical, data-driven process that weighs the various options available in a hybrid architecture against business requirements and their associated applications, says CiRBA CTO Andrew Hillier. Key questions to ask include the following (a rough placement-scoring sketch appears after the list):
- What principles should govern cloud usage?
- What criteria should determine where workloads are deployed and apps are hosted?
- What resources and cloud instances are needed?
- How will control be maintained for cloud-facing apps?
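As a rough illustration of what a data-driven placement process might look like, the Python sketch below scores two hypothetical hosting targets against weighted criteria drawn from the questions above. The criteria, weights and scores are invented for the example; a real assessment would be fed by monitoring, cost and compliance data.

```python
# Illustrative sketch only: score candidate hosting targets for a workload
# against weighted business criteria. All values here are hypothetical.
WEIGHTS = {"latency_sensitivity": 0.3, "data_residency": 0.3,
           "burstiness": 0.2, "cost_pressure": 0.2}

# 0-1 scores for how well each target satisfies each criterion (assumed values)
TARGETS = {
    "on_prem":      {"latency_sensitivity": 0.9, "data_residency": 1.0,
                     "burstiness": 0.3, "cost_pressure": 0.4},
    "public_cloud": {"latency_sensitivity": 0.5, "data_residency": 0.6,
                     "burstiness": 0.9, "cost_pressure": 0.8},
}

def place(workload_profile):
    """Return the target with the highest weighted fit for this workload."""
    def fit(target_scores):
        return sum(WEIGHTS[c] * target_scores[c] * workload_profile.get(c, 1.0)
                   for c in WEIGHTS)
    return max(TARGETS, key=lambda t: fit(TARGETS[t]))

# Example: a latency-sensitive, regulated workload lands on premises
print(place({"latency_sensitivity": 1.0, "data_residency": 1.0,
             "burstiness": 0.2, "cost_pressure": 0.3}))
```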
Expect this to be an ongoing transformation, driven largely by the continuous integration/continuous deployment (CI/CD) practices of emerging DevOps-oriented IT management models.
Hybrid cloud performance is also poised to benefit from the myriad technological advances that are affecting data ecosystems on a broader scale. As Chris Sharp, CTO of Digital Realty, recently noted to Data Center Frontier, technologies like artificial intelligence, neural networks and fine-grained data collection and analysis are bringing a wealth of capabilities to the cloud that far exceed anything found in traditional infrastructure.
A key development has been the rise of dedicated private networks, both physical and virtual, for cloud access. In the early days of the cloud, the primary applications were bulk storage and compute services delivered over the public internet. Not only was this slow and unreliable, but it introduced a new attack vector to sensitive data and resources. With a private network, organizations maintain stable connectivity to one or more providers, coupled with dynamic scaling and flexible rate scheduling to better match network resource consumption with fluctuating workloads.
It is understandable that, following the hype of the hybrid cloud’s early years, there would be a backlash against the technology now that the reality of deploying actual production environments is setting in. But as the dreams of hybrid nirvana start to fade, organizations can now get to the real work of deploying the infrastructure and optimizing performance.
And while it’s tempting to measure that performance against today’s suite of legacy applications, the hybrid cloud will in fact produce a data environment that is unique to itself, with services and operational characteristics that cannot be duplicated in either traditional infrastructure or pure public or private clouds.
In the end, the greatest performance benefit of all will be the ability to craft a data ecosystem that allows the enterprise to provide services that no one else on the planet can duplicate.