Glenn Chagnot - Spirent Communications
Last winter, when the “Big Three” hyperscale cloud providers announced fourth quarter earnings, they revealed a surprising trend. After years of nonstop growth, the rate of public cloud adoption was suddenly slowing. Despite seemingly every enterprise, public agency, and telecom on the planet pursuing a cloud strategy, customers were scaling back cloud investment. What was happening? Had businesses discovered that all those cloud benefits we keep hearing about were just myths?
No, cloud really does improve speed, flexibility, scalability, and more. In the long run, just about every organization will run at least some of their applications in cloud, whether public, private, or hybrid. The dip in growth rates was just an inevitable correction to a common misperception among hyperscale customers: that “cloud” means cost savings. Facing a slowing economy and a need to trim budgets, many businesses took a closer look at their cloud spending—and realized it was far higher than anticipated.
Among other benefits, cloud can provide cost savings, but that doesn’t happen automatically. Moving to cloud means relying on someone else’s infrastructure for fundamental aspects of how your applications behave. If you don’t have a clear understanding of how applications will interact with the cloud environment ahead of time—and how to best optimize them for this new world—you might not get the performance you expect. And you might end up paying much more than you need to.
Navigating Cloud Complexity
When hyperscalers tout cloud benefits like improved response times, efficient scaling, and getting closer to customers, those advantages are all real. The problem is that the actual performance and costs you’ll see after migrating have very little to do with anything the cloud provider is doing, and everything to do with how you write and deploy your applications. Too many organizations still assume they can simply hand applications over to a public cloud provider and they’ll just work. But that’s not the case.
Think of migration like building a house. Anyone can go to Home Depot and buy lumber, bricks, and roofing, but the same raw materials can produce vastly different homes. That’s also true in the cloud. Everyone uses the same basic building blocks, and hyperscalers can advise you to some extent, but at the end of the day, you are the builder. If you decide to put the kitchen in the basement and the only bathroom in the garage, you’ve still built a “house.” It has a roof. You won’t get rained on. But you probably won’t enjoy living there.
The importance of optimizing for cloud has only grown in recent years as the industry shifts to cloud-native architectures. Containers and microservices bring real improvements in flexibility and efficiency, but they also make software architectures far more complex (Figure 1). Even when businesses rewrite applications to be cloud-native—which is the correct strategy whenever possible—they often fail to ensure that they’re writing them to run efficiently. Add the wrinkle of hybrid deployments, such as on-premises applications that can burst into cloud, and things get even more byzantine. Your own elements now interact with cloud elements in highly complicated ways, making it hard to predict how applications will perform or what they’ll ultimately cost.
Figure 1. Evolution of Application Workloads
The only solution is to thoroughly test applications in new cloud environments before pushing ahead. But most organizations still don’t. Partly, that’s due to the continuous deployment mindset that many businesses have adopted. If you’re constantly pushing out updates, you don’t have to worry about how you deploy in the cloud, right? If something breaks, you’ll just fix it with the next update. But if you haven’t optimized for efficiency, continuous deployment won’t address that. Nothing is broken. Your users are fine. You’re just spending far more than necessary to keep them happy.
Competitive pressure to move faster doesn’t help either. It’s common, for example, for developers to write applications with very resource-heavy requirements just to get them working. But in the rush to market, they often skip the optimization step that’s supposed to come next. No one considers how much those excess resources actually cost until the bill comes much later, and often after the developer has already moved on to another project. But as more customers now realize, the meter starts running the moment you deploy. And the cost of inefficiency can get very high, very quickly.
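To make the "meter starts running" point concrete, here is a minimal sketch of the arithmetic. The instance sizes, hourly rates, and instance counts are hypothetical, not real pricing from any provider; the point is only that an over-provisioned footprint multiplies out quickly over a month.

```python
# Sketch: how over-provisioning turns into monthly spend.
# All instance sizes and hourly rates below are hypothetical.

HOURS_PER_MONTH = 730  # average hours in a month

# Hypothetical hourly rates for two instance sizes.
rates = {
    "8_vcpu_32gb": 0.40,  # what was requested "just to get it working"
    "2_vcpu_8gb": 0.10,   # what profiling shows the workload actually needs
}

def monthly_cost(rate_per_hour: float, instance_count: int) -> float:
    """Cost of running instance_count instances nonstop for a month."""
    return rate_per_hour * HOURS_PER_MONTH * instance_count

oversized = monthly_cost(rates["8_vcpu_32gb"], instance_count=10)
right_sized = monthly_cost(rates["2_vcpu_8gb"], instance_count=10)

print(f"Over-provisioned: ${oversized:,.2f}/month")
print(f"Right-sized:      ${right_sized:,.2f}/month")
print(f"Wasted spend:     ${oversized - right_sized:,.2f}/month")
```

Nothing here is "broken" in the functional sense, which is exactly why continuous deployment never flags it; the waste only shows up on the bill.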
Validation Before Migration
How can you avoid these missteps? The answer is comprehensive validation, so that you fully understand the performance, security, and resource consumption of every cloud workload in both peak and off-peak scenarios.
Cloud migrations get complicated, but the recipe for success is straightforward. Capture comprehensive profiles for all applications across compute, storage, networking, and more. Measure resource consumption, traffic volumes, latency, and other factors, so you can make informed choices about cloud options.
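The profiling step above can be sketched as follows. This is a minimal illustration, not a real measurement tool: the sample data, the assumed business-hours window, and the metric names are all hypothetical, and a real profile would draw on actual monitoring data across compute, storage, and networking.

```python
# Sketch: summarizing sampled workload metrics into peak/off-peak
# averages to inform cloud sizing. Data and thresholds are illustrative.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Sample:
    hour: int             # hour of day, 0-23
    cpu_pct: float        # CPU utilization
    mem_gb: float         # resident memory
    p95_latency_ms: float # request latency, 95th percentile

PEAK_HOURS = range(9, 18)  # assumed business hours; adjust per workload

def profile(samples):
    """Split samples into peak/off-peak and report average demand."""
    peak = [s for s in samples if s.hour in PEAK_HOURS]
    off = [s for s in samples if s.hour not in PEAK_HOURS]

    def summarize(group):
        return {
            "cpu_pct": round(mean(s.cpu_pct for s in group), 1),
            "mem_gb": round(mean(s.mem_gb for s in group), 1),
            "p95_latency_ms": round(mean(s.p95_latency_ms for s in group), 1),
        }

    return {"peak": summarize(peak), "off_peak": summarize(off)}

samples = [
    Sample(hour=10, cpu_pct=72.0, mem_gb=5.8, p95_latency_ms=140.0),
    Sample(hour=14, cpu_pct=68.0, mem_gb=6.2, p95_latency_ms=155.0),
    Sample(hour=2, cpu_pct=11.0, mem_gb=2.1, p95_latency_ms=60.0),
    Sample(hour=4, cpu_pct=9.0, mem_gb=1.9, p95_latency_ms=55.0),
]
print(profile(samples))
```

A profile like this is what lets you compare cloud options on evidence rather than guesswork: peak demand drives the instance size you need, and the peak/off-peak gap tells you how much scaling down (or bursting) is worth.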
The good news is, once you recognize the proper role of testing, you can avoid costly cloud missteps. You can actually move faster, because now you can push out cloud-native software with confidence, knowing it will work as intended at a price you expect.