Technology•Aug 13, 2020
Cloud Adoption Pitfalls to Avoid Part 4: Over-Provisioning, Data Sprawl, Egress Charges, and Runaway Costs
Moving to the cloud can be a challenge for any organization, especially if you don’t pay attention to the pitfalls. Some of the challenges in cloud adoption include resource over-provisioning, data sprawl, egress charges, and runaway costs. We’ll show you how to avoid these hazards and make the cloud work for you. This is the fourth of a five-part series focused on cloud adoption and the common pitfalls organizations experience on this journey. In part one, we discussed as-is lift-and-shift to the cloud and how it can bloat costs. In part two, we discussed how designing around the first workload and the ease of click-to-provision can lead to painful consequences. In part three, we discussed how to manage complexity, application dependencies, and competing priorities to streamline the cloud adoption process.
When managing on-premises infrastructure, organizations are often required to purchase more hardware than is necessary for everyday operations. Over-provisioning can be a necessary evil in order to safeguard against sudden peaks in demand on servers and applications and to ensure there is capacity for future projects. Bringing that same mentality to the cloud, however, often unnecessarily inflates costs and leaves administrators, IT directors, and CIOs scrambling for ways to bring down their cloud bill. We’ve outlined four ways to avoid over-provisioning below:
1. monitor utilization
Whether you are planning on migrating to the cloud or currently in the cloud, it’s a good idea to monitor your current utilization so you can size or resize appropriately. The goal for most organizations is to run lean and lower the recurring costs to only the resources needed. Start by performing a review of your application design and application inventory. From there, identify any unnecessary services or services consuming more resources than they need.
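As a rough illustration of this review, the sketch below flags resources whose observed peak utilization never approaches capacity. The threshold, resource names, and metric format are hypothetical illustrations, not provider recommendations.

```python
# Hypothetical rightsizing sketch: flag resources whose peak CPU
# utilization stays well below capacity. The 40% threshold is
# illustrative, not a provider recommendation.

def rightsizing_candidates(metrics, peak_threshold=40.0):
    """metrics: {resource_name: [peak CPU % samples]}.
    Returns names whose highest observed peak is below the threshold."""
    candidates = []
    for name, samples in metrics.items():
        if samples and max(samples) < peak_threshold:
            candidates.append(name)
    return candidates

usage = {
    "web-01": [22.5, 31.0, 18.4],   # never busy -> downsizing candidate
    "db-01":  [55.0, 88.2, 61.7],   # regularly busy -> leave as-is
}
print(rightsizing_candidates(usage))  # ['web-01']
```

In practice the samples would come from your monitoring tooling over a representative window (including seasonal peaks), but the decision logic is this simple: find what never gets used, then resize it.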
2. validate and test sizing
Once you’ve determined target resource sizing, be sure to perform adequate testing and validation to ensure applications are running optimally. AWS and Azure offer many utilities to help you gain additional insight into your cloud utilization. Both Amazon CloudWatch and Azure Monitor can collect performance metrics and log data from on-premises and cloud resources.
3. leverage auto-scaling features
One of the primary features of the cloud is its incredible elasticity. Both AWS and Azure offer autoscaling capabilities that allow you to monitor your applications and automatically adjust capacity when load increases beyond the thresholds you define. This can mitigate the need for over-provisioning resources to handle peaks in load. You only have to pay for the additional resources when you need them, lowering your overall cloud bill.
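The scaling behavior described above boils down to a simple decision rule, sketched below in miniature. The thresholds and capacity bounds are hypothetical, and real autoscaling services (AWS Auto Scaling, Azure Virtual Machine Scale Sets) add cooldowns, health checks, and richer scaling policies beyond this.

```python
# Illustrative threshold-based scaling decision. Thresholds and
# instance-count bounds are hypothetical examples.

def desired_capacity(current, avg_cpu, scale_out_at=70.0, scale_in_at=30.0,
                     minimum=2, maximum=10):
    """Return the instance count the group should move toward."""
    if avg_cpu > scale_out_at:
        return min(current + 1, maximum)   # load spike: add capacity
    if avg_cpu < scale_in_at:
        return max(current - 1, minimum)   # idle: shed capacity, save cost
    return current                         # within band: hold steady

print(desired_capacity(3, 85.0))  # 4  (busy -> scale out)
print(desired_capacity(3, 12.0))  # 2  (idle -> scale in)
```

The key cost point is the scale-in branch: because capacity shrinks when load drops, you never pay for peak capacity around the clock.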
4. leverage reserved instances for compute
Reserved instances are recommended for organizations with steady or continuous workloads and can help organizations save up to 70-80% compared to pay-as-you-go pricing. These savings come with a catch, however: a reservation is an upfront commitment to a cloud service, and unused reserved capacity is a “use it or lose it” expense. Before committing to reserved instances, it is imperative to have insight into your compute workloads. Tools like Azure Advisor and AWS Cost Explorer can help by providing recommendations based on your usage history, but ultimately only you can determine whether those recommendations are accurate.
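The “use it or lose it” trade-off can be checked with simple arithmetic: a reservation bills for every hour whether you use it or not, so it only pays off above a break-even utilization level. The hourly rates below are hypothetical placeholders, not quoted provider prices.

```python
# Hedged sketch: find the utilization level at which a reservation
# becomes cheaper than on-demand. Rates are hypothetical placeholders.

HOURS_PER_MONTH = 730  # common billing approximation

def monthly_cost_on_demand(rate_per_hour, hours_used):
    return rate_per_hour * hours_used

def monthly_cost_reserved(effective_rate_per_hour):
    # Reserved capacity bills for every hour, used or not.
    return effective_rate_per_hour * HOURS_PER_MONTH

def break_even_hours(on_demand_rate, reserved_rate):
    """Hours/month above which the reservation is the cheaper option."""
    return monthly_cost_reserved(reserved_rate) / on_demand_rate

# Example: $0.10/hr on demand vs. a $0.04/hr effective reserved rate.
print(round(break_even_hours(0.10, 0.04)))  # 292 hours (~40% utilization)
```

In this hypothetical, a workload running more than about 40% of the month is cheaper reserved; one running less than that wastes the commitment. Run the same arithmetic with your provider’s actual rates and your measured utilization.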
Data sprawl refers to the copious amounts of data generated and stored in myriad locations during the course of business. Data sprawl is by no means a problem specific to the cloud; however, the ease of resource provisioning and the almost infinite scale of cloud compute and storage solutions can exacerbate the problem. The constantly growing volume of data and number of data sources bring additional risks and management challenges. We’ve explored three ways to mitigate data sprawl:
1. understand and classify your data
The first step to managing data sprawl is understanding the various data stores, the groups responsible for them, and the relationships between them. This is often a moving target and requires a lot of effort and coordination across multiple business units. It’s worth the effort, because it will provide you with a high-level view of your data so you can map out the necessary data classifications, retention policies, and security policies to ensure you have control over your data.
2. secure data
Securing data starts with applying the appropriate access controls with tools such as role-based access control (RBAC), identity and access management (IAM) policies, and firewall policies. Encryption at rest is also critical to ensure your organization can meet its compliance requirements. All the major cloud providers offer built-in tools to make encryption at rest easy to achieve. Data security also means ensuring critical business data is not lost. Data stored in the cloud is not as easily lost due to hardware failure, but due diligence is still required. Data corruption, accidental deletion, or provider outages still present risks. Consider implementing safety features such as soft-delete or versioning for blob storage, cross-region replication or backups, and automatic backup tools.
3. leverage provider data governance tools
While the cloud can exacerbate data sprawl, it also provides tools that can help manage it. Familiarizing yourself with the toolsets available will help you establish proper data governance. Services like Amazon Macie can help you detect potentially sensitive data in S3 buckets. Amazon Kendra can take vast amounts of documents across a variety of sources and make them easily searchable. Governance tools like Azure Policy and AWS Config can help detect and remediate misconfigurations that could lead to data leaks. Azure Information Protection allows you to classify and protect documents both in the cloud and on-premises file shares.
While all these tools can be valuable assets in managing your data, they are no substitute for building a comprehensive understanding of your data sources, flows, and classifications. Machine learning-driven analysis cannot catch all sensitive data.
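To make the idea concrete, the sketch below shows the kind of pattern matching that services like Amazon Macie automate at far greater scale and sophistication. The patterns here are simplified illustrations only, not production-grade detectors, which is precisely why tooling alone cannot replace understanding your data.

```python
# Conceptual sketch of sensitive-data pattern scanning. These regexes
# are simplified illustrations and will miss real-world variations.

import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text):
    """Return the set of sensitive-data labels detected in text."""
    return {label for label, pattern in PATTERNS.items()
            if pattern.search(text)}

print(sorted(classify("Contact jane@example.com, SSN 123-45-6789")))
# ['email', 'us_ssn']
```

A managed service applies far broader pattern libraries and machine learning across entire storage accounts, but the limitation is the same: patterns only catch what they anticipate, so classification policy still starts with people who know the data.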
Cloud providers make it very easy and cost effective to get your data into their datacenters by making data ingress free. Data transfer out is another story. All the major cloud providers charge on a per-GB basis for data transferred out of their datacenters. Critically, this also usually applies to data moving between regions (inter-region) or availability zones within the same region (intra-region). These charges are often overlooked when evaluating cloud total cost of ownership (TCO). This can come as quite the shock if you have high data transfer needs, whether internally or externally.
Consider these two tactics to avoid unexpected egress charges:
1. plan and estimate
Bandwidth charges are never as simple as a single number. Before anything else, be sure you understand the pricing structure of data transfer charges by your cloud provider(s). Here are the key questions you should be sure you answer and understand:
How much does outbound internet traffic cost in the region(s) you are using?
Is outbound internet traffic priced uniformly or differentiated by the source?
How much does intra-region traffic cost in the region(s) you are using?
Are there exclusions or discounts for traffic to/from other services within the same cloud provider? As an example, traffic from EC2 instances to S3 storage in the same region is free on AWS.
Once you understand the costs and pricing structure of network transfer, you’re in a good position to begin estimating your data transfer charges. This is not always an easy exercise and will be unique to your workloads. Some key points to keep in mind:
Understand the data flow of your applications and identify the points of highest traffic.
Leverage on-premises network monitoring tools where available.
Pilot your workload with production-like traffic and analyze the spending patterns (this is especially critical if you believe you will have high traffic patterns).
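Once you have the published rates, the estimate itself is straightforward tiered arithmetic. The tiers and per-GB prices below are hypothetical placeholders; substitute your provider’s published rates for your regions.

```python
# Hedged estimate of monthly internet egress charges under tiered
# per-GB pricing. Tier sizes and rates are hypothetical placeholders.

TIERS = [  # (GB in this tier, price per GB)
    (10_240, 0.09),        # first ~10 TB
    (40_960, 0.085),       # next ~40 TB
    (float("inf"), 0.07),  # everything beyond
]

def estimate_egress_cost(gb_out):
    """Walk the tiers, billing each slice of traffic at its tier rate."""
    cost, remaining = 0.0, gb_out
    for tier_size, rate in TIERS:
        billed = min(remaining, tier_size)
        cost += billed * rate
        remaining -= billed
        if remaining <= 0:
            break
    return cost

print(f"${estimate_egress_cost(15_000):,.2f}/month for 15 TB out")
```

At these placeholder rates, 15 TB out per month is roughly $1,326: real money that is easy to miss in a TCO model built only around compute and storage.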
2. monitor and optimize
Cloud egress charges are not something you estimate once and then never think about again. New applications or changes to data flow patterns can cause costs to creep up. It’s vital to continue to monitor data traffic charges after you’re in the cloud so you can understand which resources and applications are consuming the most bandwidth and how you might optimize accordingly. The best way to do this is through resource tagging. Tag-based cost allocation is not retroactive and only applies to costs after the tags are applied, so building a useful tagging and enforcement strategy early is critical. Once you understand where your network charges lie, you can optimize accordingly, whether through colocating compute and storage, reducing payload sizes, or adjusting traffic patterns.
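As a sketch of tag-based cost allocation, the snippet below groups billing line items by a cost center tag. The tag key and records are illustrative, not a real provider billing format.

```python
# Sketch of tag-based cost allocation: group line items by a cost
# center tag. The tag key and records are illustrative examples.

from collections import defaultdict

def costs_by_tag(line_items, tag_key="cost-center", untagged="(untagged)"):
    """Sum item costs per tag value; untagged spend gets its own bucket."""
    totals = defaultdict(float)
    for item in line_items:
        bucket = item.get("tags", {}).get(tag_key, untagged)
        totals[bucket] += item["cost"]
    return dict(totals)

bill = [
    {"cost": 120.0, "tags": {"cost-center": "marketing"}},
    {"cost": 340.0, "tags": {"cost-center": "data-eng"}},
    {"cost": 55.0,  "tags": {}},  # untagged spend
]
print(costs_by_tag(bill))
```

Note how untagged spend lands in its own bucket: that unattributable remainder is exactly the visibility gap an early tagging and enforcement strategy prevents from growing.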
Cloud providers advertise the benefits of migrating to the cloud, including lower costs, flexibility, and no upfront hardware costs. Gone are the days when you had to purchase newer, faster hardware when migrating to a new data center. Cloud technology allows organizations to seamlessly spin up new VMs anywhere in the world, scale up CPU and memory, and store many terabytes or even petabytes of data if desired. While these capabilities provide outstanding scalability options for organizational growth, inadvertent charges can quickly stack up without proper governance. Take these two steps to prevent the frustration of runaway costs:
1. develop an overall cost governance strategy
The most effective way to help reduce runaway costs is to incorporate a cloud governance strategy. Establishing governing processes can allow organizations to manage their growth and avoid runaway costs. First and foremost, it is critical to follow a change management procedure to require a business justification for any new cloud resource requests. Once change management is in place, another key component of cloud governance is to develop a tagging strategy. Establishing and enforcing a tagging strategy allows better visibility into where costs are being incurred.
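A tagging strategy is only as good as its enforcement. The sketch below illustrates the kind of pre-provisioning check a policy tool or review process might apply; the required tag keys are hypothetical examples of what a strategy could mandate.

```python
# Illustrative tag-policy check: flag resources missing required tags
# before provisioning. The required keys are hypothetical examples.

REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def missing_tags(resource_tags):
    """Return the required tag keys absent from a resource's tags."""
    return sorted(REQUIRED_TAGS - set(resource_tags))

print(missing_tags({"owner": "jsmith"}))  # ['cost-center', 'environment']
```

Provider-native policy services (such as Azure Policy or AWS Config rules) can apply this style of check automatically, denying or flagging resources that would otherwise create unattributable spend.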
2. monitor costs
It is essential to provide IT staff members with visibility and guidance around the costs associated with cloud resources. Giving team members these resources can encourage them to keep costs down. Develop monitoring and reporting on cloud costs with tools such as Azure Cost Management, AWS Cost Explorer, AWS Budgets, or third-party tools. Cloud cost management software helps organizations oversee cloud expenditures by monitoring resource usage and providing alerts if costs exceed expected bounds.
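At its core, the alerting these tools provide reduces to comparing spend against budget thresholds, as the minimal sketch below illustrates. The threshold percentages are illustrative; services like AWS Budgets let you configure your own.

```python
# Minimal budget-alert sketch, analogous in spirit to AWS Budgets or
# Azure Cost Management alerts. Thresholds are illustrative.

def budget_alerts(spend_to_date, monthly_budget, thresholds=(0.5, 0.8, 1.0)):
    """Return the alert thresholds (as fractions) the spend has crossed."""
    ratio = spend_to_date / monthly_budget
    return [t for t in thresholds if ratio >= t]

print(budget_alerts(850.0, 1000.0))  # [0.5, 0.8] -> 85% of budget used
```

The managed services add the delivery mechanisms (email, chat, automated actions) and can also forecast whether projected spend will cross a threshold before it actually does.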
avoid these pitfalls
Over-provisioning, data sprawl, egress charges, and runaway costs are all potentially costly traps when adopting cloud technologies. Whatever stage you are at in your cloud adoption journey, careful attention to these pitfalls is important for your success. With proper planning and a solid governance strategy, your organization can avoid shocking charges, or worse, serious security issues.
Do you need help planning your cloud adoption strategy or implementing cloud governance in your existing cloud environment? Credera has experience helping organizations across a variety of industries migrate, secure, and modernize their cloud workloads. Contact us to find out more.