Technology

Jun 28, 2022

Making Sense of Sustainability: How Green Is Your Cloud?

John Davies

We’re exploring how leveraging the benefits of the cloud can help organizations work toward sustainability goals.

Headline Takeaways:

The “digital revolution” has rewritten the IT landscape via Web 1.0 and, more significantly, Web 2.0, leading to “data becoming the new oil of the digital economy.”

Hyperscale cloud data centers are significantly more energy efficient than conventional server rooms, on-premises data centers, and co-location centers. The “pay as you go” consumption model incentivizes organizations to right-size workloads and drive further efficiencies via autoscaling and powering off systems when not in use.

The move to exit conventional server rooms and data centers by migrating workloads to the cloud is having a positive effect on both the environment and organizations’ bottom lines.

Modernizing applications from infrastructure as a service (IaaS) to platform and software as a service (PaaS and SaaS) offerings further increases these positive effects. Advancements in serverless PaaS offerings help organizations manage IPv4 addresses, which can be a scarce commodity for large multinationals, while also driving down costs and, by extension, consumption and carbon footprint.

Listen next: Podcast: Should cloud be a part of your green strategy?

How Has the “Digital Revolution” Led to a Sea Change in the Creation and Consumption of Data?

The "digital revolution" has driven exponential growth in the demand for communications, compute, and storage. Web 2.0 has created an unprecedented uptick in consumption, with the advent of always connected smart phones featuring social media and social networking driving the big data explosion. This demand will only increase further as more devices (wearables, self-driving cars, industrial connected devices, smart home appliances, etc.) are connected to the Internet of Things (IoT) for telemetry, data harvesting, and ‘over the air’ updates.

In order to store, process, and derive actionable insights from the vast amounts of data generated, businesses require data analytics and business intelligence (BI) platforms, data scientists, and data engineers.

The overwhelming majority of our clients, including Fortune 500 and FTSE 100 & 250 companies, are creating these data analytics and BI platforms in the cloud.

When the cloud was in its infancy, there were concerns that energy consumption and fossil fuel pollution would run unchecked due to the forecast growth in demand. These concerns were based on the architecture and implementation of data centers at that time. Successful campaigning by Greenpeace brought the issue to consumers’ attention and resulted in increased support for the Open Compute Project, a collaborative community driving hardware efficiencies to support the growing demands. Microsoft and Google joined in 2014 and 2015 respectively.

In order to host the massive-scale compute and storage required to support Web 2.0, a new breed of data center was needed: one that could support exponentially higher throughput, compute, and storage, as well as lower latency requirements.

How Have Data Centers Evolved to Cope With Exponential Demand While Being Sustainable?

Hyperscale data center facilities are significantly larger than traditional data centers. As of 2021, there were over 600 hyperscale facilities worldwide, with Microsoft, Amazon, and Google accounting for more than half of them.

The major cloud providers have published roadmaps for 100% renewable energy usage, further reducing their carbon footprints by sourcing clean energy from wind, solar, and hydroelectric power.

Data center (DC) efficiency is measured in power usage effectiveness (PUE): the ratio of the total electricity consumed by a DC to the electricity delivered to its IT equipment. PUE is always equal to or greater than 1; the closer to 1, the more efficient the DC.
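
To make the ratio concrete, here is a minimal back-of-the-envelope calculation; the kWh figures are purely illustrative, not measurements from any real facility.

```python
# Back-of-the-envelope PUE calculation (illustrative figures only).
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / energy delivered to IT equipment."""
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,570 kWh to deliver 1,000 kWh to its servers has a
# PUE of 1.57: 0.57 kWh of overhead (cooling, power distribution, lighting)
# for every 1 kWh of useful IT work.
print(pue(1_570, 1_000))  # 1.57
```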

The average traditional DC PUE in 2007 was 2.50; advances in modern hyperscale DCs reduced this to a new average low of 1.57 in 2021. This has been achieved through the redesign of DC architecture and technological advancements in cooling and water stewardship. The operating temperature of DCs has also increased from 13 C/55 F in the 1980s to 26 C/80 F in the 2020s, due in part to advancements in CPU thermal efficiency and chip density.

While PUE has remained fairly static over the past five years, different approaches are being employed to gain further efficiencies. An example of this was Google using machine learning in data center operations from 2014, resulting in a 40% reduction in cooling energy consumption. Google has continued innovating to reach a record low of 1.10 PUE across its DC fleet in 2021, and is targeting 1.06. Over the same period, the amount of computing performed in data centers increased by 550%.

A 550% increase in computing usage alongside a more than twofold improvement in PUE, allied with an increase in renewable energy usage, shows that the cloud can indeed have a green lining!

How Can Organizations Take Advantage of This “Green Lining”?

Moving to the cloud requires a mindset shift from operating on-premises workloads. Organizations have historically over-specified on-premises hardware to either:

  • Use the budget or lose it.

  • Cope with growth in demand while “sweating the assets” over a three- or five-year period.

Organizations that embrace the cloud can remove hardware maintenance and replacement costs, in addition to the productivity costs incurred from over-sweating assets. There is no longer a need to purchase all hardware in one go and carve it up for programs and projects, which allows agile programs to benefit from “just in time” deployments.

How Can an Organization With On-Premises Workloads Start On This Journey?

By embracing a “cloud-first” strategy and rehosting existing workloads into the cloud, organizations can reduce both the carbon footprint of these applications and the associated hosting and support costs.

The move from capital expenditure to operating expenditure and the advances in hyperscale DCs allow organizations to adopt an agile approach to what was previously a long-term investment with lengthy lead and depreciation times.

What Other Benefits Can an Organization See When Starting Their Cloud Journey?

The “pay as you go” model offered by the cloud allows resources to be billed for the amount of time they are used, often by the minute. This consumption-based model allows organizations to flex up and down when required without investing in new hardware in their own DCs, and removes lengthy lead times. Cost- and climate-conscious organizations can use this model to their advantage:

  1. Right-sizing compute offerings allows the organization to hit the sweet spot based on workload demand, optimizing the cost and power requirements.

  2. Powering off machines when not in use pauses the compute costs and, by extension, the resources used in the DC to run the workload (see the sketch below).

Both of these approaches allow an organization to reduce costs and energy consumption while achieving a faster time to market.
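
As a concrete illustration of the second point, a scheduled job can stop tagged non-production instances out of hours. The sketch below uses AWS and the boto3 SDK purely as an example; the “AutoStop” tag, the region, and the schedule that would trigger the job are assumptions, and the other major clouds offer equivalent APIs.

```python
# Minimal sketch: stop tagged non-production EC2 instances out of hours.
# Assumes AWS credentials are configured and that instances carry an
# "AutoStop=true" tag (a naming convention chosen for this example).
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

def stop_idle_instances() -> None:
    # Find running instances that opted in to automatic shutdown.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:AutoStop", "Values": ["true"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    instance_ids = [
        i["InstanceId"] for r in reservations for i in r["Instances"]
    ]
    if instance_ids:
        # Stopping (not terminating) pauses compute billing and frees
        # capacity in the data center until the instances are restarted.
        ec2.stop_instances(InstanceIds=instance_ids)

if __name__ == "__main__":
    stop_idle_instances()
```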

How Can Organizations Adapt as Their Cloud Adoption Matures?

As organizations mature their use of the cloud through application modernization, we see infrastructure as a service (IaaS) being replaced with PaaS and SaaS consumption. Refactoring workloads using PaaS makes it possible to be more economical through autoscaling and serverless offerings, while removing some of the administrative burden that accompanies IaaS, such as application upgrades and patching cycles.

Serverless PaaS offerings allow organizations to deploy workloads without the need for infrastructure or software configuration. The cloud provider manages the serverless service’s availability, scaling, and bandwidth. This allows highly scalable microservices or event-driven architectures to be created with minimal administrative overhead, while also lowering consumption and carbon footprint.
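
As a minimal sketch of what “serverless” means in practice, the handler below follows the AWS Lambda Python convention, where the provider invokes the function only when an event arrives; the function name and event shape here are illustrative assumptions rather than anything prescribed.

```python
# Minimal serverless handler sketch (AWS Lambda-style Python runtime).
# There are no servers to size, patch, or power off: the provider runs
# this code only when an event arrives and scales capacity automatically.
import json

def lambda_handler(event, context):
    # 'event' carries the trigger payload (e.g. an HTTP request body);
    # its exact shape depends on the configured event source.
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```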

AWS, Google, and Microsoft have released tools allowing customers to calculate the carbon footprint of running their services in the public cloud, enabling organizations to track their cloud usage against sustainability goals and empowering them to make climate-conscious hosting decisions.

Key Takeaways

When climate concerns were publicly highlighted four years after Facebook went live, the industry reacted by embracing efficient design across the board. Over the following decade, hyperscale DC efficiency improved significantly and is edging ever closer to a PUE of 1, while compute output and the use of renewable energy have increased year on year. Organizations that are still servicing a sizeable fleet of servers in on-premises DCs and server rooms could see substantial savings in investment and running costs, as well as a significantly reduced carbon footprint, by migrating workloads, exiting inefficient hosting locations, and embracing a cloud-native strategy.

Need a Guide for Your Cloud Transformation Journey?

Credera is passionate about helping organizations foster cloud enablement that drives successful cloud adoption and valuable business outcomes. Our unique expertise in corporate strategy, innovation, and application development enables us to bring a holistic approach to your cloud adoption and transformation journey.

Explore Credera’s Cloud Transformation Framework to learn more, or reach out to us at findoutmore@credera.com if you’re interested in starting a conversation.
