In this section, you will learn to describe the advantages of cloud computing architecture. I will describe the benefits of trading capital expense for variable expense, cloud economics, capacity planning, increased agility, improved focus, and leveraging global resources in comparison to traditional architecture, and will review and define HA and DR.
Cloud computing offers many advantages in comparison to traditional on-premises data centers. Let us review some of the key advantages.
Trade capital expense for variable expense
Organizations generally consider moving their workloads to the cloud because of the expense advantages. Instead of investing in data centers and servers before knowing how they will be used, organizations pay only when they consume cloud computing resources, and only for how much they consume. This expense advantage allows any industry to get up and running rapidly while paying only for what is utilized.
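The capital-versus-variable trade-off can be sketched as a simple cost comparison. The prices below are hypothetical placeholders chosen for illustration only, not any provider's actual rates.

```python
# Illustrative only: hypothetical prices, not a real provider's rates.
UPFRONT_SERVER_COST = 12_000.00   # capital expense: buy hardware outright
ON_DEMAND_RATE = 0.10             # variable expense: assumed cost per instance-hour

def on_demand_cost(hours_used: float) -> float:
    """Pay only for the hours actually consumed."""
    return hours_used * ON_DEMAND_RATE

# A workload that runs 8 hours a day for 90 days:
hours = 8 * 90
print(f"Variable expense for {hours} hours: ${on_demand_cost(hours):,.2f}")
print(f"Capital expense (paid regardless of usage): ${UPFRONT_SERVER_COST:,.2f}")
```

The point of the sketch is that the variable expense tracks actual consumption, while the capital expense is paid up front whether or not the hardware is ever fully utilized.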
Benefit from massive economies of scale
Using cloud computing, organizations can achieve a lower variable cost than they can get on their own. Because usage from tens of thousands of customers is collected and combined in the cloud, cloud computing providers such as Amazon, Microsoft, and Google can achieve higher economies of scale, which translates into lower subscription prices.
Cloud computing providers such as Amazon, Microsoft, and Google also invest in low-end commodity devices optimized for large-scale clouds instead of purchasing high-end devices. The volume of subscription purchases, coupled with lower-cost commodity hardware, allows cloud computing providers to lower prices for new customers.
Stop guessing about capacity
As mentioned earlier, enterprise organizations pay only when utilizing cloud computing resources. Organizations access as much or as little as needed, and scale up and down, in and out, on demand.
Capacity planning is not only arduous but also tedious and error-prone, particularly if you do not know how customers will respond. Customer demand fluctuates dynamically, and the ability to scale becomes critical. Cloud computing engineers can request more capacity during real-time shifts and spikes in customer demand, and can reduce costs by using the commodity compute, storage, and networking resources pooled by the cloud computing provider, all of which can be provisioned at a moment's notice. If your LOB application needs more compute resources to meet increasing customer demand, hosting your workload in the cloud can help keep your customers satisfied. Does a decline in business mean that you no longer need all the capacity your cloud computing service is providing for your LOB applications? Cloud computing engineers can scale down compute capacity to control costs, a huge advantage over static on-premises data center solutions.
Increase speed and agility
On-premises data centers can generally take several weeks to months to provision a server. With cloud computing ecosystems, organizations can provision tens of thousands of resources in minutes, and the ability to rapidly scale your workloads both horizontally and vertically lets you address SLAs that are in constant flux. Developing new applications in the cloud can significantly decrease time to market (TTM), an improvement over traditional monolithic development, for several reasons. You do not have to deploy, configure, and maintain the underlying compute, storage, and networking hardware on which your applications will run. Instead, you use the infrastructure resources made accessible by your cloud computing provider.
Another reason why applications developed in the cloud are faster to deploy has to do with how modern applications are built. In an enterprise setting, developers create and test their applications in a test environment that simulates the final production environment. For example, an application might be developed and tested on a single-instance VM, also known as the dev environment, for eventual deployment onto two VM instances clustered across different Availability Zones (AZs) for HA and fault tolerance, which is common for production environments. Inconsistencies between your development and production environments can impact the development sprint cycle because problems might be missed in testing and only become apparent when the applications are deployed to production, which then necessitates further testing and development until the applications behave as intended.

With cloud computing, however, organizations can develop and test in the same kind of environment that their applications will be deployed to. This allows you to quickly create resources and experiment iteratively. For start-ups, cloud computing allows them to start at a very low cost and scale rapidly as they gain customers; they face no large upfront capital investment to create a new VM. This gives any enterprise the flexibility to rapidly set up development and test configurations. These can be programmed dynamically, giving you the ability to instantiate a development or test environment, run the tests, and tear it back down. This methodology keeps costs very low, and maintenance is almost nonexistent.
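The create, test, and tear-down cycle described above can be sketched as follows. `CloudClient`, its methods, and `run_tests` are hypothetical stand-ins invented for this sketch, not a real provider SDK; the pattern, not the API, is the point.

```python
# Sketch of the create -> test -> tear-down pattern for ephemeral
# environments. CloudClient is a hypothetical stand-in, not a real SDK.
class CloudClient:
    def __init__(self) -> None:
        self.active_vms: set[str] = set()

    def provision_vm(self, name: str) -> str:
        """Create a short-lived VM for development or testing."""
        self.active_vms.add(name)
        return name

    def deprovision_vm(self, vm_id: str) -> None:
        """Tear the VM down so billing stops as soon as testing ends."""
        self.active_vms.discard(vm_id)

def run_tests(vm_id: str) -> bool:
    return True  # placeholder for a real test suite

client = CloudClient()
vm = client.provision_vm("dev-test-env")
try:
    passed = run_tests(vm)       # the environment exists only while tests run
finally:
    client.deprovision_vm(vm)    # torn down immediately afterwards

print(f"tests passed: {passed}; active VMs remaining: {len(client.active_vms)}")
```

The `try`/`finally` structure guarantees the environment is deprovisioned even if the tests raise an exception, which is what keeps the cost of this methodology so low.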
Focus on what matters
Cloud computing lets organizations focus on their customers, rather than on expanding their data centers’ resources, which includes investing in infrastructure, racking, stacking, and powering servers.
Cloud computing providers have already done the heavy lifting for you. For most enterprises, their scarcest resource is their software engineers, now commonly referred to as developers. Development teams have various priorities and tasks that need to be completed successfully. It is an advantage to focus those resources on projects that move the organization's mission forward, rather than on planning, procuring, preparing, and implementing an underlying infrastructure.
This makes economic sense for organizations when it comes to hardware acquisition costs because the cloud computing provider provides the core hardware resources. Traditionally, enterprises have often purchased and deployed large, scaled SANs from third-party vendors to meet business requirements. By utilizing storage resources from a cloud computing provider instead, enterprise organizations can significantly decrease overall storage procurement and long-term maintenance costs.
Cloud computing providers manage the data center, which means you do not have to manage your own IT infrastructure. Cloud computing enables you to access computing services, regardless of your location and the equipment that you use to access those services.
Go global in minutes
Cloud computing providers such as Amazon, Microsoft, and Google are constantly expanding their global presence to help all customers of varying sizes achieve lower latency and greater throughput and to ensure that an enterprise’s most important asset—that is, data—resides only in the region they specify. As organizations and customers continue to grow their businesses, cloud computing providers such as Amazon, Microsoft, and Google will continue to provide the infrastructure that meets any organization’s global business requirements.
Only the largest global enterprises can deploy data centers around the world. So, using Amazon, Microsoft, and Google entitles enterprises of any size to the capability to host an application or workload from any region to reduce latency to end users while avoiding the capital expenses, long-term commitments, and scaling challenges associated with maintaining and operating a global infrastructure.
In a later section, I will describe each cloud provider's mammoth global infrastructure of regions and zones in detail. But before I do, here is a brief overview of HA and DR, both of which are addressed by utilizing the cloud computing global infrastructure.
An overview of HA
IT systems are considered critical business tools in most organizations. Outages of even a few hours reflect poorly upon the IT department and can result in lost sales or loss of business reputation. HA ensures that IT systems can survive the failure of a single server or even multiple servers.
Availability refers to the level of service that applications, services, or systems provide, and is expressed as the percentage of time that a service or system is available. Highly available architectures have minimal downtime, whether planned or unplanned, and are available more than 99% of the time, depending on the needs and the budget of the organization.
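Availability percentages translate directly into a maximum allowable downtime per period, which is a quick way to make a target like "more than 99%" concrete. A minimal sketch of that arithmetic (ignoring leap years):

```python
def annual_downtime_hours(availability_pct: float) -> float:
    """Maximum downtime per year permitted by a given availability percentage."""
    hours_per_year = 365 * 24  # 8,760 hours, ignoring leap years
    return hours_per_year * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% availability -> up to "
          f"{annual_downtime_hours(pct):.2f} hours of downtime per year")
```

For example, 99% availability still permits roughly 87.6 hours of downtime a year, which is why organizations with stricter needs (and budgets) pursue additional nines.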
Here are some common target availability considerations:
- Cloud data center infrastructure
- Server hardware
- Storage
- Network infrastructure
- Internet
- Application services
Note
This is not an exhaustive list.
Cloud computing providers support the capability of any organization to design a highly available architecture. Cloud computing data centers are organized into AZs. Each AZ comprises one or more data centers, with some AZs having three to six data centers.
Each AZ is designed as an independent failure zone. This means that AZs are physically separated within a region and are situated in distinct flood zones. In addition to having separate uninterruptable power supplies and onsite backup generators, they are each connected to different electrical grids from independent utilities to further reduce SPOFs. AZs are all redundantly connected to multiple transit providers.
Enterprise organizations are responsible for selecting the AZs where their systems reside. Some services can span multiple AZs. Every organization should design its systems to survive the temporary or prolonged failure of an AZ if a disaster occurs. Utilizing distributed computing methods, organizations can distribute applications across multiple AZs, allowing them to remain resilient in most failure scenarios, including natural disasters or typical system failures.
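The benefit of distributing across AZs can be quantified with basic probability: if AZ failures are independent, a workload replicated across several AZs is down only when every AZ is down at once. A minimal sketch, using an assumed per-AZ availability of 99% purely for illustration:

```python
def combined_availability(az_availability: float, num_azs: int) -> float:
    """Availability of a workload replicated across independent AZs:
    the workload is unavailable only if every AZ fails simultaneously."""
    return 1 - (1 - az_availability) ** num_azs

single = 0.99  # assumed per-AZ availability, for illustration only
print(f"1 AZ:  {combined_availability(single, 1):.4%}")
print(f"2 AZs: {combined_availability(single, 2):.4%}")
```

With these assumed numbers, two AZs raise availability from 99% to 99.99%, which is why providers recommend multi-AZ deployments; in practice, AZ failures are not perfectly independent, so treat this as an upper bound.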
An overview of DR
DR planning is an essential requirement for fulfilling SLAs. These agreements define when a service needs to be available, and how quickly it must be recovered if it fails. To ensure that organizations meet SLA requirements, site resiliency becomes a business requirement. Site resiliency is the ability of one or more systems and services to survive a site failure and to continue functioning using an alternate data center.
One of the advantages of cloud computing is the capability to implement a multi-region design with multiple AZs for DR and HA. The alternate data center can be a site in another region, or in a separate AZ, dedicated to DR. For example, the alternate data center could be another location in the same broad geographical region, such as the Johnson & Johnson primary data center, which is located in Virginia with its secondary data center in Ohio; the Ohio site is in active use but has sufficient capacity to take over the Virginia facility's services in the event of an unplanned failure.
In summary, an AZ is equivalent to a cluster of data centers. Amazon, Microsoft, and Google isolate their data centers into AZs so that they are unlikely to be affected by the same natural disaster at the same time. An AZ is a distinct group of data centers, whereas a region is made up of multiple AZs, supporting the ability to spread compute resources across multiple power providers.
Tip
Cloud computing providers recommend provisioning your resources across multiple AZs. If you implement multiple VM instances, you can spread them across more than one AZ for added redundancy. If a single AZ has a problem, the assets in your second AZ will be unaffected. These recommendations are published online and can be corroborated by reviewing each cloud computing provider's well-architected framework documentation. The well-architected frameworks define real-world best practices that support cloud adoption and ongoing governance and management for any workload.