- “Do I Know This Already?” Quiz
- Comparing Architecture Designs
- Disaster Recovery and Business Continuity
- Automating AWS Architecture
- Elastic Beanstalk
- Deployment Methodologies
- Exam Preparation Tasks
- Review All Key Topics
- Define Key Terms
- Q&A
Disaster Recovery and Business Continuity
How do you work around service failure at AWS? You must design for failure. Each customer must use the tools and services available at AWS to create an application environment with the goal of 100% availability. When failures occur at AWS, automated processes must already be designed and in place to ensure proper failover with minimal to zero data loss. You can live with the loss of compute resources, painful as that might be; data loss is unacceptable, and it does not have to happen.
It is important to understand the published AWS uptime figures. For example, Amazon Relational Database Service (RDS) is designed to be down for a mere 52 minutes per year, but this does not mean you can schedule this potential downtime. As another example, just because Route 53, AWS's DNS service, is designed for 100% uptime does not mean that Route 53 will not have issues. The published uptime figures are not guarantees; instead, they are what AWS strives for, and typically achieves.
When a cloud service fails, you are out of business during that timeframe. Failures are going to happen, and designing your AWS-hosted applications for maximum uptime is the goal. You also must consider all the additional external services that allow you to connect to AWS: your telco, your ISP, and all the other moving parts in between. With all of these services in the equation, it is difficult, and perhaps impossible, to avoid some downtime.
There are two generally agreed-upon metrics that define disaster recovery:
Recovery point objective (RPO): RPO is the amount of data that you can afford to lose, measured as a window of time. If your RPO is defined as 5 minutes, for example, your disaster recovery process needs to be able to restore the data records to within 5 minutes of when the disaster first occurred. An RDS deployment performs automatic snapshots on a schedule, allowing you to roll back up to 35 days. Transaction logs are also backed up, allowing you to roll back to within 5 minutes of current operations. So you might not lose more than 5 minutes of data, but it will generally take longer than 5 minutes to have your data restored when failure occurs. Keep in mind that with an RDS deployment across multiple availability zones, you may be back in operation, without losing any data, within minutes; how much time it takes depends on how long the DNS updates take to reach the end user and point the user to the new primary database server.
Recovery time objective (RTO): RTO is the actual time it takes to restore a business process or an application back to its defined service level. If RTO is set to 1 hour, for example, the application should be up and functional within that 1-hour timeframe.
A customer operating in the AWS cloud needs to define acceptable values for RPO and RTO based on their requirements and build those values into a service-level agreement.
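To make these numbers concrete, here is a minimal sketch, assuming the boto3 SDK with credentials already configured and a hypothetical RDS instance identifier, that checks how far back the transaction-log backups currently reach and performs a point-in-time restore to a new instance. The gap between the current time and LatestRestorableTime is effectively the RPO exposure of this mechanism, and the time the new instance takes to become available contributes to the RTO.

```python
import boto3
from datetime import datetime, timezone

rds = boto3.client("rds", region_name="us-east-1")

# Hypothetical identifiers for illustration only
SOURCE_DB = "orders-db-primary"
RESTORED_DB = "orders-db-restored"

# LatestRestorableTime reflects the most recent point covered by the
# backed-up transaction logs -- typically within the last 5 minutes.
source = rds.describe_db_instances(DBInstanceIdentifier=SOURCE_DB)
latest = source["DBInstances"][0]["LatestRestorableTime"]
print(f"Data can be recovered up to: {latest.isoformat()}")
print(f"Current RPO exposure: {datetime.now(timezone.utc) - latest}")

# Restore to a brand-new instance at the latest restorable point.
# (A point-in-time restore always creates a new DB instance.)
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier=SOURCE_DB,
    TargetDBInstanceIdentifier=RESTORED_DB,
    UseLatestRestorableTime=True,
)
```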
Backup and Restoration
On-premises DR has traditionally involved regular backups to tape, with the tapes stored off-site in a safe location. This approach works, but recovery takes time. AWS offers several services that can help you design a backup and restoration process that is much more effective than the traditional DR design. Most, if not all, third-party backup vendors have built-in native connectors that write directly to S3 as the storage target. Backups can be uploaded and written to S3 storage over a public Internet connection or over a faster private Direct Connect or VPN connection.
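As a simple illustration of writing backups directly to S3, the following sketch uploads a nightly archive; the bucket name and archive path are hypothetical, and in practice a third-party backup product's native S3 connector would usually handle this step for you.

```python
import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")

# Hypothetical bucket and local backup archive
BUCKET = "corp-dr-backups"
ARCHIVE = "/backups/nightly-db-dump.tar.gz"

# Prefix the object key with a date so restores can locate a specific night.
key = f"nightly/{datetime.now(timezone.utc):%Y-%m-%d}/db-dump.tar.gz"

# Multipart upload is handled automatically for large files;
# STANDARD_IA keeps storage costs down for rarely read backup data.
s3.upload_file(
    ARCHIVE,
    BUCKET,
    key,
    ExtraArgs={"StorageClass": "STANDARD_IA"},
)
print(f"Backup stored at s3://{BUCKET}/{key}")
```

Whether the upload travels over the public Internet, Direct Connect, or a VPN is purely a network routing decision; the API call is identical.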
Pilot Light Solution
When you design a pilot light disaster recovery configuration, your web, application, and primary database servers are on premises and fully operational. Copies of the web and application servers are built on EC2 instances in the AWS cloud and are ready to go but are not turned on. Your on-premises primary database server replicates updates and changes to the standby database server hosted in the AWS cloud, as shown in Figure 3-6. When planning which AWS region to use for your disaster recovery site, the compliance rules and regulations that your company follows dictate which regions can be used. In addition, you want the region and availability zones to be as close as possible to your physical corporate location.
FIGURE 3-6 Pilot Light Setup
When a disaster occurs, the web and application instances, along with any other required infrastructure such as a load balancer, are started at AWS. The standby database server at AWS is promoted to primary, and the on-premises DNS services are reconfigured to redirect traffic to the AWS cloud as the disaster recovery site, as shown in Figure 3-7. The RTO for a pilot light deployment is certainly faster than for the backup and restoration scenario, and there is no data loss, but there is no access to the hosted application at AWS until configuration is complete. The key to a successful pilot light solution is to automate all of your initial preconfiguration work with CloudFormation templates, ensuring that your infrastructure is built and ready to go as quickly as possible. We will discuss CloudFormation automation later in this chapter.
FIGURE 3-7 Pilot Light Response
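As a sketch of the automation just described, the following snippet, assuming a pre-written CloudFormation template stored at a hypothetical S3 URL, launches the pilot light infrastructure with a single API call and waits until the stack is complete before the database promotion and DNS changes proceed.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Hypothetical template stored in S3; it would define the web and
# application instances, the load balancer, and security groups.
TEMPLATE_URL = "https://s3.amazonaws.com/corp-dr-templates/pilot-light.yaml"

cfn.create_stack(
    StackName="dr-pilot-light",
    TemplateURL=TEMPLATE_URL,
    Parameters=[
        {"ParameterKey": "Environment", "ParameterValue": "disaster-recovery"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # required if the template creates IAM roles
)

# Block until the infrastructure is built and ready to receive traffic.
cfn.get_waiter("stack_create_complete").wait(StackName="dr-pilot-light")
print("Pilot light stack is up; promote the standby database and update DNS.")
```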
Warm Standby Solution
A warm standby solution speeds up recovery time because all the components in the warm standby stack are already in active operation (hence the term warm) but at a smaller scale. Your web, application, and database servers are all in operation, including the load balancer, as shown in Figure 3-8.
FIGURE 3-8 Warm Standby Setup
The key variable is that the warm standby application stack is not active for production traffic until disaster strikes. When a disaster occurs, you recover by increasing the capacity of your web and application servers, changing the size or number of EC2 instances, and reconfiguring DNS records to reroute traffic to the AWS site, as shown in Figure 3-9. Because all resources were already active, recovery time is shorter than with a pilot light solution; however, a warm standby solution is more expensive than a pilot light option because more resources are running 24/7.
FIGURE 3-9 Warm Standby Response
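One way the scale-up step might be scripted is shown below; the Auto Scaling group names and capacity values are hypothetical, and the exact numbers would match your production sizing.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Hypothetical Auto Scaling groups running the scaled-down warm standby tiers
GROUPS = ["warm-standby-web", "warm-standby-app"]

for group in GROUPS:
    # Raise the floor and the desired capacity so the standby stack
    # grows to full production size before traffic is redirected.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=group,
        MinSize=4,
        MaxSize=12,
        DesiredCapacity=6,
    )
    print(f"{group} scaled up to production capacity")
```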
An application that requires less downtime with minimal data loss could also be deployed by using a warm standby design across two AWS regions. The entire workload is deployed to both AWS regions, using a separate application stack for each region.
Because data replication occurs across multiple AWS regions, the data will eventually be consistent, but the time required to replicate to both locations could be substantial. By using a read-local/write-global strategy, you could define one region as the primary for all database writes. Data would be replicated for reads to the other AWS region. If the primary database region then fails, failover to the passive site occurs. Obviously, this design has plenty of moving parts to consider and manage. This design could also take advantage of multiple availability zones within a single AWS region instead of using two separate regions.
Hot Site Solution
If you need RPO and RTO to be very low, you might want to consider deploying a hot site solution with active-active components running both on premises and in the AWS cloud. The secret sauce of the hot site is Route 53, the AWS DNS service. The database is mirrored and synchronously replicated, and the web and application servers are load balanced and in operation, as shown in Figure 3-10. The application servers are synchronized to the live data located on premises.
FIGURE 3-10 Hot Site Setup
Both application tiers are already in full operation; if there’s an issue with one of the application stacks, traffic gets redirected automatically, as shown in Figure 3-11. With AWS, you can use Auto Scaling to scale resources to meet the capacity requirements if the on-premises resources fail. A hot site solution is architected for disaster recovery events.
FIGURE 3-11 Hot Site Response
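The following sketch shows one way the Route 53 side of such a design could be configured, using failover routing with a health check; the hosted zone ID, domain name, health check ID, and IP addresses are placeholders, and an alias record pointing to a load balancer would be equally valid.

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical values for illustration
ZONE_ID = "Z0123456789EXAMPLE"
DOMAIN = "app.example.com."
ONPREM_IP = "203.0.113.10"       # on-premises entry point
AWS_ENDPOINT_IP = "198.51.100.20"  # AWS-side entry point
HEALTH_CHECK_ID = "abcd1234-ef56-7890-abcd-1234567890ab"

route53.change_resource_record_sets(
    HostedZoneId=ZONE_ID,
    ChangeBatch={
        "Comment": "Hot site failover pair",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": DOMAIN,
                    "Type": "A",
                    "SetIdentifier": "on-premises-primary",
                    "Failover": "PRIMARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ONPREM_IP}],
                    "HealthCheckId": HEALTH_CHECK_ID,
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": DOMAIN,
                    "Type": "A",
                    "SetIdentifier": "aws-secondary",
                    "Failover": "SECONDARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": AWS_ENDPOINT_IP}],
                },
            },
        ],
    },
)
```

When the health check attached to the primary record fails, Route 53 begins answering DNS queries with the secondary record, which is what produces the automatic redirection described above.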
Multi-Region Active-Active Application Deployment
An active-active deployment of an application across multiple AWS regions adds a level of redundancy and availability, but it comes at an increased cost because each region hosts a complete application stack.
For the AWS Certified Solutions Architect - Associate (SAA-C02) exam, it is important to know that for an automated multi-region deployment, AWS offers Amazon Aurora, a relational database solution that maintains the consistency of database records across AWS regions. Aurora, which is PostgreSQL and MySQL compatible, can function as a single global database operating across multiple AWS regions. An Aurora global database has one primary region and up to five read-only secondary regions. Cross-region replication latency with Aurora is typically around 1 second. Aurora allows you to create up to 16 additional database instances in each AWS region; these instances all remain up to date because Aurora storage is a shared cluster storage volume that functions like a virtual SAN.
With Aurora, if your primary region faces a disaster, one of the secondary regions can be promoted to take over the read and write responsibilities, as shown in Figure 3-12. Aurora cluster recovery can be accomplished in less than 1 minute. Applications that use this type of database design have an effective RPO of 1 second and an RTO of less than 1 minute. Web and application servers in both AWS regions are placed behind Elastic Load Balancing (ELB) services at each tier and also use Auto Scaling to automatically scale each application stack, when required, based on changes in application demand. Keep in mind that Auto Scaling can function across multiple availability zones; it can mitigate an availability zone failure by adding the required compute resources in another availability zone.
FIGURE 3-12 Aurora DB Cluster with Multiple Writes
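One way the promotion of a secondary region might be scripted is sketched below; the global cluster name and the secondary cluster ARN are hypothetical. Detaching a secondary cluster from the global database promotes it to a standalone regional cluster that accepts writes as well as reads, after which application endpoints or DNS records are repointed.

```python
import boto3

# The secondary cluster lives in the surviving region.
rds = boto3.client("rds", region_name="us-west-2")

# Hypothetical identifiers for illustration
GLOBAL_CLUSTER = "orders-global"
SECONDARY_CLUSTER_ARN = (
    "arn:aws:rds:us-west-2:111122223333:cluster:orders-secondary"
)

# Detaching the secondary from the global database promotes it to a
# standalone regional cluster that can take over write traffic.
rds.remove_from_global_cluster(
    GlobalClusterIdentifier=GLOBAL_CLUSTER,
    DbClusterIdentifier=SECONDARY_CLUSTER_ARN,
)
print("Secondary cluster promoted; repoint application endpoints or DNS.")
```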
The AWS Certified Solutions Architect - Associate (SAA-C02) exam is likely to ask you to consider best practices based on various scenarios. There are many potential solutions to consider in the real world, and Amazon wants to ensure that you know about a variety of DR solutions.
The AWS Service-Level Agreement (SLA)
Many technical people over the years have described cloud service-level agreements (SLAs) as being inadequate—especially when they compare cloud SLAs with on-premises SLAs. Imagine that you were a cloud provider with multiple data centers and thousands of customers. How would you design an SLA? You would probably tell your customers something like “We do the best job we can, but computers do fail, as we know.” If you think about the difficulty of what to offer a customer when hardware and software failures occur, you will probably come up with the same solution that all cloud providers have arrived at. For example, AWS uses the following language in an SLA:
AWS will use commercially reasonable efforts to make the AWS service in question available with a monthly uptime percentage of at least this defined percentage for each AWS region. In the event that the listed AWS service does not meet the listed service commitment, you will be eligible to receive a service credit.
With AWS, many core services have separate SLAs. Certainly, the building blocks of any application—compute, storage, CDN, and DNS services—have defined service levels (see Table 3-4). However, an SLA does not really matter as much as how you design your application to get around failures when they occur.
Table 3-4 Service Levels at AWS

| AWS Service | General Service Commitment |
|---|---|
| EC2 instances | 99.99% |
| EBS volumes | 99.99% |
| RDS | 99.95% |
| S3 buckets | 99.9% |
| Route 53 | 100% |
| CloudFront | 99.9% |
| Aurora | 99.99% |
| DynamoDB | 99.99% |
| Elastic Load Balancing | 99.99% |
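To put these percentages in perspective, a quick calculation converts a monthly uptime commitment into the downtime it allows, assuming a 30-day month for simplicity:

```python
# Convert a monthly uptime commitment into the downtime it allows,
# assuming a 30-day month (43,200 minutes) for simplicity.
MINUTES_PER_MONTH = 30 * 24 * 60

commitments = {
    "EC2 / EBS / Aurora / DynamoDB / ELB": 99.99,
    "RDS": 99.95,
    "S3 / CloudFront": 99.9,
    "Route 53": 100.0,
}

for service, pct in commitments.items():
    allowed_downtime = MINUTES_PER_MONTH * (1 - pct / 100)
    print(f"{service}: {pct}% -> up to {allowed_downtime:.1f} minutes/month")
```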
As you think about AWS cloud service SLAs, keep in mind that each service is going to fail, and you're not going to have any warning of these failures. This is about the only guarantee you have when hosting applications in the public cloud: the underlying cloud services are going to fail unexpectedly. AWS services are typically stable for months at a time, but failures do happen without notice.
Most of the failures that occur in the cloud are compute failures. An instance that is powering an application server, a web server, a database server, or a caching server fails. What happens to your data? Your data in the cloud is replicated, at the very least, within the availability zone where your instances are running. (Ideally, your data records reside on multiple EBS volumes.) This does not mean you can't lose data in the cloud; if you never back up your data, you will probably lose it. And because customers are solely in charge of their own data, 100% data retention is certainly job one.
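As a final illustration of taking responsibility for your own data, the following sketch, assuming a hypothetical EBS volume ID, creates a tagged snapshot that could be scheduled to run regularly:

```python
import boto3
from datetime import datetime, timezone

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical EBS volume backing a database or application server
VOLUME_ID = "vol-0123456789abcdef0"

# Snapshots are stored independently of the instance and volume,
# giving you a restore point even if either one fails.
snapshot = ec2.create_snapshot(
    VolumeId=VOLUME_ID,
    Description=f"Scheduled backup {datetime.now(timezone.utc):%Y-%m-%d %H:%M}",
    TagSpecifications=[
        {
            "ResourceType": "snapshot",
            "Tags": [{"Key": "purpose", "Value": "dr-backup"}],
        }
    ],
)
print(f"Snapshot started: {snapshot['SnapshotId']}")
```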