AWS Interview Questions
Amazon Web Services (AWS) provides cloud computing solutions and APIs to firms and individuals around the globe. It is a service provided by Amazon that uses distributed IT infrastructure to deliver IT resources on demand. It offers services across three models: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS).
The following are the main components of AWS:
 
Simple Storage Service (S3): S3 is an AWS service that stores files. It is object-based storage, i.e., you can store images, Word files, PDF files, etc. The size of a single object stored in S3 can range from 0 bytes to 5 TB. It offers virtually unlimited storage, i.e., you can store as much data as you want. S3 organizes files into buckets; a bucket is like a folder that stores files. Bucket names live in a universal namespace, i.e., each name must be globally unique so that every bucket gets a unique DNS address.
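For illustration, here is a minimal boto3 (Python) sketch that creates a bucket and uploads an object; the bucket and file names are hypothetical placeholders, and the bucket name must be globally unique for the call to succeed.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Bucket names share a universal namespace, so this must be unique globally.
s3.create_bucket(Bucket="my-example-bucket-123456")

# Upload a local file as an object; a single S3 object can be 0 bytes to 5 TB.
s3.upload_file("report.pdf", "my-example-bucket-123456", "docs/report.pdf")
```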
 
Elastic Compute Cloud (EC2): Elastic Compute Cloud is a web service that provides resizable compute capacity in the cloud. You can scale compute capacity up or down as your computing requirements change. It changes the economics of computing by allowing you to pay only for the resources you actually use.
 
Elastic Block Store (EBS): It provides persistent block storage volumes for use with EC2 instances in the AWS cloud. Each EBS volume is automatically replicated within its Availability Zone to protect against component failure. It offers the high durability, availability, and low-latency performance required to run your workloads.
 
CloudWatch: It is a service used to monitor, in real time, all the AWS resources and applications that you run. It collects and tracks metrics that measure your resources and applications.
 
Identity and Access Management (IAM): It is an AWS service used to manage users and their level of access to the AWS Management Console. It is used to set up users, permissions, and roles, and it allows you to grant permissions to different parts of the AWS platform.
 
Simple Email Service (SES): Amazon Simple Email Service is a cloud-based email-sending service that helps digital marketers and application developers send marketing, notification, and transactional emails. It is a reliable and cost-effective option for businesses of all sizes that want to stay in touch with their customers.
 
Route 53: It is a highly available and scalable DNS (Domain Name System) service. It provides a reliable and cost-effective way for developers and businesses to route end users to internet applications by translating domain names into numeric IP addresses.
Amazon S3 (Simple Storage Service) is object storage with a simple web service interface to store and retrieve any amount of data from anywhere on the web.
Amazon Simple Notification Service (Amazon SNS) is a push notification service used to send individual messages to a large group of mobile or email subscriber systems, including Amazon SQS queues, AWS Lambda functions, and HTTPS endpoints. It supports both application-to-application (A2A) and application-to-person (A2P) communication.
AMI stands for Amazon Machine Image. It's a template that provides the information (an operating system, an application server, and applications) required to launch an instance, which is a copy of the AMI running as a virtual server in the cloud. You can launch instances from as many different AMIs as you need.
From a single AMI, you can launch multiple types of instances. An instance type defines the hardware of the host computer used for your instance; each instance type provides different compute and memory capabilities. Once you launch an instance, it looks like a traditional host, and you can interact with it as you would with any computer.
An AMI includes the following things:
 
* A template for the root volume for the instance
* Launch permissions that decide which AWS accounts can use the AMI to launch instances
* A block device mapping that determines the volumes to attach to the instance when it is launched
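To make the AMI-to-instance relationship concrete, here is a minimal boto3 sketch that launches an instance from an AMI; the AMI ID, instance type, and key pair name are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one instance from an AMI; the same ImageId can back many
# instances of different instance types.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t3.micro",
    KeyName="my-key-pair",            # hypothetical key pair name
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```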
The top product categories of AWS are:
 
* Compute
* Storage
* Database
* Networking and Content Delivery
* Analytics
* Machine Learning
* Security, Identity, and Compliance
It is a centralized data repository that stores all your structured and unstructured data at any scale. The core aspect of a data lake is that you can apply various analytical tools to the data, derive analytics, and uncover useful insights without first structuring the data. A data lake also stores data coming from various sources such as business applications, mobile applications, and IoT devices.
 
| Data Warehouse | Data Lake |
| --- | --- |
| Data is relational, coming from transactional systems and operational databases. | Data is both non-relational and relational, coming from various sources such as IoT devices, mobile apps, websites, and social media. |
| Provides the fastest query results at a high storage cost. | Provides faster query results at a low storage cost. |
| Used by business analysts. | Used by data scientists, data developers, and business analysts. |
| Helps in batch reporting, BI, and visualizations. | Helps perform various analytics such as machine learning, predictive analytics, data discovery, and profiling. |
Snowball is a data transport option available in AWS to move data into and out of AWS. Snowball helps transfer massive amounts of data at a low networking cost.
Key pairs are the secure login credentials used to connect to virtual machines. A key pair consists of a public key that AWS stores and a private key that you store.
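As a sketch, the following boto3 call creates a key pair and saves the private key locally; the key pair name and file name are hypothetical.

```python
import os
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# AWS keeps the public key; the private key material is returned only
# once, so it must be saved immediately.
key = ec2.create_key_pair(KeyName="my-key-pair")

with open("my-key-pair.pem", "w") as f:
    f.write(key["KeyMaterial"])
os.chmod("my-key-pair.pem", 0o400)  # SSH requires restrictive permissions
```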
There are mainly three kinds of cloud service types that AWS products offer. These are:

Computing: Auto Scaling, EC2, Lightsail, Elastic Beanstalk, and Lambda

Storage: S3, Elastic File System, Elastic Block Store, and Glacier

Networking: VPC, Route 53, and Amazon CloudFront
Auto Scaling is an AWS feature that automatically scales capacity to maintain steady, predictable performance. Using Auto Scaling, you can scale multiple resources across multiple services in minutes. If you are already using Amazon EC2 Auto Scaling, you can combine it with AWS Auto Scaling to scale additional resources for other AWS services.
 
Benefits of Auto Scaling
 
Set up scaling quickly: It sets target utilization levels for multiple resources in a single interface. You can see the average utilization of multiple resources in the same console, i.e., you do not have to move between different consoles.

Make smart scaling decisions: It creates scaling plans that automate how different resources respond to changes, optimizing for availability and cost. It automatically creates scaling policies and sets targets based on your preferences, and it monitors your application and automatically adds or removes capacity based on requirements.

Automatically maintain performance: Auto Scaling automatically optimizes application performance and availability even when workloads are unpredictable. It continuously monitors your application to maintain the desired performance level, and when demand rises, it automatically scales the resources.
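As a sketch of what such a policy looks like in practice, the following boto3 call attaches a target-tracking scaling policy to an existing Auto Scaling group; the group and policy names are hypothetical.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Keep average CPU across the group near 50%; Auto Scaling adds or
# removes instances automatically to hold this target.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",  # hypothetical group name
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```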
Regions: A region is a geographical area consisting of two or more Availability Zones. Each region is a collection of data centers that is completely isolated from other regions.
 
Availability Zones: An Availability Zone is a data center that can be located somewhere in a country or city. A data center can house multiple servers, switches, firewalls, and load balancers; the things through which you interact with the cloud reside inside the data center.
Geo-targeting in CloudFront supports the creation of customized content for a target audience based on the demands and needs of a specific geographical area. This helps businesses showcase personalized content to target audiences in different geographic locations without changing the URL.
There are four steps involved in CloudFormation. These are:
 
Step 1: Create a CloudFormation template in YAML or JSON format.

Step 2: Save the code in an S3 bucket, which serves as the repository for the code.

Step 3: Use AWS CloudFormation to call the bucket and create a stack from the template.

Step 4: CloudFormation reads the file, understands the services that are called, their order and relationships, and provisions the services accordingly.
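A minimal boto3 sketch of steps 3 and 4, using an inline template body instead of an S3 URL for brevity; the stack name and template contents are hypothetical.

```python
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

# A tiny template that provisions a single S3 bucket.
template_body = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
"""

# CloudFormation parses the template, works out resource ordering and
# relationships, and provisions everything as one stack.
cloudformation.create_stack(
    StackName="example-stack",
    TemplateBody=template_body,
)
```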
The main differences between 'horizontal' and 'vertical' scaling are:

| Horizontal Scaling | Vertical Scaling |
| --- | --- |
| Provides new resources along with new hardware devices to support the infrastructure | Increases power by upgrading the resources of the current machine |
| Used in distributed systems | Used in virtualization |
| Resilient to system failure | Single point of failure |
| Utilizes network calls | Uses interprocess communication |
| Connects multiple system entities, both hardware and software, so that they work as a single logical unit | Increases the capacity of existing hardware or software by adding additional resources |
| Difficult to implement | Easy to implement |

Geo-targeting is a concept where businesses can show personalized content to their audience based on their geographic location without changing the URL. It helps you create customized content for the audience of a specific geographical area, keeping their needs at the forefront.
You can upgrade or downgrade a system with near-zero downtime using the following migration steps:

* Open the EC2 console
* Choose the operating system AMI
* Launch an instance with the new instance type
* Install all the updates
* Install the applications
* Test the instance to see if it's working
* If it works, deploy the new instance and replace the older instance
* Once it's deployed, the system is upgraded or downgraded with near-zero downtime
The difference between EC2 and Amazon S3 is that:
 
| EC2 | S3 |
| --- | --- |
| It is a cloud web service used for hosting your application. | It is a data storage system where any amount of data can be stored. |
| It is like a huge computer machine that can run either Linux or Windows and can handle applications like PHP, Python, Apache, or any database. | It has a REST interface and uses secure HMAC-SHA1 authentication keys. |
This is also among the most popular questions asked in an AWS interview.
 
Following are the advantages of AWS's Disaster Recovery (DR) solution:

* AWS offers a cost-effective backup, storage, and DR solution, helping companies reduce their capital expenses
* Fast setup time and greater productivity gains
* AWS helps companies scale up even during seasonal fluctuations
* It seamlessly replicates on-premises data to the cloud
* It ensures fast retrieval of files
There are three types of load balancers in EC2:

Application Load Balancer: These balancers are designed to make routing decisions at the application layer.

Network Load Balancer: A network load balancer handles millions of requests per second and makes routing decisions at the transport layer.

Classic Load Balancer: The Classic Load Balancer is mainly used for applications built within the EC2-Classic network. It offers basic load balancing across varying Amazon EC2 instances.
DynamoDB is a NoSQL database. It is very flexible, performs reliably, and integrates natively with AWS. It offers fast and predictable performance with seamless scalability. With DynamoDB, you do not need to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.
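As a minimal sketch, assuming a table named Users with partition key user_id already exists (both names are hypothetical), writing and reading an item looks like this:

```python
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("Users")  # hypothetical existing table

# Write an item; DynamoDB is schemaless beyond the key attributes.
table.put_item(Item={"user_id": "42", "name": "Alice", "plan": "pro"})

# Read it back by primary key.
response = table.get_item(Key={"user_id": "42"})
print(response["Item"])
```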
AWS CloudFormation is an Amazon service dedicated to standardizing and replicating architectures, in order to facilitate their execution, optimize resources and costs in the delivery of applications, and meet the requirements of the organization. CloudFormation lets you create a library of instance templates or architectures that can be delivered at any time, in an organized manner, through code.
It is one of the most popular AWS interview questions. There are many advantages of AWS CloudFormation, including the following:
 
* Reduces infrastructure deployment time
* Increases confidence in deployments
* Replicates complex environments, for example development, pre-production, and production environments that are the same, or almost the same, differing simply in resource scale
* Reuses the definitions between different products
* Reduces environment repair time
Elastic Beanstalk is an orchestration service from AWS that coordinates various AWS services such as EC2, S3, Simple Notification Service, CloudWatch, Auto Scaling, and Elastic Load Balancing. It is the fastest and simplest way to deploy your application on AWS, using either the AWS Management Console, a Git repository, or an integrated development environment (IDE).
T2 instances are designed to provide a moderate baseline performance and the capability to burst to higher performance as required by the workload.
This AWS service automatically adds or removes EC2 instances as workload demands change. The service also detects unhealthy EC2 instances in the cloud infrastructure and replaces them with new instances. Scaling is achieved through dynamic scaling and predictive scaling, which can be used separately or together to manage workloads.
Amazon EC2 Auto Scaling continuously monitors the health of Amazon EC2 instances and other applications. When it identifies unhealthy instances, it automatically replaces them with new EC2 instances. The service also ensures the seamless running of applications and balances EC2 instances across Availability Zones in the cloud.
Amazon VPC stands for Amazon Virtual Private Cloud, a service that gives you control over your own virtual network. Using this service, you can design your VPC from resource placement and connectivity through to security, and you can add Amazon EC2 instances and Amazon Relational Database Service (RDS) instances according to your needs. You can also define how your VPC communicates with other VPCs, regions, and Availability Zones.
Amazon Simple Queue Service (SQS) is a fully managed message queuing service. Using this service, you can send, receive, and store any quantity of messages between applications, which helps reduce complexity and eliminate administrative overhead. In addition, it protects messages through encryption and delivers them to their destinations without losing any messages.
There are two types of queues:

Standard queues: This is the default queue type. It provides a nearly unlimited number of transactions per second and at-least-once message delivery.

FIFO queues: FIFO queues are designed to ensure that the order in which messages are sent and received is strictly preserved, i.e., messages are delivered in the exact order in which they were sent.
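A minimal boto3 sketch that creates a FIFO queue and sends and receives a message; the queue name is hypothetical (FIFO queue names must end in .fifo).

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# FIFO queues preserve strict ordering within a message group.
queue = sqs.create_queue(
    QueueName="orders.fifo",  # hypothetical name
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)
url = queue["QueueUrl"]

sqs.send_message(
    QueueUrl=url,
    MessageBody="order-1001 created",
    MessageGroupId="orders",  # ordering is guaranteed per group
)

messages = sqs.receive_message(QueueUrl=url, MaxNumberOfMessages=1)
print(messages.get("Messages", []))
```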
The storage classes available in Amazon S3 are the following:
 
* Amazon S3 Glacier Instant Retrieval storage class
* Amazon S3 Glacier Flexible Retrieval (Formerly S3 Glacier) storage class
* Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive)
* S3 Outposts storage class
* Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
* Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
* Amazon S3 Standard (S3 Standard)
* Amazon S3 Reduced Redundancy Storage
* Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering)
Amazon Redshift helps analyze data stored in data warehouses, databases, and data lakes using Machine Learning (ML) and AWS-designed hardware. It uses SQL to analyze structured and semi-structured data to yield the best performance from the analysis. This service automatically creates, trains, and deploys Machine Learning models to create predictive insights.
Both Spot Instances and On-Demand Instances are pricing models.

| Spot Instance | On-Demand Instance |
| --- | --- |
| With Spot Instances, customers can purchase compute capacity with no upfront commitment at all. | With On-Demand Instances, users can launch instances at any time based on demand. |
| Spot Instances are spare Amazon EC2 instances that you can bid for. | On-Demand Instances are suitable for the high-availability needs of applications. |
| When your bid exceeds the Spot price, the instance is automatically launched; the Spot price fluctuates based on the supply of and demand for instances. | On-Demand Instances are launched by users with the pay-as-you-go model. |
| When your bid falls below the Spot price, the instance is automatically terminated by Amazon. | On-Demand Instances remain persistent without any automatic termination from Amazon. |
| Spot Instances are charged on an hourly basis. | On-Demand Instances are charged on a per-second basis. |
* Cross-Region Replication is a feature that replicates data from one bucket to another bucket, which can be in a different region.

* It provides asynchronous copying of objects across buckets. Suppose X is a source bucket and Y is a destination bucket: when X replicates its objects to the Y bucket, the objects are not copied immediately but asynchronously.

Some points to remember for Cross-Region Replication:

Create two buckets: Create two buckets within the AWS Management Console, where one bucket is the source bucket and the other is the destination bucket.

Enable versioning: Cross-Region Replication can be implemented only when versioning is enabled on both buckets.

Amazon S3 encrypts data in transit across AWS regions using SSL: This provides security while data traverses between regions.

Already uploaded objects will not be replicated: If any data already exists in the bucket, that data will not be replicated when you enable cross-region replication.


Use cases of Cross-Region Replication:

Compliance requirements: By default, Amazon S3 stores data across multiple Availability Zones within a region to keep it available. Sometimes compliance requirements dictate that you also store the data in a specific region; Cross-Region Replication allows you to replicate the data to that region to satisfy the requirements.

Minimize latency: Suppose your customers are in two geographical regions. To minimize latency, you can maintain copies of data in the AWS regions that are geographically closer to your users.

Maintain object copies under different ownership: Regardless of who owns the source bucket, you can tell Amazon S3 to change the ownership of the replicas to the AWS account that owns the destination bucket. This is referred to as the owner override option.
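A minimal boto3 sketch that enables replication from a versioned source bucket to a versioned destination bucket; the bucket names and the IAM role ARN are hypothetical placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Both buckets must already exist with versioning enabled, and the role
# must grant S3 permission to replicate objects on your behalf.
s3.put_bucket_replication(
    Bucket="source-bucket",  # hypothetical source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/replication-role",
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # an empty filter matches all new objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"},
            }
        ],
    },
)
```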
Elastic Block Store is a service that provides persistent block storage volumes for use with EC2 instances in the AWS cloud. Each EBS volume is automatically replicated within its Availability Zone to protect against component failure. It offers the high durability, availability, and low-latency performance required to run your workloads.
An S3 bucket can be secured in two ways:
 
ACL (Access Control List): An ACL is used to manage access to buckets and objects. Each bucket and object is associated with an ACL, which defines which AWS accounts are granted access and the type of access. When a user sends a request for a resource, its corresponding ACL is checked to verify whether the user has been granted access to that resource.
When you create a bucket, Amazon S3 creates a default ACL that grants the bucket owner full control over the resource.

Bucket policies: Bucket policies are applied only at the bucket level and define what actions are allowed or denied. A bucket policy is attached to the bucket, not to an individual S3 object, but the permissions defined in the bucket policy apply to all the objects in the bucket.

The following are the main elements of a bucket policy:
 
Sid: A Sid (statement ID) describes what the statement does. For example, if the action to be performed is adding a new grant to an Access Control List (ACL), the Sid might be AddCannedAcl; if the statement evaluates IP addresses, the Sid might be IPAllow.

Effect: An effect defines the outcome when the policy is applied; the action is either allowed or denied.

Principal: A principal is a string that determines to whom the policy applies. If the principal string is set to '*', the policy applies to everyone, but you can also specify individual AWS accounts.

Action: An action is what happens when the policy is applied. For example, s3:GetObject is an action that allows reading object data.

Resource: The resource is the S3 bucket to which the statement applies. You cannot enter simply a bucket name; you must specify the bucket ARN in a specific format.
For example, if the bucket name is freetimelearn-bucket, the resource would be written as "arn:aws:s3:::freetimelearn-bucket/*".
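To tie the elements together, here is a minimal boto3 sketch that attaches a policy allowing public read of the bucket's objects; the bucket name follows the example above, the Sid is illustrative, and the call succeeds only if the bucket's public access settings permit it.

```python
import json
import boto3

s3 = boto3.client("s3")

# Sid, Effect, Principal, Action, and Resource map directly to the
# bucket policy elements described above.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::freetimelearn-bucket/*",
        }
    ],
}

s3.put_bucket_policy(Bucket="freetimelearn-bucket", Policy=json.dumps(policy))
```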
Stopping: You can stop an EC2 instance; stopping an instance means shutting it down. Its corresponding EBS volume remains attached, so you can restart the instance later.

Terminating: You can also terminate an EC2 instance; terminating means removing the instance from your AWS account. When you terminate an instance, its corresponding EBS root volume is also deleted by default, which is why you cannot restart a terminated instance.
* NAT stands for Network Address Translation.

* If you want an EC2 instance in a private subnet to access the internet, it must be able to communicate with the internet; however, we do not want to make the subnet public, because we want to maintain a degree of control. To solve this, we create either NAT Gateways or NAT Instances.

* In practice, NAT Gateways are used far more than NAT Instances: NAT Instances are individual EC2 instances, whereas NAT Gateways are highly available across multiple Availability Zones and do not run on a single EC2 instance.
You can control security in your VPC in two ways:

Security groups: A security group acts as a virtual firewall for associated EC2 instances, controlling both inbound and outbound traffic at the instance level.

Network access control lists (NACLs): A NACL acts as a firewall for associated subnets, controlling both inbound and outbound traffic at the subnet level.
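As a sketch of the instance-level control, the following boto3 calls create a security group and open inbound HTTP; the VPC ID and group name are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Security groups operate at the instance level and are stateful:
# return traffic for allowed inbound connections flows automatically.
group = ec2.create_security_group(
    GroupName="web-sg",               # hypothetical group name
    Description="Allow inbound HTTP",
    VpcId="vpc-0123456789abcdef0",    # hypothetical VPC ID
)

ec2.authorize_security_group_ingress(
    GroupId=group["GroupId"],
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }
    ],
)
```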
Following are the different database types in RDS:
 
Amazon Aurora
It is a database engine developed by Amazon RDS. An Aurora database can run only on AWS infrastructure, unlike a MySQL database, which can be installed on any local device. It is a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases.

PostgreSQL
* PostgreSQL is an open-source relational database popular with many developers and startups.
* It is easy to set up and operate, and you can scale PostgreSQL deployments in the cloud.
* You can scale PostgreSQL deployments in minutes, cost-efficiently.
* Amazon RDS manages time-consuming administrative tasks such as PostgreSQL software installation, storage management, and backups for disaster recovery.
 
MySQL
* It is an open source relational database.
* It is easy to set up and operate, and you can scale MySQL deployments in the cloud.
* Using Amazon RDS, you can deploy scalable MySQL servers in minutes, cost-efficiently.

MariaDB
* It is an open source relational database created by the developers of MySQL.
* It is easy to set up and operate, and you can scale MariaDB server deployments in the cloud.
* Using Amazon RDS, you can deploy scalable MariaDB servers in minutes, cost-efficiently.
* It frees you from managing administrative tasks such as backups, software patching, monitoring, scaling and replication.

Oracle
* It is a relational database developed by Oracle.
* It is easy to set up and operate, and you can scale Oracle database deployments in the cloud.
* You can deploy multiple editions of Oracle in minutes, cost-efficiently.
* It frees you from managing administrative tasks such as backups, software patching, monitoring, scaling, and replication.
* You can run Oracle under two different licensing models: "License Included" and "Bring Your Own License (BYOL)". In the License Included model, you do not have to purchase an Oracle license separately, as it is already licensed by AWS; pricing starts at $0.04 per hour. If you have already purchased an Oracle license, you can use the BYOL model to run Oracle databases in Amazon RDS, with pricing starting at $0.025 per hour.

SQL Server
* SQL Server is a relational database developed by Microsoft.
* It is easy to set up and operate, and you can scale SQL Server deployments in the cloud.
* You can deploy multiple editions of SQL Server in minutes, cost-efficiently.
* It frees you from managing administrative tasks such as backups, software patching, monitoring, scaling and replication.
RTO (Recovery Time Objective) refers to the maximum acceptable waiting time for the resumption of AWS services/operations during an outage/disaster. After an unexpected failure, firms have to wait for the recovery process, and the maximum waiting time an organization can accept is defined as the RTO. When an organization starts using AWS, it sets its RTO as a metric defining how long it can wait for the recovery of applications and business processes. Organizations calculate their RTO as part of their BIA (Business Impact Analysis).
 
Like RTO, RPO (Recovery Point Objective) is also a business metric calculated by a business as part of its BIA. RPO defines the amount of data a firm can afford to lose during an outage or disaster. It is measured in a particular time frame within the recovery period. RPO also defines the frequency of data backup in a firm/organization. For example, if a firm uses AWS services and its RPO is 3 hours, then it implies that all its data/disk volumes will be backed up every three hours.
Multi-AZ RDS helps maintain a replica of the production database in another Availability Zone. It comes in handy for disaster recovery and in case of a primary database shutdown, providing a complete copy of the database as a backup.
The processor state control has two states, namely:

The C state: Represents the sleep state. It varies from C0 to C6, where C6 is the deepest sleep state for a processor.

The P state: Represents the performance state. It varies from P0 to P15, where P15 is the lowest possible frequency.

A processor has multiple cores, and each of them requires thermal headroom for gaining a boost in performance. Hence, the temperature needs to be kept at an optimal level so that the cores can perform at their highest.
 
When a core is put into the sleep state, the overall temperature of the processor is reduced, giving the other cores an opportunity to deliver better performance. Hence, a strategy can be devised by putting some cores to sleep and others into a performance state to get an overall performance boost from the processor.
 
Instances like the c4.8xlarge allow customizing the C and P states for customizing the processor performance according to the workload.
While the c4.8xlarge instance is preferred for the master machine, the i2.large instance seems fit for the slave machine. Alternatively, you can launch an Amazon EMR instance that automatically configures the servers.
 
Hence, you need not manually configure the instances and install a Hadoop cluster when using Amazon EMR. Simply dump the data to be processed in S3; EMR picks it up from there, processes it, and dumps the result back into S3.
A Stateful Firewall is the one that maintains the state of the rules defined. It requires you to define only inbound rules. Based on the inbound rules defined, it automatically allows the outbound traffic to flow. 
 
On the other hand, a Stateless Firewall requires you to explicitly define rules for inbound as well as outbound traffic. 
 
For example, if you allow inbound traffic on port 80, a Stateful Firewall will automatically allow the corresponding outbound response traffic, whereas a Stateless Firewall requires a separate outbound rule.
An Administrator User is similar to the owner of the AWS resources. An administrator can create, delete, modify, or view the resources, and can also grant permissions to other users for the AWS resources.

Power User Access provides administrator access without the capability to manage users and permissions. In other words, a user with Power User Access can create, delete, modify, or view resources, but cannot grant permissions to other users.
An Instance Store Volume is temporary storage that is used to store the temporary data required by an instance to function. The data is available as long as the instance is running. As soon as the instance is turned off, the Instance Store Volume gets removed and the data gets deleted.
 
On the other hand, an EBS Volume represents a persistent storage disk. The data stored in an EBS Volume will be available even after the instance is turned off.
Recovery Time Objective: It is the maximum acceptable delay between the interruption of service and the restoration of service. This translates to an acceptable time window during which the service can be unavailable.

Recovery Point Objective: It is the maximum acceptable amount of time since the last data restore point. It translates to the acceptable amount of data loss between the last recovery point and the interruption of service.
Yes, it is possible by using the Multipart Upload Utility from AWS. With the Multipart Upload Utility, larger files can be split into multiple parts that are uploaded independently. You can also decrease upload time by uploading these parts in parallel. After the upload is done, the parts are merged into a single object to recreate the original file.
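A minimal boto3 sketch using the S3 transfer manager, which performs multipart uploads automatically above a configurable size threshold; the file and bucket names are hypothetical.

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Files larger than the threshold are split into parts that upload in
# parallel; S3 merges the parts into one object when the upload finishes.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # 100 MB
    max_concurrency=8,
)

s3.upload_file(
    "big-archive.tar.gz",        # hypothetical local file
    "my-example-bucket-123456",  # hypothetical bucket
    "backups/big-archive.tar.gz",
    Config=config,
)
```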
Following are the policies that can be set for users' passwords:

* You can set a minimum length for the password.
* You can require users to include at least one number or special character in the password.
* You can require particular character types, including uppercase letters, lowercase letters, numbers, and non-alphanumeric characters.
* You can enforce automatic password expiration, prevent the reuse of old passwords, and require a password reset upon the next AWS sign-in.
* You can require AWS users to contact an account administrator when they have allowed their password to expire.
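A minimal boto3 sketch that applies such a policy account-wide; the specific values are illustrative, and each argument maps to one of the options listed above.

```python
import boto3

iam = boto3.client("iam")

iam.update_account_password_policy(
    MinimumPasswordLength=12,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    RequireNumbers=True,
    RequireSymbols=True,
    MaxPasswordAge=90,          # days before passwords expire
    PasswordReusePrevention=5,  # remember the last 5 passwords
    HardExpiry=False,           # let users reset after expiry
)
```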
Most AWS services have their own logging options, and some also offer account-level logging, as with AWS CloudTrail, AWS Config, and others. Let's take a look at two services in particular:

AWS CloudTrail: This is a service that provides a history of the AWS API calls for every account. It lets you perform security analysis, resource change tracking, and compliance auditing of your AWS environment as well. The best part about this service is that you can configure it to send notifications via AWS SNS when new logs are delivered.

AWS Config: This helps you understand the configuration changes that happen in your environment. The service provides an AWS inventory that includes configuration history, configuration change notifications, and relationships between AWS resources. It can also be configured to send information via AWS SNS when new logs are delivered.
DDoS is a cyber attack in which the perpetrator floods a website or service with traffic from many sources, creating so many sessions that legitimate users cannot access the service. The native tools that can help you mitigate DDoS attacks on your AWS services are:
 
* AWS Shield
* AWS WAF
* Amazon Route53
* Amazon CloudFront
* ELB
* VPC
The three major types of virtualization in AWS are:

Hardware Virtual Machine (HVM): This is fully virtualized hardware, where all the virtual machines act independently of each other. These virtual machines boot by executing a master boot record in the root block device of your image.

Paravirtualization (PV): PV-GRUB is the bootloader that boots the PV AMIs. The PV-GRUB chain loads the kernel specified in the menu.

Paravirtualization on HVM (PV on HVM): PV on HVM helps operating systems take advantage of the storage and network I/O available through the host.
Solaris is an operating system that uses SPARC processor architecture, which is not supported by the public cloud currently.
 
AIX is an operating system that runs only on Power CPU and not on Intel, which means that you cannot create AIX instances in EC2.
 
Since both the operating systems have their limitations, they are not currently available with AWS.
AWS Identity and Access Management (IAM) allows an administrator to provide multiple users and groups with granular access. Various user groups and users may require varying levels of access to the various resources that have been developed. We may assign roles to users and create roles with defined access levels using IAM.
 
It further gives us Federated Access, which allows us to grant applications and users access to resources without having to create IAM Roles.
DynamoDB is an appropriate choice for collecting eCommerce data as it is an unstructured form of data. Real-time analysis of the collected eCommerce data can be carried out using Amazon Redshift.
Before updating the original instance, AWS Elastic Beanstalk readies a duplicate copy of the instance. Thereafter, it routes the traffic to the duplicate instance so as to avoid a scenario where the update application fails.
 
In case there is a failure in the update process, the AWS Elastic Beanstalk will switch back to the original instance using the very same duplicate copy it created before beginning the update process.
Even though the underlying infrastructure appears healthy, Beanstalk is able to detect if the application isn’t responding on the custom link. It then logs the situation as an environmental event, which can then be checked in detail and thus, acted upon.
 
AWS Elastic Beanstalk apps have a built-in system for avoiding underlying infrastructure failures. The Beanstalk uses the Auto Scaling feature to automatically launch a new instance in case an Amazon EC2 instance fails.
The automatic rollback-on-error feature is triggered when one of the resources in a stack cannot be created successfully in AWS OpsWorks. The feature results in the deletion of all the AWS resources that were created successfully, up to the point where the error occurred.
 
Doing so ensures that no error-causing data is left behind as well as abiding by the principle that the stacks are either created completely or not created at all.
 
The automatic rollback on error feature is useful especially in cases where one might unknowingly exceed the limit of the total number of Elastic IP addresses or does not have access to the EC2 AMI.
Lifecycle hooks add a wait time before the launch or termination of an instance, for example to install necessary software before launch or to extract log files before termination.
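A minimal boto3 sketch that adds a termination hook, pausing instances so log files can be extracted before shutdown; the hook and group names are hypothetical.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Pause terminating instances for up to 5 minutes so logs can be
# extracted, then let termination continue.
autoscaling.put_lifecycle_hook(
    LifecycleHookName="drain-logs",  # hypothetical hook name
    AutoScalingGroupName="my-asg",   # hypothetical group name
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    HeartbeatTimeout=300,
    DefaultResult="CONTINUE",
)
```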
API tools such as API Fortress, scripting languages like Perl, and hybrid cloud management tools like Scalr are a few of the automation gears helpful for spinning up services.
Any security group that regulates traffic among instances and various AWS resources is a Stateful firewall.
 
A Stateless firewall is an Access Control List on a network at the subnet level and can allow or deny traffic based on rules.
It is Amazon Kinesis Data Firehose, a service that loads streaming data into data stores and analytics tools without the need for ongoing administration.
Amazon Lightsail is a service that helps to build and manage websites and applications faster and with ease. It provides easy-to-use virtual private server instances, storage, and databases cost-effectively. Not just that, you can create and delete development sandboxes using this service, which will help to test new ideas without taking any risk.
It is known as Amazon Elastic Container Registry (ECR). It provides high-performance hosting so that you can store your application images securely in ECR. Amazon ECR compresses and encrypts images and controls access to them. Images can simply be stored in ECR and pulled from it by containers, without the support of any management tools.
Amazon EFS is a simple, serverless Elastic File System. It allows adding or removing files on the file system without provisioning or management. You can create file systems using the EC2 launch instance wizard, the EFS console, the CLI, or the API. You can also reduce costs significantly, since infrequently accessed files are automatically moved to a lower-cost storage class over time.
It is a purpose-built graph database that helps execute queries with easy navigation over connected datasets. You can use graph query languages to run queries that perform efficiently on connected data. Amazon Neptune's graph database engine can store billions of relationships and query the graph with millisecond latency. This service is mainly used in fraud detection, knowledge graphs, and network security.
This AWS service helps protect VPCs (Virtual Private Clouds) against attacks. Scaling is carried out automatically according to the traffic flowing through the network. You can define your own firewall rules using Network Firewall's flexible rules engine, giving you fine-grained control over network traffic. Network Firewall can also work alongside AWS Firewall Manager to build and apply security policies across all VPCs and accounts.
* AWS Snowcone
* AWS Snowball
* AWS Snowmobile
Throughput Optimized HDD volumes are magnetic storage whose performance is defined in terms of throughput. They are suitable for frequently accessed, large, sequential workloads.

Cold HDD volumes are also magnetic storage whose performance is measured in terms of throughput. They are inexpensive and best suited for infrequently accessed, large, sequential cold workloads.
AWS Copilot CLI stands for 'Copilot command-line interface'; it helps users deploy and manage containerized applications. Each step in the deployment lifecycle is automated, including pushing to a registry, creating a task definition, and creating a cluster. It therefore saves the time spent planning the infrastructure needed to run applications.
Generally, AWS Elastic Disaster Recovery is built on CloudEndure Disaster Recovery; therefore, both services have similar capabilities. They help you to:
 
* Ease the setup, operation, and recovery processes for many applications
* Perform non-disruptive disaster recovery testing and drills
* Achieve RPOs of seconds and RTOs of minutes
* Recover from a previous point-in-time
Amazon CloudSearch features:

* Autocomplete suggestions
* Boolean searches
* Entire text search
* Faceting and term boosting
* Highlighting
* Prefix searches
* Range searches
To update the AMI tools at boot time on Linux, you can run the following:

```bash
# Update to Amazon EC2 AMI tools
echo " + Updating EC2 AMI tools"
yum update -y aws-amitools-ec2
echo " + Updated EC2 AMI tools"
```
Currently, you can create 200 subnets per VPC.
VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to either Amazon CloudWatch Logs or Amazon S3. You can monitor your VPC flow logs to gain operational visibility into your network dependencies and traffic patterns, detect anomalies and prevent data leakage, or troubleshoot network connectivity and configuration issues. The enriched metadata in flow logs helps you gain additional insight into who initiated your TCP connections and the actual packet-level source and destination of traffic flowing through intermediate layers such as a NAT gateway. You can also archive your flow logs to meet compliance requirements.
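A minimal boto3 sketch that enables flow logs for a VPC and publishes them to an S3 bucket; the VPC ID and bucket ARN are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Capture both accepted and rejected traffic for the whole VPC and
# deliver the log records to an S3 bucket.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],  # hypothetical VPC ID
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::my-flow-logs-bucket",  # hypothetical bucket
)
```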