Cloud Computing Interview Questions
Amazon AWS provides AWS Shield for protection against attacks. AWS Shield offers two tiers of security: Standard and Advanced.
AWS Shield Standard, which comes by default with AWS, can be used as a first-measure security gate. It protects the network and transport layers.
One can also subscribe to Shield Advanced for an added layer of security. AWS Shield Advanced integrates with AWS Web Application Firewall (WAF), which supports custom rules to filter out traffic with threat signatures.
The Web Application Firewall provides three main actions per rule: allow all requests that match the rule, block all requests that match, and count all requests that match (useful for testing a new rule before enforcing it).
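As an illustration, a WAFv2 rule declares exactly one of these actions. The sketch below shows the general shape of a rule that blocks SQL-injection attempts in the query string; the rule name and metric name are hypothetical, and `Allow` or `Count` could be substituted for `Block`:

```json
{
  "Name": "block-sqli-demo",
  "Priority": 1,
  "Statement": {
    "SqliMatchStatement": {
      "FieldToMatch": { "QueryString": {} },
      "TextTransformations": [
        { "Priority": 0, "Type": "URL_DECODE" }
      ]
    }
  },
  "Action": { "Block": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "block-sqli-demo"
  }
}
```

Switching `"Action"` to `{ "Count": {} }` lets the rule record matches without affecting traffic, which is the usual way to trial a new policy.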
Aurora is a database engine that offers reliability and speed on par with industry-standard databases. It backs up data to Amazon S3 in real time without any performance impact, and it backs up storage routinely without requiring database administrators to intervene.
Amazon RDS (Relational Database Service) is a managed relational database that provides scalable, cost-effective solutions for storing data. It supports six database engines: MySQL, PostgreSQL, Amazon Aurora, MariaDB, Microsoft SQL Server, and Oracle.
Google Compute Engine is an IaaS product that provides flexible, easy-to-run virtual machines with Windows or Linux as the guest OS. The VMs run on KVM and can be backed by local or durable storage.
Google Compute Engine integrates with other GCP technologies (e.g., BigQuery, Google Cloud Storage) to build more complex systems.
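A Compute Engine VM is typically created with the `gcloud` CLI. The following is a sketch, assuming the Google Cloud SDK is installed and a project is configured; the instance name, zone, machine type, and image are example values:

```shell
# Instance name, zone, machine type and image are example values
gcloud compute instances create demo-vm \
    --zone=us-central1-a \
    --machine-type=e2-medium \
    --image-family=debian-12 \
    --image-project=debian-cloud
```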
Google App Engine, by contrast, is a PaaS offering under Google Cloud Platform: it provides the platform on which customers run their services or applications. Because it offers both the infrastructure and the environment to the client, PaaS forms the larger umbrella with IaaS under it.
Google Cloud Storage can be accessed through:
* XML API and JSON API (used predominantly)
* Google Cloud Platform Console
* Google Cloud Storage client libraries
* gsutil command-line tool
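As a brief illustration of the last option, the `gsutil` tool covers common bucket and object operations from the shell. This sketch assumes gsutil is installed and authenticated; the bucket and file names are examples:

```shell
# Bucket and file names are example values
gsutil mb gs://example-interview-bucket            # make a bucket
gsutil cp report.txt gs://example-interview-bucket/  # upload an object
gsutil ls gs://example-interview-bucket            # list objects
```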
The term hybrid cloud refers to the integration of public and private clouds.
Hybrid IT is what results when organizations' hybrid cloud efforts end up as advanced virtualization and automation environments rather than true clouds. There have not been many success stories of organizations actually building and maintaining real hybrid clouds.
They’ve done some things with OpenStack, but, for the most part, private cloud-inspired environments powered by VMware dominate. Therefore, a substitute term — hybrid IT — actually better describes the bulk of hybrid scenarios. This does not, however, change the need for clarity in terminology.
The hybrid cloud must involve some combination of cloud styles (private, public, community), but physical location is not a definitive aspect of the style. The bottom line is that most users of the hybrid cloud term have really meant hybrid IT thus far.
The distributed cloud may be defined as the distribution of public cloud services to different physical locations, while operation, governance, updates, and the evolution of the services remain the responsibility of the originating public cloud provider.
Distributed cloud computing is a style of cloud computing where the location of the cloud services is a critical component of the model. Historically, the location has not been relevant to cloud computing definitions, although issues related to it are essential in many situations. While many people claim that a private cloud or hybrid cloud requires on-premises computing, this is a misconception.
A private cloud can be implemented in a hosted data center or, more often, in virtual private cloud instances that are not on-premises. Likewise, the hybrid cloud does not require that the individual components of the hybrid be in any specific location. However, with the advent of the distributed cloud, location formally enters the definition of a style of cloud services.
Distributed cloud supports the tethered and untethered operation of like-for-like cloud services from the public cloud, "distributed" out to specific and varied physical locations. This enables an essential characteristic of distributed cloud operation: low-latency compute, where compute operations for the cloud services sit closer to those who need the capabilities. This can result in major performance improvements and reduce the risk of global network-related outages.
Cloud Controller
* Automatically creates virtual machines and controllers
* Deploys applications
* Connects to services
* Automatically scales up and down
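The scale-up/scale-down behaviour above can be sketched as a small target-tracking policy. This is a minimal illustration (the function name, target utilisation, and bounds are hypothetical), not any provider's actual algorithm:

```python
def desired_instances(cpu_percent: float, current: int,
                      target_cpu: float = 60.0,
                      min_instances: int = 1, max_instances: int = 10) -> int:
    """Target-tracking sketch: size the fleet so average CPU
    utilisation moves toward `target_cpu`, clamped to fleet bounds."""
    if current < 1:
        return min_instances
    # If the fleet averages cpu_percent now, this many instances
    # would bring the average down (or up) to the target.
    proposed = round(current * cpu_percent / target_cpu)
    return max(min_instances, min(max_instances, proposed))

print(desired_instances(90.0, 4))   # overloaded fleet -> scale up to 6
print(desired_instances(30.0, 4))   # underused fleet -> scale down to 2
```

A real controller would smooth the metric over a window and add cooldown periods to avoid thrashing, but the core decision is this proportional calculation.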

Storage Services
* Object storage
* Relational storage
* Block storage

Applications Stored in Storage Services
* Simple-to-scale applications
* Easier recovery from failure
Microservices help create applications composed of small services whose code is independent of one another and of the platform it was developed on. Microservices are important in the cloud for the following reasons:
* Each of them is built for a particular purpose. This makes app development simpler.
* They make changes easier and quicker. 
* Their scalability makes it easier to adapt the service as needed.
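To make the "built for a particular purpose" point concrete, here is a minimal sketch of a single-purpose service using only Python's standard library; the service name, route, and payload are hypothetical:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class OrderService(BaseHTTPRequestHandler):
    """A microservice with exactly one job: report order status."""

    def do_GET(self):
        body = json.dumps({"service": "orders", "status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

# Run the service on an ephemeral localhost port and call it once.
server = HTTPServer(("127.0.0.1", 0), OrderService)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

resp = json.loads(urlopen(f"http://127.0.0.1:{port}/orders").read())
server.shutdown()
```

Because the service exposes only an HTTP interface, it can be rewritten, redeployed, or scaled out behind a load balancer without touching any other service in the application.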
Edge computing is part of a distributed computing architecture. It brings computation and data storage closer to the sources of data, which benefits businesses with better insights, faster response times, and improved bandwidth.
Cloud computing uses different types of data centers, as follows:
Containerized data centers: packages containing a consistent set of servers, network components, and storage, delivered to large warehouse-style facilities. Each deployment is relatively unique.

Low-density data centers: containerized data centers promote high density, which in turn generates substantial heat and significant engineering problems. Low-density data centers are the solution to this problem: the equipment is spaced far apart so that the generated heat can dissipate.