Deep Learning Interview Questions
A capsule is a vector that specifies the features of an object and its likelihood. These features can be any of the instantiation parameters, such as position, size, orientation, deformation, velocity, hue, and texture.
A capsule can also specify its attributes like angle and size so that it can represent the same generic information. Now, just like a neural network has layers of neurons, a capsule network can have layers of capsules.
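One concrete way this is realised in the original CapsNet design is the "squash" nonlinearity: it shrinks a capsule's output vector so its length lies in [0, 1) and can serve as the likelihood, while the direction keeps encoding the instantiation parameters. A minimal NumPy sketch (the 4-component capsule below is a hypothetical example):

```python
import numpy as np

def squash(v, eps=1e-8):
    """Squash nonlinearity: scales vector v so its length is in [0, 1)
    (usable as a probability) while preserving its direction (the
    instantiation parameters)."""
    norm_sq = np.sum(v ** 2)
    norm = np.sqrt(norm_sq) + eps
    return (norm_sq / (1.0 + norm_sq)) * (v / norm)

# A capsule whose components might encode e.g. position, size,
# orientation, and hue (illustrative values).
capsule = np.array([0.5, -1.0, 2.0, 0.1])
out = squash(capsule)
likelihood = np.linalg.norm(out)  # always strictly less than 1
```

The length of `out` acts as the probability that the entity exists, while `out` itself still points in the same direction as the raw capsule vector.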
The layer between the encoder and decoder, i.e. the code, is also known as the bottleneck. This is a well-designed approach to deciding which aspects of the observed data are relevant information and which aspects can be discarded.


It does this by balancing two criteria:
* Compactness of representation, measured as compressibility.
* Retention of behaviourally relevant variables from the input.
An autoencoder is a simple 3-layer neural network where the output units are directly connected back to the input units. Typically, the number of hidden units is much smaller than the number of visible ones. The task of training is to minimize the reconstruction error, i.e. to find the most efficient compact representation of the input data.
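The 3-layer structure described above can be sketched in a few lines of NumPy. The layer sizes and random inputs below are hypothetical; training (e.g. by gradient descent on the error) is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8 visible units compressed into 3 hidden units.
n_visible, n_hidden = 8, 3
W1 = rng.normal(0, 0.1, (n_visible, n_hidden))  # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, n_visible))  # decoder weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reconstruct(x):
    code = sigmoid(x @ W1)     # compact bottleneck representation
    return sigmoid(code @ W2)  # reconstruction of the input

x = rng.random(n_visible)
x_hat = reconstruct(x)
error = np.mean((x - x_hat) ** 2)  # reconstruction error to minimize
```

Training would adjust `W1` and `W2` to drive `error` down, forcing the 3-unit code to capture the most informative aspects of the 8-dimensional input.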

Restricted Boltzmann Machine

An RBM shares a similar idea, but it uses stochastic units with a particular distribution instead of deterministic units. The task of training is to find out how these two sets of variables are actually connected to each other.
One aspect that distinguishes an RBM from other autoencoders is that it has two biases. The hidden bias helps the RBM produce the activations on the forward pass, while the visible layer's bias helps the RBM learn the reconstructions on the backward pass.
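The two biases and the forward/backward passes can be sketched as follows. This is a minimal illustration, not a full RBM trainer (contrastive divergence is omitted), and the layer sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

n_visible, n_hidden = 6, 4
W = rng.normal(0, 0.1, (n_visible, n_hidden))  # shared weight matrix
b_hidden = np.zeros(n_hidden)     # hidden bias: used on the forward pass
b_visible = np.zeros(n_visible)   # visible bias: used on the backward pass

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(v):
    """Hidden activation probabilities, then a stochastic binary sample."""
    p_h = sigmoid(v @ W + b_hidden)
    return (rng.random(n_hidden) < p_h).astype(float)  # stochastic units

def backward(h):
    """Reconstruction probabilities of the visible units."""
    return sigmoid(h @ W.T + b_visible)

v = (rng.random(n_visible) < 0.5).astype(float)  # a binary visible vector
h = forward(v)          # forward pass with the hidden bias
v_recon = backward(h)   # backward pass with the visible bias
```

Note that the hidden units are sampled (stochastic), unlike the deterministic hidden layer of a plain autoencoder, and that the same weight matrix `W` is reused (transposed) for the reconstruction.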
Generative adversarial networks (GANs) are used to achieve generative modeling in deep learning. Generative modeling is an unsupervised task that involves discovering patterns in the input data in order to generate new output.
The generator is used to generate new examples, while the discriminator is used to classify the examples generated by the generator.
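The opposing roles of the two networks can be illustrated with a toy NumPy sketch. Here both "networks" are single-parameter functions purely for illustration (real GANs use deep networks for both players, and the parameter values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def generator(z, w):
    """Toy generator: maps random noise z to a 'fake' sample."""
    return z * w

def discriminator(x, a, b):
    """Toy discriminator: probability that x came from the real data."""
    return 1.0 / (1.0 + np.exp(-(a * x + b)))

# Hypothetical parameters for the two players.
w, a, b = 0.5, 1.0, 0.0
z = rng.normal()              # noise input to the generator
fake = generator(z, w)        # generator creates a new example
real = rng.normal(loc=3.0)    # a sample from the true data distribution

# Discriminator objective: classify real as 1 and fake as 0
# (binary cross-entropy).
d_loss = -np.log(discriminator(real, a, b)) - np.log(1 - discriminator(fake, a, b))
# Generator objective: fool the discriminator into calling the fake real.
g_loss = -np.log(discriminator(fake, a, b))
```

Training alternates between lowering `d_loss` (updating the discriminator) and lowering `g_loss` (updating the generator), so each improvement by one player pressures the other to improve.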
Generative adversarial networks are used for a variety of purposes. When working with images in particular, they have gained considerable traction and work efficiently.
Creation of art : GANs are used to create artistic images, sketches, and paintings.

Image enhancement : They are used to greatly enhance the resolution of input images.

Image translation : They are also used to easily change certain aspects of images, such as day to night and summer to winter.
Traditional machine learning refers to a set of algorithms and approaches that have been widely used for many years in a variety of applications, such as linear regression, logistic regression, decision trees, and random forests, among others. These algorithms make use of hand-engineered features and rely on feature engineering to extract relevant information from the data.

Deep learning, on the other hand, is a subfield of machine learning that uses artificial neural networks with multiple layers (hence "deep") to learn complex representations of the input data. In deep learning, the features are learned automatically by the network, rather than being hand-engineered by the programmer. This allows deep learning models to automatically extract high-level features from raw data, such as images, audio, and text, and to make predictions based on those features.
The main difference between deep learning and traditional machine learning is the level of abstraction in the learned representations. Deep learning models learn hierarchical representations of the data, with each layer learning increasingly higher-level features. In contrast, traditional machine learning models typically only learn a single level of features.

Another key difference is the amount of data required to train a model. Deep learning models require large amounts of data to train effectively, while traditional machine learning models can often be trained with smaller amounts of data.

Simple Answer : Deep learning is a subfield of machine learning that uses deep artificial neural networks to learn high-level representations of the data, while traditional machine learning algorithms make use of hand-engineered features and require relatively small amounts of data to train.