Deep Learning - Quiz (MCQ)

1. In which year was the first computer model based on the neural networks of the human brain created?

A) 1943
B) 1962
C) 1978
D) 1989

Correct Answer : 1943

Explanation : The history of deep learning dates back to **1943** when **Warren McCulloch** and **Walter Pitts** created a computer model based on the **neural networks** of the human brain.

2. Who developed and explored all the basic ingredients of today's deep learning systems and is therefore regarded as a father of deep learning?

A) Ilya Sutskever
B) David Rumelhart
C) Alex Krizhevsky
D) Frank Rosenblatt

Correct Answer : Frank Rosenblatt

Explanation : **Frank Rosenblatt** developed and explored all the basic ingredients of the **deep learning** systems of today, and he should be recognized as a father of deep learning, perhaps together with **Hinton**, **LeCun** and **Bengio**, who received the Turing Award as the fathers of the deep learning revolution.

3. Deep learning algorithms are constructed with how many connected layers?

A) 2
B) 3
C) 4
D) 5

Correct Answer : 3

Explanation : Deep learning algorithms are constructed with 3 connected layers: the inner (input) layer, the outer (output) layer, and the hidden layer.

4. Which of the following mimics the network of neurons in a brain and is a subset of machine learning?

A) SciPy
B) Numpy
C) Deep learning
D) All of the above

Correct Answer : Deep learning

Explanation : **Deep learning** is software that mimics the network of neurons in a brain; it is a subset of machine learning.

5.

A) hidden layer
B) outer layer
C) inner layer
D) None of the above

Correct Answer : inner layer

Explanation : The first layer is called the Input Layer. The last layer is called the Output Layer. All layers in between are called Hidden Layers.

6.

A) a more precise but slower update.
B) a less precise but faster update.
C) a less precise and slower update.
D) a more precise and faster update.

Correct Answer : a more precise but slower update.

7.

A) Dropout
B) Pooling
C) Early stopping
D) Data augmentation

Correct Answer : Pooling

8. Which of the following is an application of deep learning?

A) Protein structure prediction
B) Detection of exotic particles
C) Prediction of chemical reactions
D) All of the above

Correct Answer : All of the above

Explanation : A neural network can approximate any function, so it can theoretically be applied to any of these problems.

9. Which of the following is a benefit of using 1×1 convolutions in a CNN?

A) It can be used for feature pooling
B) It can help in dimensionality reduction
C) It suffers less overfitting due to small kernel size
D) All of the above

Correct Answer : All of the above

Explanation : 1×1 convolutions form the bottleneck structure in CNNs: they act as a per-pixel linear map across channels, enabling feature pooling and channel-wise dimensionality reduction with very few parameters.
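The role of a 1×1 convolution can be sketched in pure Python: it is a per-pixel linear map across channels, so it can change the channel count (dimensionality reduction) while leaving the spatial size unchanged. The feature map and weights below are invented for illustration and are not part of the quiz:

```python
# A 1x1 convolution is a per-pixel linear map across channels.
# Illustrative sketch with made-up values: reduce C_in=4 channels to C_out=2.

def conv_1x1(feature_map, weights):
    """feature_map: H x W x C_in nested lists; weights: C_out x C_in."""
    h, w = len(feature_map), len(feature_map[0])
    c_out = len(weights)
    return [[[sum(weights[o][c] * feature_map[i][j][c]
                  for c in range(len(weights[o])))
              for o in range(c_out)]
             for j in range(w)]
            for i in range(h)]

# A 2x2 spatial grid with 4 channels per pixel
fmap = [[[1, 2, 3, 4], [5, 6, 7, 8]],
        [[9, 10, 11, 12], [13, 14, 15, 16]]]
w = [[1, 0, 0, 0],   # first output channel copies input channel 0
     [0, 0, 0, 1]]   # second output channel copies input channel 3
out = conv_1x1(fmap, w)
# Spatial size is unchanged (2x2); channel count drops from 4 to 2.
```

Note that the spatial dimensions are untouched; only the channel dimension shrinks, which is exactly the dimensionality reduction named in option B.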

10. RNN stands for ___?

A) Recurrent Neural Networks
B) Report Neural Networks
C) Receives Neural Networks
D) Recording Neural Networks

Correct Answer : Recurrent Neural Networks

Explanation : **Recurrent neural networks (RNNs) :** **RNN** is a **multi-layered neural network** that can store information in context nodes, allowing it to learn data sequences and output a number or another sequence.

11. Which type of neural network is best suited for perceptual tasks such as image recognition?

A) Reinforcement Learning
B) Recurrent Neural Networks
C) Convolutional Neural Networks
D) Feed-forward Neural Networks

Correct Answer : Convolutional Neural Networks

Explanation : A Convolutional Neural Network (CNN) is a multi-layered neural network with a unique architecture designed to extract increasingly complex features of the data at each layer in order to determine the output. CNNs are well suited for perceptual tasks.

12. A CNN is mostly used when working with which kind of data set?

A) structured data
B) unstructured data
C) Both A and B
D) None of the above

Correct Answer : unstructured data

Explanation : CNN is mostly used when there is an unstructured data set (e.g., images) and the practitioners need to extract information from it.

13. Which type of neural network has only one hidden layer between the input and output?

A) Deep neural network
B) Shallow neural network
C) Recurrent neural networks
D) Feed-forward neural networks

Correct Answer : Shallow neural network

Explanation : **Shallow neural network :** The Shallow neural network has only one hidden layer between the input and output.

14.

A) 50
B) Less than 50
C) More than 50
D) It is an arbitrary value

Correct Answer : 50

Explanation : Since an **MLP** is a fully connected directed graph, the number of connections is the product of the number of nodes in the input layer and the number of nodes in the hidden layer.
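The counting rule in the explanation can be illustrated with a short sketch. The layer sizes here (10 input nodes, 5 hidden nodes) are assumptions chosen to reproduce the quoted answer of 50; the quiz text itself does not state them:

```python
# Sketch: counting connections between two fully connected layers.
# Layer sizes (10 input, 5 hidden) are assumed for illustration only.

def num_connections(n_input, n_hidden):
    # Every input node connects to every hidden node,
    # so the count is the product of the two layer sizes.
    return n_input * n_hidden

print(num_connections(10, 5))  # -> 50
```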

15. An input image of size 28×28 is convolved with a 7×7 filter, using stride 1 and no padding. What is the size of the convolved matrix?

A) 20x20
B) 21x21
C) 22x22
D) 25x25

Correct Answer : 22x22

Explanation : The size of the convolved matrix is given by **C = ((I - F + 2P) / S) + 1**, where C is the size of the convolved matrix, I is the size of the input matrix, F the size of the filter matrix, S the stride and P the padding applied to the input matrix. Here P=0, I=28, F=7 and S=1, so C = ((28 - 7 + 0) / 1) + 1 = **22**.
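The output-size formula in the explanation can be checked with a short sketch (the function name is my own):

```python
# Sketch of the convolution output-size formula: C = ((I - F + 2P) / S) + 1

def conv_output_size(i, f, p=0, s=1):
    """Output side length for a square input (i), filter (f),
    padding (p) and stride (s)."""
    return (i - f + 2 * p) // s + 1

print(conv_output_size(28, 7, p=0, s=1))  # -> 22, i.e. a 22x22 matrix
```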

16. Which of the following is a feature of a Boltzmann machine?

A) stochastic update
B) asynchronous operation
C) fully connected network with both hidden and visible units
D) All of the Above

Correct Answer : All of the Above

17.

A) In an FCNN, there are connections between neurons of the same layer.
B) An FCNN with only linear activations is a linear network.
C) In an FCNN, the most common weight initialization scheme is zero initialization, because it leads to faster and more robust training.
D) None of the above

Correct Answer : None of the above

18.

A) [1 X 5] , [5 X 8]
B) [5 X 1] , [8 X 5]
C) [8 X 5] , [5 X 1]
D) [8 X 5] , [1 X 5]

Correct Answer : [5 x 1] , [8 X 5]

Explanation : The size of the weight matrix between any layer 1 and layer 2 is given by [nodes in layer 1 X nodes in layer 2].

19. Which of the following is a common use of RNNs?

A) Provide a caption for images
B) Detect fraudulent credit-card transactions
C) Help securities traders to generate analytic reports
D) All of the Above

Correct Answer : All of the Above

Explanation : All of the above are common uses of RNNs.

20. Which of the following is a limitation of deep learning?

A) Data labeling
B) Obtaining huge training datasets
C) Both A and B
D) None of the above

Correct Answer : Both A and B

Explanation : Both data labeling and obtaining huge training datasets are limitations of deep learning.

21. Deep learning algorithms are ___ more accurate than machine learning algorithms in image classification.

A) 41%
B) 37%
C) 31%
D) 28%

Correct Answer : 41%

Explanation : **Deep learning** can outperform traditional methods. For instance, **deep learning algorithms** are **41%** more accurate than **machine learning algorithms** in image classification, **27%** more accurate in facial recognition and **25%** in voice recognition.

22.

A) Weight between input and hidden layer
B) Weight between hidden and output layer
C) Activation function of output layer
D) Biases of all hidden layer neurons

Correct Answer : Weight between input and hidden layer

Explanation : Weights between input and hidden layer are constant.

23. Which activation function outputs probabilities over all k classes that sum to 1?

A) Tanh
B) Softmax
C) ReLu
D) Sigmoid

Correct Answer : Softmax

Explanation : The **softmax** function has the form softmax(z)_j = exp(z_j) / Σ_k exp(z_k), in which the probabilities over all **k classes sum to 1**.
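A minimal sketch of the softmax described in the explanation (the max-subtraction step is a standard numerical-stability trick, not part of the quiz):

```python
import math

# Softmax: exponentiate each score, then normalize so the outputs sum to 1.

def softmax(z):
    m = max(z)                         # subtract max for numerical stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(sum(probs))  # -> 1.0 (up to floating-point rounding)
```

The largest input score always receives the largest probability, and the outputs form a valid probability distribution over the k classes.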

24. True or False: In a CNN, a max pooling operation always decreases the number of parameters.

A) True
B) False

Correct Answer : False

Explanation : This is not always true. With a max pooling layer of pooling size 1, the parameters remain the same.

25. True or False: Sentiment analysis using deep learning is a many-to-one prediction task.

A) True
B) False

Correct Answer : True

Explanation : True. From a sequence of words, you have to predict a single output: whether the sentiment was positive or negative.

26.

A) 128
B) 64
C) 256
D) 96

Correct Answer : 96

Explanation : The output is calculated as 3 × (1×4 + 2×5 + 6×3) = 3 × 32 = 96.
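The arithmetic in the explanation can be verified directly (the operands come from the quiz's input and kernel values, which are not shown here):

```python
# Checking the explanation's arithmetic: 3 * (1*4 + 2*5 + 6*3) = 96

inner = 1 * 4 + 2 * 5 + 6 * 3   # elementwise products summed: 4 + 10 + 18 = 32
output = 3 * inner
print(output)  # -> 96
```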

27. A Boltzmann machine can best be described as ___.

A) A feedback network with hidden units
B) A feed-forward network with hidden units
C) A feedback network with hidden units and probabilistic update
D) A feed-forward network with hidden units and probabilistic update

Correct Answer : A feedback network with hidden units and probabilistic update

Explanation : A Boltzmann machine is a feedback network with hidden units and probabilistic update.
