Neural Networks - Quiz (MCQ)
A)
Claude Shannon
B)
Vannevar Bush
C)
Warren Sturgis McCulloch
D)
John von Neumann

Correct Answer :   Warren Sturgis McCulloch


Explanation : Neural networks were first proposed in 1943 by Warren Sturgis McCulloch and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what is sometimes called the first cognitive science department.

A)
to make smart human interactive & user friendly system
B)
to apply heuristic search methods to find solutions of problem
C)
to solve tasks like machine vision & natural language processing
D)
All of the Above

Correct Answer :   All of the Above


Explanation : These are the basic aims that a neural network is designed to achieve.

A)
to be versatile
B)
to be task specific
C)
to solve complex problems
D)
to bring computer more & more closer to user

Correct Answer :   to bring computer more & more closer to user


Explanation : Software should be more interactive with the user, so that it can understand the user's problem better.

A)
Serial
B)
Serial or parallel
C)
Parallel
D)
None of the Above

Correct Answer :   Serial or parallel


Explanation : The ability to process information either serially or in parallel is a general characteristic of neural networks.

A)
pattern classification
B)
equal
C)
adjustment of weights
D)
either of them can be fast, depending on conditions

Correct Answer :   pattern classification


Explanation : Recall in a neural network is content-addressable, so pattern classification is fast; adjusting weights (learning) is comparatively slow.

A)
distributive nature of networks
B)
associative nature of networks
C)
both associative & distributive
D)
None of the Above

Correct Answer :   both associative & distributive

A)
Hopfield model of neuron
B)
McCulloch-Pitts neuron model
C)
Marvin Minsky neuron model
D)
None of the Above

Correct Answer :   McCulloch-Pitts neuron model


Explanation : The McCulloch-Pitts neuron model performs a weighted sum of its inputs followed by a threshold logic operation.
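
A minimal Python sketch of this model (the weights and threshold below are illustrative assumptions, not part of the quiz): the unit fires when the weighted sum of its binary inputs reaches the threshold.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Fire (output 1) iff the weighted sum of inputs reaches the threshold."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum >= threshold else 0

# Example: a 2-input AND gate realised as an M-P neuron.
print(mcculloch_pitts([1, 1], weights=[1, 1], threshold=2))  # -> 1
print(mcculloch_pitts([1, 0], weights=[1, 1], threshold=2))  # -> 0
```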

A)
human have emotions
B)
human have sense organs
C)
human have more IQ & intellect
D)
human perceive everything as a pattern while machine perceive it merely as data

Correct Answer :   human perceive everything as a pattern while machine perceive it merely as data


Explanation : Humans have emotions and thus form different patterns on that basis, while a machine (say, a computer) perceives everything merely as data.

A)
inputs
B)
predicting the future inputs
C)
related to storage & recall task
D)
find relation between 2 consecutive inputs

Correct Answer :   related to storage & recall task


Explanation : This is the basic definition of auto-association in neural networks.

A)
adaptive linear element
B)
adaptive line element
C)
automatic linear element
D)
None of the Above

Correct Answer :   adaptive linear element

A)
Representation of biological neural networks
B)
Mathematical representation of our understanding
C)
Both First & Second
D)
None of the Above

Correct Answer :   Both First & Second

A)
1
B)
2
C)
3
D)
4

Correct Answer :   2

A)
Recurrent Neural Network
B)
Recurring Neural Network
C)
Removable Neural Network
D)
None of the Above

Correct Answer :   Recurrent Neural Network

A)
Features of group explicitly stated
B)
Neither features nor the number of groups is known
C)
Number of groups may be known
D)
None of the above

Correct Answer :   Neither features nor the number of groups is known

A)
Text recognition
B)
Voice recognition
C)
Image recognition
D)
None of the Above

Correct Answer :   Voice recognition


Explanation : Voice recognition is hard because the same vowel may occur in different contexts, and its features vary over overlapping regions of different vowels.

A)
More generalized
B)
Time consuming
C)
Highly restricted
D)
None of the Above

Correct Answer :   Highly restricted


Explanation : Point-to-point pattern matching is carried out in the process, which makes it highly restricted.

A)
Output is static
B)
Input pattern has become static
C)
Output pattern keeps on changing
D)
Input pattern keeps on changing

Correct Answer :   Input pattern keeps on changing


Explanation : Input patterns in an AI (Artificial Intelligence) problem are dynamic in nature, i.e. they keep on changing.

A)
Static inputs
B)
Static inputs & categorization can’t be handled
C)
System can neither be stable nor plastic
D)
Dynamic inputs & categorization can’t be handled

Correct Answer :   Dynamic inputs & categorization can’t be handled


Explanation : If the system is allowed to change its categorization according to the inputs, it cannot be used for pattern classification and assessment.

A)
Boltzmann machine
B)
Perceptron
C)
Learning algorithms
D)
None of the Above

Correct Answer :   Boltzmann machine


Explanation : Ackley, Hinton and Sejnowski built the Boltzmann machine.

A)
Hopfield
B)
Marvin Minsky
C)
McCulloch-Pitts
D)
None of the Above

Correct Answer :   Marvin Minsky


Explanation : In 1954 Marvin Minsky developed the first learning machine in which connection strengths could be adapted automatically and efficiently.

A)
Hopfield
B)
Marvin Minsky
C)
Rosenblatt
D)
McCulloch-Pitts

Correct Answer :   Rosenblatt


Explanation : Rosenblatt proposed the first perceptron model in 1958.

A)
Energy analysis
B)
Learning algorithms
C)
Adaptive signal processing
D)
None of the Above

Correct Answer :   Energy analysis


Explanation : Energy analysis was the major contribution of Hopfield's work in 1982.

A)
Flexibility
B)
Collective computation
C)
Robustness & fault tolerance
D)
All of the Above

Correct Answer :   All of the Above


Explanation : An AI network should have all of the above-mentioned properties.

A)
Axon
B)
Neuron
C)
Brain
D)
Nucleus

Correct Answer :   Neuron


Explanation : The neuron is the most basic and fundamental unit of a network.

A)
Fibers of nerves
B)
Nuclear projections
C)
Other name for nucleus
D)
None of the Above

Correct Answer :   Fibers of nerves

A)
oval
B)
round
C)
tree
D)
rectangular

Correct Answer :   tree

A)
Physical Process
B)
Chemical Process
C)
Both (A) and (B)
D)
None of the Above

Correct Answer :   Chemical Process

A)
Below 5
B)
5-10
C)
10-80
D)
Above 100

Correct Answer :   10-80


Explanation : The average size of a neuron cell body lies in this range (in micrometers).

A)
50
B)
100
C)
150
D)
200

Correct Answer :   200

A)
Negative
B)
Neutral
C)
Positive
D)
May be positive or negative

Correct Answer :   Negative


Explanation : It is due to the presence of potassium ions in the neural fluid on the outer surface.

A)
Iron
B)
Sodium
C)
Potassium
D)
None of the Above

Correct Answer :   Potassium


Explanation : Potassium is the main constituent of the neural fluid and is responsible for the potential on the neuron body.

A)
+70 mV
B)
-70 mV
C)
+35 mV
D)
-35 mV

Correct Answer :   -70 mV


Explanation : The resting potential of a neuron membrane is about -70 mV, a value established by a series of experiments conducted by neuroscientists.

A)
2-5 m/s
B)
0.5-2 m/s
C)
5-10 m/s
D)
None of the Above

Correct Answer :   0.5-2 m/s


Explanation : The process is quite fast when compared with the length of a neuron.

A)
Regenerate & retain its original capacity
B)
Never be imperturbable to neural liquid
C)
Only the certain part get affected, while rest becomes imperturbable again
D)
None of the Above

Correct Answer :   Regenerate & retain its original capacity

A)
-50 mV
B)
-35 mV
C)
-65 mV
D)
-60 mV

Correct Answer :   -60 mV


Explanation : The cell membrane loses its impermeability to Na+ ions at -60 mV.

A)
receptors
B)
transmitter
C)
transmission
D)
None of the Above

Correct Answer :   transmission


Explanation : The axon is the main trunk of the neuron rather than an end structure, so it acts neither as a receptor nor as a transmitter; it carries out transmission along its length.

A)
Integrator
B)
Differentiator
C)
Summing
D)
None of the Above

Correct Answer :   Summing


Explanation : The summing of potentials (due to the neural fluid) arriving at different parts of the neuron is what causes it to fire.

A)
20 mV
B)
10 mV
C)
15 mV
D)
30 mV

Correct Answer :   10 mV


Explanation : This critical threshold value was established by a series of experiments conducted by neuroscientists.

A)
Hebb rule learning
B)
Memory based learning
C)
Error correction learning
D)
None of the Above

Correct Answer :   Hebb rule learning

A)
The system learns from its past mistakes
B)
The strength of neural connection gets modified accordingly
C)
The system recalls previous reference inputs & respective ideal outputs
D)
None of the Above

Correct Answer :   The strength of neural connection gets modified accordingly


Explanation : The tendency of a neuron to fire in the future increases if it is fired repeatedly; the strength of the connection is modified accordingly.

A)
10³
B)
10⁵
C)
10⁸
D)
10¹¹

Correct Answer :   10¹¹

A)
5*(10⁴)
B)
15*(10²)
C)
15*(10³)
D)
15*(10⁴)

Correct Answer :   15*(10⁴)

A)
Number of neuron is itself not precisely known
B)
Full operation is still not known of biological neurons
C)
Number of interconnection is very large & is very complex
D)
All of the Above

Correct Answer :   All of the Above


Explanation : These are all fundamental reasons why we cannot design a perfect neural network.

A)
10¹⁵
B)
10¹⁰
C)
10⁵
D)
10²⁰

Correct Answer :   10¹⁵


Explanation : You can estimate this value from the number of neurons in the human cortex and their density.

A)
Artificial resonance theory
B)
Adaptive resonance theory
C)
Automatic resonance theory
D)
None of the Above

Correct Answer :   Adaptive resonance theory

A)
Inhibitory input
B)
Can be either excitatory or inhibitory as such
C)
Excitatory input
D)
None of the Above

Correct Answer :   Excitatory input


Explanation : By the sign convention for neuron inputs, a positive weight corresponds to an excitatory input.

A)
Excitatory input
B)
Excitatory output
C)
Inhibitory input
D)
Inhibitory output

Correct Answer :   Inhibitory input


Explanation : By the sign convention for neuron inputs, a negative weight corresponds to an inhibitory input.

A)
Weight
B)
Input unit
C)
Output unit
D)
Activation value

Correct Answer :   Weight


Explanation : The activation value is the weighted sum of the inputs, which determines the output; hence the output depends on the weights.

A)
Widrow
B)
Minsky & papert
C)
McCulloch-Pitts
D)
Rosenblatt

Correct Answer :   Rosenblatt


Explanation : The perceptron is one of the earliest neural networks. Invented at the Cornell Aeronautical Laboratory in 1957 by Frank Rosenblatt, the Perceptron was an attempt to understand human memory, learning, and cognitive processes.

A)
Output unit
B)
Association unit
C)
Summing unit
D)
Sensory units

Correct Answer :   Association unit


Explanation : This was the very speciality of the perceptron model: it performs association mapping on the outputs of the sensory units.

A)
Learning enabled
B)
More inputs can be incorporated
C)
Both (A) and (B)
D)
None of the Above

Correct Answer :   Learning enabled

A)
Difference between desired & target output
B)
Can be both due to difference in target output or environmental condition
C)
Error due to environmental condition
D)
None of the Above

Correct Answer :   Error due to environmental condition


Explanation : All other parameters are assumed to be null while calculating the error in the perceptron model; only the difference between the desired and the target output is taken into account.

A)
Synchronously
B)
Asynchronously
C)
Both Synchronously & Asynchronously
D)
None of the above

Correct Answer :   Both Synchronously & Asynchronously


Explanation : Outputs can be updated at the same time (synchronously) or at different times (asynchronously) in these networks.

A)
Output units are updated in parallel fashion
B)
Output units are updated sequentially
C)
Can be either sequentially or in parallel fashion
D)
None of the Above

Correct Answer :   Output units are updated sequentially


Explanation : In asynchronous update, outputs are updated at different times in the network.

A)
Learning law
B)
Synchronisation
C)
Learning algorithm
D)
Both learning algorithm & law

Correct Answer :   Both learning algorithm & law


Explanation : This follows from the basic definition of a learning law in neural networks.

A)
Widrow
B)
Werbos
C)
Hopfield
D)
Rosenblatt

Correct Answer :   Widrow


Explanation : Widrow invented the Adaline neural model.

A)
Weights are compared with output
B)
Sensory units result is compared with output
C)
Analog activation value is compared with output
D)
All of the Above

Correct Answer :   Analog activation value is compared with output


Explanation : Comparing the analog activation value with the output, instead of the desired output as in the perceptron model, was the main point of difference between the Adaline and perceptron models.

A)
LMS error learning law
B)
Gradient descent algorithm
C)
Both LMS error & Gradient descent learning law
D)
None of the Above

Correct Answer :   Both LMS error & Gradient descent learning law


Explanation : The weight update rule minimizes the mean squared error (delta squared), averaged over all inputs, and this law is derived using the negative gradient of the error surface in weight space.
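
A minimal sketch of this update in Python, assuming a single linear unit (the learning rate, data, and iteration count are illustrative): each step moves the weights along the negative gradient of the squared error.

```python
import numpy as np

def lms_update(w, a, b, mu=0.1):
    """One LMS (Widrow-Hoff) step: w <- w + mu * (b - w.a) * a,
    i.e. the negative gradient of (b - w.a)**2 / 2 w.r.t. w."""
    s = w @ a                    # analog activation value
    return w + mu * (b - s) * a

w = np.zeros(3)
a = np.array([1.0, 0.5, -1.0])   # input pattern
b = 2.0                          # desired output
for _ in range(50):
    w = lms_update(w, a, b)
print(w @ a)                     # approaches 2.0
```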

A)
Interlayer
B)
Intralayer
C)
Both Interlayer and Intralayer
D)
None of the Above

Correct Answer :   Both Interlayer and Intralayer


Explanation : Connections can be made from one unit to another across layers (interlayer) and within the units of a layer (intralayer).

A)
In feedback manner
B)
In feedforward manner
C)
Both feedforward & feedback
D)
Either feedforward & feedback

Correct Answer :   Either feedforward & feedback


Explanation : Connections across layers in standard topologies can be made in a feedforward manner or in a feedback manner, but not both.

A)
when input is given to layer F1, the jth (say) unit of the other layer F2 will be activated to maximum extent
B)
when the weight vector for connections from the jth unit (say) in F2 approaches the activity pattern in F1 (comprising the input vector)
C)
Can be either way
D)
None of the Above

Correct Answer :   when input is given to layer F1, the jth (say) unit of the other layer F2 will be activated to maximum extent


Explanation : Restatement of basic definition of instar.

A)
when input is given to layer F1, the jth (say) unit of the other layer F2 will be activated to maximum extent
B)
when the weight vector for connections from the jth unit (say) in F2 approaches the activity pattern in F1 (comprising the input vector)
C)
Can be either way
D)
None of the Above

Correct Answer :   when the weight vector for connections from the jth unit (say) in F2 approaches the activity pattern in F1 (comprising the input vector)


Explanation : Restatement of basic definition of outstar.

A)
Short Term Memory
B)
Short Topology Memory
C)
Stimulated Topology Memory
D)
None of the Above

Correct Answer :   Short Term Memory


Explanation : This is the full form of Short Term Memory (STM).

A)
Either way
B)
Encoded pattern information pattern in synaptic weights
C)
Activation state of network
D)
All of the Above

Correct Answer :   Activation state of network

A)
Either way
B)
Activation state of network
C)
Both (A) and (B)
D)
Encoded pattern information pattern in synaptic weights

Correct Answer :   Encoded pattern information pattern in synaptic weights


Explanation : Long-Term Memory (LTM) is the encoding and retention of an effectively unlimited amount of information for a much longer period of time; it is stored in the synaptic weights, hence the option.

A)
input vector
B)
learning signal
C)
learning parameters
D)
All of the Above

Correct Answer :   All of the Above


Explanation : The change in the weight vector corresponding to the jth input at time (t+1) depends on all of these parameters.

A)
∆wij = µ f(wi·a) aj
B)
∆wij = µ si aj, where si is the output signal of the ith unit
C)
Both (A) and (B)
D)
None of the Above

Correct Answer :   Both (A) and (B)


Explanation : si = f(wi·a) in Hebb's law, so the two forms are equivalent.
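
Both forms reduce to the same Python sketch (µ, the output function f, and the vectors are illustrative assumptions):

```python
import numpy as np

def hebb_update(w_i, a, mu=0.01, f=np.tanh):
    """Hebb's law: dw_ij = mu * s_i * a_j, with s_i = f(w_i . a)."""
    s_i = f(w_i @ a)           # output signal of the ith unit
    return w_i + mu * s_i * a  # weight grows with input-output correlation

w = np.array([0.1, -0.2, 0.3])
a = np.array([1.0, 0.0, 1.0])
w = hebb_update(w, a)          # repeated presentations keep strengthening w
```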

A)
hebb learning law
B)
delta learning law
C)
perceptron learning law
D)
none of the above

Correct Answer :   delta learning law


Explanation : The output function in this law is assumed to be linear; all other things remain the same.

A)
LMS
B)
MMS
C)
Hebb
D)
None of the Above

Correct Answer :   LMS


Explanation : LMS stands for least mean square. The change in weight is made proportional to the negative gradient of the error, which is possible because of the linearity of the output function.

A)
∆wij = µ si aj
B)
∆wij = µ (bi – si) aj
C)
∆wij = µ (bi – (wi·a)) aj
D)
∆wij = µ (bi – si) f′(xi) aj, where f′(xi) is the derivative of the output function at xi

Correct Answer :   ∆wij = µ (bi – si) aj


Explanation : The perceptron learning law is a supervised, nonlinear type of learning.
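
A minimal sketch of the perceptron learning law stated above, assuming a bipolar threshold output (values are illustrative):

```python
import numpy as np

def perceptron_update(w, a, b, mu=0.1):
    """Perceptron law: dw = mu * (b - s) * a, with s = sign(w . a)."""
    s = 1.0 if w @ a >= 0 else -1.0  # nonlinear (bipolar) output
    return w + mu * (b - s) * a      # nonzero only when b != s

w = np.zeros(2)
a = np.array([1.0, -1.0])
w = perceptron_update(w, a, b=-1.0)  # misclassified, so w moves
print(w)                             # [-0.2  0.2]
```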

A)
Supervised
B)
Unsupervised
C)
Both supervised or unsupervised
D)
Either supervised or unsupervised

Correct Answer :   Supervised


Explanation : It is supervised, since the law depends on the target output.

A)
∆wij = µ si aj
B)
∆wij = µ (bi – si) aj
C)
∆wij = µ (bi – si) f′(xi) aj, where f′(xi) is the derivative of the output function at xi
D)
∆wk = µ (a – wk), where unit k with maximum output is identified

Correct Answer :   ∆wk = µ (a – wk), where unit k with maximum output is identified


Explanation : This follows from the basic definition of the instar learning law.
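
In code, the instar law picks the unit with maximum output and moves its weight vector toward the input (network size and data are illustrative assumptions):

```python
import numpy as np

def instar_update(W, a, mu=0.1):
    """Instar law: dw_k = mu * (a - w_k) for the unit k with maximum output."""
    k = int(np.argmax(W @ a))   # winning unit
    W[k] += mu * (a - W[k])     # move its weights toward the input
    return k

rng = np.random.default_rng(0)
W = rng.random((4, 3))          # 4 competing units, 3 inputs
k = instar_update(W, np.array([1.0, 0.0, 0.5]))
```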

A)
neural level dynamics
B)
synaptic dynamics
C)
can be either neural or synaptic dynamics
D)
None of the Above

Correct Answer :   synaptic dynamics


Explanation : Weights are best determined by synaptic dynamics, which directly governs the changes in connection strengths.

A)
Synaptic
B)
Insufficient information
C)
Neural level
D)
None of the Above

Correct Answer :   Neural level

A)
short term memory
B)
long term memory
C)
either short or long term
D)
both short & long term

Correct Answer :   short term memory


Explanation : It depends on the input pattern, and the input changes from moment to moment; hence it is short term memory.

A)
the ability of a pattern recognition system to approximate the desired output values for pattern vectors which are not in the test set.
B)
the ability of a pattern recognition system to approximate the desired output values for pattern vectors which are not in the training set.
C)
Both (A) and (B)
D)
None of the Above

Correct Answer :   the ability of a pattern recognition system to approximate the desired output values for pattern vectors which are not in the training set.

A)
neural
B)
synaptic
C)
activation
D)
both synaptic & neural

Correct Answer :   activation


Explanation : Activation dynamics depends on the input pattern; hence any change in the input pattern will affect the activation dynamics of the network.

A)
Limited neural fluid
B)
Limited fan in capacity of inputs
C)
Both limited neural fluid & fan in capacity
D)
None of the Above

Correct Answer :   None of the Above


Explanation : It is due to the limited current-carrying capacity of the cell membrane.

A)
At saturation state neuron will stop working, while biologically it’s not feasible
B)
How can a neuron with limited operating range be made sensitive to nearly unlimited range of inputs
C)
Can be either way
D)
None of the Above

Correct Answer :   How can a neuron with limited operating range be made sensitive to nearly unlimited range of inputs


Explanation : Threshold value setting has to be adjusted properly.

A)
2
B)
3
C)
4
D)
5

Correct Answer :   2


Explanation : Broadly, there exist two kinds of stability in neural networks: structural stability and global stability.

A)
When only synaptic dynamics is in equilibrium
B)
When only activation dynamics is in equilibrium
C)
When both synaptic & activation dynamics are simultaneously used & are in equilibrium
D)
None of the Above

Correct Answer :   None of the Above


Explanation : It refers to a state-equilibrium situation in which small perturbations bring the network back to equilibrium.

A)
When only synaptic dynamics is in equilibrium
B)
When only synaptic & activation dynamics are used
C)
When both synaptic & activation dynamics are simultaneously used & are in equilibrium
D)
None of the Above

Correct Answer :   When both synaptic & activation dynamics are simultaneously used & are in equilibrium


Explanation : Global stability means the network as a whole is stable.

A)
To make system static
B)
To make system dynamic
C)
To keep operating range of activation value to a specified range
D)
None of the Above

Correct Answer :   To keep operating range of activation value to a specified range


Explanation : Stabilizing and bounding the otherwise unbounded range of the activation value was the primary goal of this model.

A)
ẋ(t) = deterministic model
B)
ẋ(t) = deterministic model × noise component
C)
Both (A) and (B)
D)
ẋ(t) = deterministic model + noise component

Correct Answer :   ẋ(t) = deterministic model + noise component


Explanation : Noise is assumed to be additive in nature in stochastic models.

A)
Change in state, when small perturbations occur
B)
Settlement of network, when small perturbations occur
C)
Deviation in present state, when small perturbations occur
D)
None of the Above

Correct Answer :   Settlement of network, when small perturbations occur


Explanation : Follows from basic definition of equilibrium.

A)
ẋ(t) = 0
B)
ẋ(t) = 1
C)
ẋ(t) = n(t) + 1
D)
ẋ(t) = n(t), where n is the noise component

Correct Answer :   ẋ(t) = n(t), where n is the noise component


Explanation : ẋ(t) = 0 is the equilibrium condition for deterministic models; in a stochastic model the fluctuation about equilibrium equals the noise component.

A)
Update to all units is done at the same time
B)
Change in state of any one unit drive the whole network
C)
Change in state of any number of units drive the whole network
D)
None of the Above

Correct Answer :   Update to all units is done at the same time


Explanation : In synchronous update, all units are updated at the same time; in asynchronous update, a change in the state of any one unit drives the whole network.

A)
Convergence of weights
B)
Learning should use only local weights
C)
Learning time should be as small as possible
D)
All of the Above

Correct Answer :   All of the Above


Explanation : These are some of the basic requirements of learning laws.

A)
Memory
B)
Older memory in general
C)
Short term memory in general
D)
None of the Above

Correct Answer :   Short term memory in general


Explanation : Memory decay affects short term memory rather than older memories.

A)
it is distributed in localised weights
B)
it is distributed in certain specific weights only
C)
it is distributed all across the weights
D)
All of the Above

Correct Answer :   it is distributed all across the weights


Explanation : Pattern information is highly distributed across all the weights.

A)
synaptic strength is proportional to correlation between firing of presynaptic neuron only
B)
synaptic strength is proportional to correlation between firing of post & presynaptic neuron
C)
synaptic strength is proportional to correlation between firing of postsynaptic neuron only
D)
None of the Above

Correct Answer :   synaptic strength is proportional to correlation between firing of post & presynaptic neuron

A)
learning laws which modulate difference between synaptic weight & output signal
B)
learning laws which modulate difference between actual output & desired output
C)
learning laws which modulate difference between synaptic weight & activation value
D)
None of the Above

Correct Answer :   learning laws which modulate difference between synaptic weight & output signal


Explanation : Competitive learning laws modulate the difference between the synaptic weight and the output signal.

A)
synaptic strength is proportional to correlation between firing of postsynaptic neuron only
B)
synaptic strength is proportional to correlation between firing of presynaptic neuron only
C)
synaptic strength is proportional to correlation between firing of post & presynaptic neuron
D)
synaptic strength is proportional to changes in correlation between firing of post & presynaptic neuron

Correct Answer :   synaptic strength is proportional to changes in correlation between firing of post & presynaptic neuron


Explanation : In differential Hebbian learning, synaptic strength is proportional to changes in the correlation between the firing of the post- and presynaptic neurons.

A)
synaptic strength is proportional to changes of postsynaptic neuron only
B)
synaptic strength is proportional to changes of post & presynaptic neuron
C)
synaptic strength is proportional to changes of presynaptic neuron only
D)
None of the Above

Correct Answer :   None of the Above


Explanation : Differential competitive learning is based on changes of the postsynaptic neuron only.

A)
learning is based on evaluative signal
B)
learning is based o desired output for an input
C)
learning is based on both desired output & evaluative signal
D)
None of the Above

Correct Answer :   learning is based on evaluative signal


Explanation : Reinforcement learning is based on an evaluative signal.

A)
To determine convergence
B)
To determine stability
C)
Both (A) and (B)
D)
None of the Above

Correct Answer :   To determine stability

A)
v(x) =0
B)
v(x) >=0
C)
v(x) <=0
D)
None of the Above

Correct Answer :   v(x) <=0


Explanation : This is the condition a Lyapunov function must satisfy for the system to be stable.
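
Stated in standard notation, the Lyapunov conditions being referenced are (a textbook restatement, not taken from the quiz):

```latex
V(\mathbf{x}) \ge 0, \qquad
\dot{V}(\mathbf{x}) = \frac{dV(\mathbf{x})}{dt} \le 0
```

If such a function V exists along the trajectories of the network, the state converges to the minima of V, which establishes stability.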

A)
shows the stability of fixed weight autoassociative networks
B)
shows the stability of adaptive autoassociative networks
C)
shows the stability of adaptive heteroassociative networks
D)
None of the Above

Correct Answer :   shows the stability of fixed weight autoassociative networks


Explanation : The Cohen-Grossberg theorem shows the stability of fixed-weight autoassociative networks.

A)
Shows the stability of adaptive heteroassociative networks
B)
Shows the stability of adaptive autoassociative networks
C)
Shows the stability of fixed weight autoassociative networks
D)
None of the Above

Correct Answer :   Shows the stability of adaptive autoassociative networks


Explanation : The Cohen-Grossberg-Kosko theorem shows the stability of adaptive autoassociative networks.

A)
weight changes are suppressed
B)
input to the network determines the output activation
C)
both process has to happen
D)
none of the above

Correct Answer :   both process has to happen


Explanation : Follows from basic definition of Recall in a network.

A)
Pattern mapping
B)
Pattern association
C)
Pattern classification
D)
All of the Above

Correct Answer :   All of the Above


Explanation : Feedforward networks are used for pattern mapping, pattern association, pattern classification.

A)
generalization
B)
pattern storage
C)
pattern classification
D)
All of the Above

Correct Answer :   generalization


Explanation : A network for pattern mapping is expected to perform generalization.

A)
hoff learning law
B)
widrow learning law
C)
no learning law
D)
hebb learning law

Correct Answer :   hebb learning law


Explanation : For orthogonal input vectors, Hebb learning law is best suited.

A)
hebb learning law
B)
widrow learning law
C)
hoff learning law
D)
no learning law

Correct Answer :   widrow learning law


Explanation : For linearly independent input vectors, the Widrow learning law is best suited.

A)
scaling
B)
translation
C)
arbitrary rotation
D)
all of the above

Correct Answer :   all of the above


Explanation : Affine transformations can be used to do arbitrary rotation, scaling, translation.

A)
addition of bias term (+1) which results in arbitrary rotation, scaling, translation of input pattern
B)
addition of bias term (-1) or (+1) which results in arbitrary rotation, scaling, translation of input pattern
C)
addition of bias term (-1) which results in arbitrary rotation, scaling, translation of input pattern
D)
none of the above

Correct Answer :   addition of bias term (-1) which results in arbitrary rotation, scaling, translation of input pattern

A)
number of inputs
B)
total number of classes
C)
number of distinct classes
D)
None of the above

Correct Answer :   number of distinct classes


Explanation : Number of output cases depends on number of distinct classes.

A)
adjust weight along with class identification
B)
class identification
C)
weight adjustment
D)
none of the above

Correct Answer :   adjust weight along with class identification

A)
small adjustments in weights are made
B)
no adjustments in weights are made
C)
large adjustments in weights are made
D)
weight adjustments don't depend on classification of the input vector

Correct Answer :   no adjustments in weights are made


Explanation : No adjustments in weights are made, since the input has been correctly classified, which is the objective of the system.

A)
there may exist straight lines that can touch each other
B)
there is only one straight line that separates them
C)
there may exist straight lines that doesn’t touch each other
D)
all of the above

Correct Answer :   there is only one straight line that separates them


Explanation : Linearly separable classes are those that can be separated by a straight line.

A)
binary
B)
bipolar
C)
both binary and bipolar
D)
none of the above

Correct Answer :   both binary and bipolar


Explanation : The perceptron convergence theorem is applicable for both binary and bipolar input, output data.

A)
when there are restriction but other than linear separability
B)
when there may be restrictions such as linear separability placed on input – output patterns
C)
Both (A) and (B)
D)
when no restrictions such as linear separability is placed on the set of input – output pattern pairs

Correct Answer :   when no restrictions such as linear separability is placed on the set of input – output pattern pairs

A)
all units should be linear
B)
all units should be non-linear
C)
except the input layer, all units in other layers should be non-linear
D)
none of the above

Correct Answer :   except the input layer, all units in other layers should be non-linear


Explanation : To provide generalization capability to a network, all units in layers other than the input layer should be non-linear.

A)
number of units in third layer
B)
number of units in second layer
C)
overall number of units in hidden layers
D)
none of the above

Correct Answer :   overall number of units in hidden layers


Explanation : The nature of the mapping problem decides the overall number of units in hidden layers.

A)
using nonlinear differentiable output function for output layers
B)
using nonlinear differentiable output function for output and hidden layers
C)
using nonlinear differentiable output function for hidden layers
D)
it cannot be solved

Correct Answer :   using nonlinear differentiable output function for output and hidden layers


Explanation : A hard learning problem is solved by using nonlinear differentiable output functions for the output and hidden layers.

A)
the number of inputs
B)
the number of outputs
C)
both the number of inputs and outputs
D)
the overall characteristics of the mapping problem

Correct Answer :   the overall characteristics of the mapping problem


Explanation : The number of units in hidden layers depends on the overall characteristics of the mapping problem.

A)
problem is autoassociation
B)
problem is heteroassociation
C)
can be either auto or heteroassociation
D)
none of the above

Correct Answer :   problem is autoassociation


Explanation : When a(l) = b(l), the problem is classified as autoassociation.

A)
design of network
B)
nature of input-output
C)
both input & design
D)
None of the above

Correct Answer :   both input & design


Explanation : The recalled output in pattern association problem depends on both input & design of network.

A)
network exhibits accretive behaviour
B)
network exhibits interpolative behaviour
C)
exhibits both accretive & interpolative behaviour
D)
none of the above

Correct Answer :   network exhibits interpolative behaviour


Explanation : This follows from the basic definition of interpolative behaviour in neural networks.

A)
to develop learning algorithm for multilayer feedforward neural network
B)
to develop learning algorithm for single layer feedforward neural network
C)
to develop learning algorithm for multilayer feedforward neural network, so that network can be trained to capture the mapping implicitly
D)
none of the above

Correct Answer :   to develop learning algorithm for multilayer feedforward neural network, so that network can be trained to capture the mapping implicitly


Explanation : The objective of the backpropagation algorithm is to develop a learning algorithm for multilayer feedforward neural networks, so that a network can be trained to capture the mapping implicitly.

A)
it is also called generalized delta rule
B)
there is no feedback of signal at nay stage
C)
error in output is propagated backwards only to determine weight updates
D)
all of the above

Correct Answer :   all of the above


Explanation : All of these statements define the backpropagation algorithm.
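
A minimal numpy sketch of this generalized delta rule for one hidden layer (the 2-3-1 architecture, data, and learning rate are illustrative assumptions): signals flow only forward, while the output error is propagated backwards solely to compute the weight updates.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))
a, b, mu = np.array([1.0, 0.0]), np.array([1.0]), 0.5

for _ in range(1000):
    h = sigmoid(W1 @ a)                      # forward pass
    y = sigmoid(W2 @ h)
    delta2 = (b - y) * y * (1 - y)           # output-layer error term
    delta1 = (W2.T @ delta2) * h * (1 - h)   # error propagated backwards
    W2 += mu * np.outer(delta2, h)           # generalized delta rule
    W1 += mu * np.outer(delta1, a)
print(sigmoid(W2 @ sigmoid(W1 @ a)))         # approaches the target 1.0
```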

A)
scaling
B)
slow convergence
C)
local minima problem
D)
all of the above

Correct Answer :   all of the above


Explanation : These are all limitations of the backpropagation algorithm in general.

A)
on basis of average gradient value
B)
no heuristic criteria exist
C)
there is convergence involved
D)
none of the above

Correct Answer :   on basis of average gradient value


Explanation : If the average gradient value falls below a preset threshold, the process may be stopped.

A)
recall
B)
pattern storage
C)
pattern classification
D)
all of the above

Correct Answer :   pattern storage


Explanation : By using a non-linear output function for each processing unit, a feedback network can be used for pattern storage.

A)
number of units
B)
strength of connecting links
C)
both number of units and strength of connecting links
D)
none of the above

Correct Answer :   both number of units and strength of connecting links


Explanation : The number of patterns that can be stored in a given network depends on the number of units and the strength of the connecting links.

A)
no effect
B)
error in recall
C)
pattern storage is not possible in that case
D)
none of the above

Correct Answer :   error in recall


Explanation : Due to additional false minima, there is error in recall.

A)
due to noise
B)
due to additional false maxima
C)
due to additional false minima
D)
none of the above

Correct Answer :   due to additional false minima

A)
by using probabilistic update
B)
by providing additional units
C)
can be either probabilistic update or using additional units
D)
none of the above

Correct Answer :   by using probabilistic update


Explanation : False minima are suppressed by probabilistic update; additional units solve hard storage problems, not false minima.

A)
to recall a give set of patterns
B)
to store a given set of patterns
C)
both to store and recall
D)
none of the above

Correct Answer :   both to store and recall


Explanation : The objective of a pattern storage task in a network is to store and recall a given set of patterns.

A)
it should not take place when relations are disturbed
B)
there is no such objective of recall, it depends on the system
C)
Both (A) and (B)
D)
it should take place when relations are slightly disturbed

Correct Answer :   it should take place when relations are slightly disturbed


Explanation : The pattern recall should take place even though features and their spatial relations are slightly disturbed due to noise.

A)
by a feedforward network consisting of processing units with linear output functions
B)
by a feedback network consisting of processing units with linear output functions
C)
by a feedback network consisting of processing units with non linear output functions
D)
by a feedforward network consisting of processing units with non linear output functions

Correct Answer :   by a feedback network consisting of processing units with non linear output functions


Explanation : The pattern storage task is generally accomplished by a feedback network consisting of processing units with non-linear output functions.

A)
activation dynamics
B)
synaptic dynamics
C)
both activation and synaptic dynamics
D)
none of the above

Correct Answer :   activation dynamics


Explanation : The trajectory of the state is determined by activation dynamics.

A)
states at energy minima
B)
states at energy maxima
C)
just a state of the network
D)
none of the above

Correct Answer :   none of the above


Explanation : The term trajectory of states means the states of the network at successive instants of time.

A)
network states
B)
network parameters
C)
Both (A) and (B)
D)
None of the Above

Correct Answer :   Both (A) and (B)

A)
leads to small deviations
B)
leads to fluctuation around
C)
may lead to deviation or fluctuation depends on external noise
D)
none of the above

Correct Answer :   leads to small deviations


Explanation : Basins of attraction in the energy landscape mean that small perturbations lead only to small deviations.

A)
number of inputs it can take
B)
number of output it can deliver
C)
number of patterns that can be stored
D)
none of the above

Correct Answer :   number of patterns that can be stored


Explanation : The capacity of a network is the number of patterns that can be stored.

A)
dependent
B)
independent
C)
dependent or independent
D)
none of the above

Correct Answer :   independent


Explanation : The number of desired patterns is independent of the number of basins of attraction.

A)
false wells
B)
storage problem becomes hard problem
C)
no storage and recall can take place
D)
none of the above

Correct Answer :   false wells


Explanation : False wells are created when the number of patterns is less than the number of basins of attraction.

A)
when number of patterns is same as number of basins of attraction
B)
when number of patterns is less than number of basins of attraction
C)
when number of patterns is more than number of basins of attraction
D)
none of the above

Correct Answer :   when number of patterns is less than number of basins of attraction


Explanation : False wells are created when the number of patterns is less than the number of basins of attraction.

A)
when number of patterns is less than number of basins of attraction
B)
when number of patterns is same as number of basins of attraction
C)
Both (A) and (B)
D)
when number of patterns is more than number of basins of attraction

Correct Answer :   when number of patterns is more than number of basins of attraction


Explanation : When the number of patterns is more than the number of basins of attraction, the storage problem becomes a hard problem.

A)
all units are updated simultaneously
B)
a predefined unit is selected and its new state is computed
C)
a unit is selected at random and its new state is computed
D)
none of the above

Correct Answer :   a unit is selected at random and its new state is computed


Explanation : In asynchronous update, a unit is selected at random and its new state is computed.

A)
current state
B)
next state
C)
both current and next state
D)
none of the above

Correct Answer :   current state


Explanation : In a stable state, the updated value equals the current state.

A)
false wells
B)
fluctuations in energy landscape
C)
Both (A) and (B)
D)
basins of attraction corresponding to energy minimum

Correct Answer :   basins of attraction corresponding to energy minimum

A)
symmetry of weights
B)
asynchronous update
C)
symmetry of weights and asynchronous update
D)
none of the above

Correct Answer :   symmetry of weights and asynchronous update


Explanation : For the analysis of storage capacity, the conditions of weight symmetry and asynchronous update are imposed on the Hopfield model.
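
Both imposed conditions show up directly in a minimal Hopfield sketch (the patterns and probe are illustrative): the weight matrix is symmetric with zero diagonal, and units are updated asynchronously, one at a time.

```python
import numpy as np

rng = np.random.default_rng(1)
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1]])

# Symmetric Hebbian weights with zero self-connections.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

s = np.array([-1, -1, 1, -1, 1, -1])  # stored pattern with one bit flipped
for _ in range(30):                    # asynchronous update
    i = rng.integers(len(s))           # one randomly chosen unit at a time
    h = W[i] @ s
    if h != 0:
        s[i] = 1 if h > 0 else -1
print(s)                               # settles back to patterns[0]
```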

A)
sigma network
B)
stochastic network
C)
hopfield network
D)
None of the above

Correct Answer :   stochastic network


Explanation : This is the basic equation of a stochastic network.

A)
Static
B)
Dynamic
C)
Neutral
D)
None of the above

Correct Answer :   Static


Explanation : In case of deterministic update, static equilibrium is reached.

A)
Static
B)
Dynamic
C)
Neutral
D)
None of the above

Correct Answer :   Dynamic


Explanation : In case of stochastic update, dynamic equilibrium is reached.

A)
Maximum field approximation
B)
Median field approximation
C)
Minimum field approximation
D)
None of the Above

Correct Answer :   None of the Above


Explanation : It is known as mean field approximation.

A)
At any temperature
B)
At critical temperature
C)
Above critical temperature
D)
Below critical temperature

Correct Answer :   Below critical temperature


Explanation : Stochastic network exhibits stable states below critical temperature.

A)
Patterns are stored appropriately
B)
Inputs are captured appropriately
C)
Weights are chosen appropriately
D)
None of the Above

Correct Answer :   Weights are chosen appropriately


Explanation : Probability of error in recall of stored patterns can be reduced if weights are chosen appropriately.

A)
Determining structure
B)
Determining future inputs
C)
Determining desired outputs
D)
None of the Above

Correct Answer :   None of the Above


Explanation : Pattern environment is useful for determining weights.

A)
Directly
B)
Inversely
C)
No relation
D)
Directly or inversely

Correct Answer :   Directly


Explanation : Energy minima are directly related to the probability of occurrence of the corresponding patterns in the environment.

A)
Extra stable states
B)
Spurious stable states
C)
Approximation in pattern environment representation
D)
All of the Above

Correct Answer :   All of the Above


Explanation : These are the primary reasons for the existence of a non-zero probability of error.

A)
feedback manner
B)
feedforward manner
C)
feedforward or feedback
D)
feedforward and feedback

Correct Answer :   feedforward manner


Explanation : The output of the input layer is given to the second layer through adaptive feedforward weights.

A)
Input layer
B)
Second layer
C)
Both Input and Second layer
D)
None of the Above

Correct Answer :   Input layer


Explanation : The second layer has weights which give feedback to the layer itself.

A)
Self inhibitory
B)
Self excitatory
C)
Self excitatory or self inhibitory
D)
None of the Above

Correct Answer :   Self excitatory


Explanation : The output of each unit in the second layer is fed back to itself in a self-excitatory manner.

A)
feedback paths
B)
feedforward paths
C)
either feedforward or feedback
D)
combination of feedforward and feedback

Correct Answer :   combination of feedforward and feedback


Explanation : A competitive learning neural network is a combination of feedforward and feedback connection layers, resulting in some kind of competition.

A)
Self inhibitory
B)
Self excitatory
C)
Self organization
D)
None of the Above

Correct Answer :   Self organization


Explanation : A competitive network that can perform feature mapping is called a self-organization network.

A)
Receives inputs from all others
B)
Gives output to all others
C)
May receive or give input or output to others
D)
None of the above

Correct Answer :   Receives inputs from all others


Explanation : An instar receives inputs from all other input units.

A)
such that it moves towards the input vector
B)
such that it moves away from input vector
C)
such that it moves away from output vector
D)
such that it moves towards the output vector

Correct Answer :   such that it moves towards the input vector


Explanation : Weight vector is adjusted such that it moves towards the input vector.

A)
w(t + 1) = w(t)
B)
w(t + 1) = w(t) + ∆w(t)
C)
w(t + 1) = w(t) – ∆w(t)
D)
None of the above

Correct Answer :   w(t + 1) = w(t) + ∆w(t)


Explanation : The weight update in basic competitive learning is w(t + 1) = w(t) + ∆w(t).
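
Applied over many samples, this update performs vector quantization; a sketch with the winner chosen as the nearest weight vector, which is equivalent to maximum output for normalized vectors (data and rates are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.random((2, 2))                         # weight vectors of 2 competing units
data = np.vstack([rng.normal(0.2, 0.05, (50, 2)),
                  rng.normal(0.8, 0.05, (50, 2))])
rng.shuffle(data)

mu = 0.05
for a in data:
    k = np.argmin(((W - a) ** 2).sum(axis=1))  # competition: nearest unit wins
    W[k] = W[k] + mu * (a - W[k])              # w(t + 1) = w(t) + ∆w(t)
print(W)                                       # rows approach the two cluster centres
```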

A)
Geometrical arrangement
B)
Significance attached to neighbouring units
C)
Nonlinear units
D)
None of the above

Correct Answer :   None of the above


Explanation : Both the geometrical arrangement and the significance attached to neighbouring units make it distinct.

A)
This technique allows incremental learning
B)
Error is defined as MSE between neurons net input and its desired output
C)
Uses gradient descent to determine the weight vector that leads to minimal error
D)
All of the Above

Correct Answer :   All of the Above


Explanation : Incremental learning means refining the weights as more training samples are added; the rest are basic statements that define Adaline learning.

A)
0
B)
1
C)
0 or 1
D)
None of the Above

Correct Answer :   None of the Above


Explanation : The weight change in plain Hebbian learning can never be zero.

A)
Divergent
B)
Convergent
C)
May be convergent or divergent
D)
None of the Above

Correct Answer :   Divergent


Explanation : In plain Hebbian learning, the weights keep growing without bound.

A)
Feed layer
B)
Feedback layer
C)
Competitive layer
D)
No such name exist

Correct Answer :   Competitive layer


Explanation : Feedback layer in competitive neural networks is also known as competitive layer.

A)
Inhibitory to self and others
B)
Self excitatory to self and others
C)
Self excitatory to self and inhibitory to others
D)
Inhibitory to self and excitatory to others

Correct Answer :   Self excitatory to self and inhibitory to others


Explanation : In the second layer of a competitive network, each unit has self-excitatory feedback and inhibitory connections to the others, which makes the layer competitive.

A)
Heteroassociative memory
B)
Autoassociative memory
C)
Temporal associative memory
D)
Multidirectional assocative memory

Correct Answer :   Autoassociative memory


Explanation : If the weight matrix stores the given patterns, then the network becomes an autoassociative memory.

A)
Autoassociative memory
B)
Heteroassociative memory
C)
Temporal associative memory
D)
Multidirectional associative memory

Correct Answer :   Multidirectional associative memory


Explanation : If the weight matrix stores multiple associations among several patterns, then the network becomes a multidirectional associative memory.

A)
Unidirectional memory
B)
Bidirectional memory
C)
Temporal associative memory
D)
Multidirectional associative memory

Correct Answer :   Bidirectional memory


Explanation : Heteroassociative memory is also known as bidirectional memory.

A)
To store pattern pairs
B)
To recall pattern pairs
C)
To store a set of pattern pairs so that they can be recalled by giving either pattern as input
D)
None of the Above

Correct Answer :   To store a set of pattern pairs so that they can be recalled by giving either pattern as input


Explanation : The objective of a BAM, i.e. Bidirectional Associative Memory, is to store a set of pattern pairs so that they can be recalled by giving either pattern of a pair as input.
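
A minimal BAM sketch (the stored pairs are illustrative assumptions): pattern pairs are encoded as summed outer products, and presenting either member of a pair recalls the other.

```python
import numpy as np

def bipolar(x):
    return np.where(x >= 0, 1, -1)

# Store the pattern pairs (A[i], B[i]) in a single weight matrix.
A = np.array([[1, -1, 1], [-1, 1, 1]])
B = np.array([[1, 1], [-1, 1]])
W = sum(np.outer(a, b) for a, b in zip(A, B))

print(bipolar(A[0] @ W))    # forward recall  -> B[0]
print(bipolar(B[0] @ W.T))  # backward recall -> A[0]
```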

A)
Wall following
B)
Wall climbing
C)
Gesture control
D)
Rotating arm and legs

Correct Answer :   Wall following


Explanation : Wall following is a simple task and doesn’t require any feedback.

A)
Pattern mapping
B)
Vector quantization
C)
Control applications
D)
Pattern classification

Correct Answer :   Pattern classification


Explanation : Pattern classification is the most direct application, and multilayer feedforward networks became popular because of it.

A)
They have more tolerance
B)
They have the ability to learn by examples
C)
They have real time high computational rates
D)
All of the above

Correct Answer :   All of the above


Explanation : Because of their parallel structure, they have higher real-time computational rates than conventional computers, so all of the statements are true.

A)
Association
B)
Classification
C)
Pattern storage
D)
None of the above

Correct Answer :   Classification


Explanation : The Hamming network performs template matching between stored templates and the input.
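
A sketch of that matching step (templates and probe are illustrative assumptions): the lower net of a Hamming network scores each stored template by how many bits it shares with the input.

```python
import numpy as np

templates = np.array([[1, -1, 1, -1],   # stored bipolar templates
                      [1, 1, -1, -1]])

def matching_scores(x):
    """Score = number of bits of x agreeing with each template."""
    n = templates.shape[1]
    return (templates @ x + n) / 2

x = np.array([1, -1, 1, 1])
scores = matching_scores(x)
print(scores, scores.argmax())  # template 0 matches best (3 of 4 bits)
```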

A)
Greater the degradation less is the activation value of other units
B)
Greater the degradation more is the activation value of other units
C)
Greater the degradation less is the activation value of winning units
D)
Greater the degradation more is the activation value of winning units

Correct Answer :   Greater the degradation less is the activation value of winning units


Explanation : Simply put, the greater the degradation of the input, the lower the activation value of the winning unit.

A)
Noise immunity
B)
Dissimilarity of input pattern with patterns stored
C)
Similarity of input pattern with patterns stored
D)
None of the above

Correct Answer :   Similarity of input pattern with patterns stored


Explanation : The matching score is simply indicative of the similarity of the input pattern to the stored patterns.

A)
To realize structure of MLP
B)
To solve pattern mapping problem
C)
To solve pattern classification problem
D)
To realize an approximation to a MLP

Correct Answer :   To realize an approximation to a MLP


Explanation : MLFFNN stands for multilayer feedforward network and MLP stands for multilayer perceptron.

A)
Training of basis function is faster than MLFFNN
B)
Storing in basis function is faster than MLFFNN
C)
Training of basis function is slower than MLFFNN
D)
None of the above

Correct Answer :   Training of basis function is faster than MLFFNN


Explanation : The main advantage of a basis function network is that its training is faster than that of an MLFFNN.

A)
Because they are developed specifically for pattern classification
B)
Because they are developed specifically for pattern approximation
C)
Because they are developed specifically for pattern approximation or classification
D)
None of the above

Correct Answer :   Because they are developed specifically for pattern approximation or classification


Explanation : Training of a basis function network is faster than that of an MLFFNN because such networks are developed specifically for pattern approximation or classification.

A)
Pattern classification task
B)
Function approximation task
C)
Function approximation and pattern classification task
D)
None of the above

Correct Answer :   Function approximation task


Explanation : GRNN stands for Generalized Regression Neural Network.

A)
Function approximation task
B)
Pattern classification task
C)
Function approximation and pattern classification task
D)
None of the above

Correct Answer :   Pattern classification task


Explanation : PNN stands for Probabilistic Neural Network.

A)
Pattern clustering
B)
Patter approximation
C)
Pattern classification
D)
Pattern mapping

Correct Answer :   Pattern mapping


Explanation : CPN, i.e. the counterpropagation network, provides a practical approach for implementing pattern mapping.

A)
Its ability to learn inverse mapping functions
B)
Its ability to learn forward mapping functions
C)
Its ability to learn forward and inverse mapping functions
D)
None of the above

Correct Answer :   Its ability to learn forward and inverse mapping functions


Explanation : The counterpropagation network has the ability to learn both forward and inverse mapping functions.

A)
A network which contains feedback
B)
A network which contains no loops
C)
A network which contains loops
D)
None of the above

Correct Answer :   A network which contains feedback


Explanation : An auto-associative network contains feedback.

A)
Outputs a real number between 0 and 1
B)
They are the most common type of neurons
C)
Can accept any vectors of real numbers as input
D)
All of the above

Correct Answer :   All of the above


Explanation : All of these statements together define sigmoidal neurons.

A)
Automatic Resonance Theory
B)
Adaptive Resonance Theory
C)
Artificial Resonance Theory
D)
None of the Above

Correct Answer :   Adaptive Resonance Theory


Explanation : ART stands for Adaptive Resonance Theory.

A)
Number of desired outputs
B)
Number of possible outputs
C)
Number of acceptable inputs
D)
None of the above

Correct Answer :   None of the above


Explanation : The vigilance parameter in ART determines the tolerance of the matching process.

A)
binary
B)
bipolar
C)
both bipolar and binary
D)
none of the above

Correct Answer :   binary


Explanation : ART1 networks accept only binary inputs; adaptive resonance theory takes care of the stability-plasticity dilemma.

A)
No change
B)
Bigger clusters
C)
Small clusters
D)
None of the above

Correct Answer :   Small clusters


Explanation : With a higher vigilance parameter, fewer input samples are associated with the same neuron, so the clusters become smaller.