Correct Answer : Warren Sturgis McCulloch
Explanation : Neural networks were first proposed in 1944 by Warren Sturgis McCulloch and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what's sometimes called the first cognitive science department.
Correct Answer : All of the Above
Explanation : These are the basic aims that a neural network achieves.
Correct Answer : to bring the computer closer & closer to the user
Explanation : Software should be more interactive with the user, so that it can understand the user's problem better.
Correct Answer : serial or parallel
Explanation : General characteristics of neural networks.
Correct Answer : pattern classification
Explanation : Memory is addressable, and thus patterns can be easily classified.
Correct Answer : both associative & distributive
Correct Answer : McCulloch-Pitts neuron model
Explanation : The McCulloch-Pitts neuron model can perform a weighted sum of inputs followed by a threshold logic operation.
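A minimal Python sketch of this computation; the AND-gate weights and threshold below are illustrative assumptions, not part of the original question.

    import numpy as np

    def mcculloch_pitts(inputs, weights, threshold):
        # Fire (output 1) only if the weighted sum of inputs reaches the threshold.
        return 1 if np.dot(weights, inputs) >= threshold else 0

    # Illustrative example: a 2-input AND gate (weights = [1, 1], threshold = 2).
    for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(a, b, mcculloch_pitts(np.array([a, b]), np.array([1, 1]), 2))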
Correct Answer : humans perceive everything as a pattern while machines perceive it merely as data
Explanation : Humans have emotions & thus form different patterns on that basis, while a machine (say, a computer) is dumb & everything is just data to it.
Correct Answer : related to storage & recall task
Explanation : This is the basic definition of auto-association in neural networks.
Correct Answer : adaptive linear element
Correct Answer : Both First & Second
Correct Answer : 2
Correct Answer : Recurrent Neural Network
Correct Answer : Neither the features nor the number of groups is known
Correct Answer : Voice recognition
Explanation : The same vowel may occur in different contexts, & its features vary over regions that overlap with those of other vowels.
Correct Answer : Highly restricted
Explanation : Point-to-point pattern matching is carried out in the process.
Correct Answer : Input pattern keeps on changing
Explanation : Dynamic nature of input patterns in an AI (Artificial Intelligence) problem.
Correct Answer : Dynamic inputs & categorization can’t be handled
Explanation : If the system is allowed to change its categorization according to the inputs, it cannot be used for pattern classification & assessment.
Correct Answer : Boltzmann machine
Explanation : Ackley, Hinton & Sejnowski built the Boltzmann machine.
Correct Answer : Marvin Minsky
Explanation : In 1954 Marvin Minsky developed the first learning machine in which connection strengths could be adapted automatically & efficiently.
Correct Answer : Rosenblatt
Explanation : Rosenblatt proposed the first perceptron model in 1958.
Correct Answer : Energy analysis
Explanation : Energy analysis was the major contribution of Hopfield's work in 1982.
Explanation : An AI network should be all of the above mentioned.
Correct Answer : Neuron
Explanation : The neuron is the most basic & fundamental unit of a network.
Correct Answer : Fibers of nerves
Correct Answer : tree
Correct Answer : Chemical Process
Correct Answer : 10-80
Explanation : The average size of a neuron cell body lies in the above range.
Correct Answer : 200
Correct Answer : Negative
Explanation : It is due to the presence of potassium ions on the outer surface, in the neural fluid.
Correct Answer : Potassium
Explanation : Potassium is the main constituent of the neural fluid & is responsible for the potential on the neuron body.
Correct Answer : -70mV
Explanation : It is a basic fact, established by a series of experiments conducted by neuroscientists.
Correct Answer : 0.5-2m/s
Explanation : The process is very fast relative to the length of a neuron.
Correct Answer : Regenerate & retain its original capacity
Correct Answer : -60mV
Explanation : The cell membrane loses its impermeability to Na+ ions at -60mV.
Correct Answer : transmission
Explanation : The axon is the transmission line of the neuron: it carries signals away from the cell body rather than receiving them.
Correct Answer : Summing
Explanation : The summing of potentials (due to the neural fluid) at different parts of the neuron is what causes it to fire.
Correct Answer : 10mV
Explanation : This critical value was established by a series of experiments conducted by neuroscientists.
Correct Answer : Hebb rule learning
Correct Answer : The strength of the neural connection gets modified accordingly
Explanation : A neuron's tendency to fire in the future increases if it is fired repeatedly.
Correct Answer : 10¹¹
Correct Answer : 15*(10⁴)
Explanation : These are all fundamental reasons why we can't design a perfect neural network.
Correct Answer : 10¹⁵
Explanation : You can estimate this value from the number of neurons in the human cortex & their density.
Correct Answer : Adaptive resonance theory
Correct Answer : Excitatory input
Explanation : Sign convention of neuron.
Correct Answer : Inhibitory input
Correct Answer : Weight
Explanation : Activation is the weighted sum of the inputs, which gives the desired output; hence the output depends on the weights.
Explanation : The perceptron is one of the earliest neural networks. Invented at the Cornell Aeronautical Laboratory in 1957 by Frank Rosenblatt, the Perceptron was an attempt to understand human memory, learning, and cognitive processes.
Correct Answer : Association unit
Explanation : This was the very speciality of the perceptron model: it performs association mapping on the outputs of the sensory units.
Correct Answer : Learning enabled
Correct Answer : Error due to environmental condition
Explanation : All other parameters are assumed to be null while calculating the error in the perceptron model, & only the difference between the desired & actual output is taken into account.
Correct Answer : Both Synchronously & Asynchronously
Explanation : Outputs can be updated at the same time or at different times in the network.
Correct Answer : Output units are updated sequentially
Explanation : Outputs are updated at different times in the network.
Correct Answer : Both learning algorithm & law
Explanation : Basic definition of a learning law in neural networks.
Correct Answer : Widrow
Explanation : Widrow invented the adaline neural model.
Correct Answer : Analog activation value is compared with output
Explanation : Comparing the analog activation value with the output, instead of with the desired output as in the perceptron model, was the main point of difference between the ADALINE & perceptron models.
Correct Answer : Both LMS error & Gradient descent learning law
Explanation : The weight update rule minimizes the mean squared error (delta squared), averaged over all inputs, & this law is derived using the negative gradient of the error surface in weight space.
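A minimal sketch of the LMS (gradient descent) weight update for an ADALINE-style linear unit; the training data, learning rate µ, and epoch count are illustrative assumptions.

    import numpy as np

    mu = 0.1                                   # learning rate (illustrative)
    w = np.zeros(2)                            # weight vector
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    b = np.array([0., 1., 1., 2.])             # desired outputs (here b = a1 + a2)

    for epoch in range(100):
        for a, target in zip(X, b):
            s = np.dot(w, a)                   # linear (analog) output
            w += mu * (target - s) * a         # LMS: ∆w = µ(b - s)a, the negative error gradient
    print(w)                                   # approaches [1, 1]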
Correct Answer : Both Interlayer and Intralayer
Explanation : Connections can be made from a unit in one layer to a unit in another layer (interlayer) and between units within a layer (intralayer).
Correct Answer : Either feedforward & feedback
Explanation : Connections across the layers in standard topologies can be in a feedforward manner or a feedback manner, but not both.
Correct Answer : when an input is given to layer F1, the jth (say) unit of the other layer F2 will be activated to the maximum extent
Explanation : Restatement of basic definition of instar.
Correct Answer : when the weight vector for connections from the jth unit (say) in F2 approaches the activity pattern in F1 (comprising the input vector)
Explanation : Restatement of basic definition of outstar.
Correct Answer : Short Term Memory
Explanation : Full form of Short Term Memory (STM).
Correct Answer : Activation state of network
Correct Answer : Encoded pattern information in the synaptic weights
Explanation : Long-Term Memory (LTM) is the encoding and retention of an effectively unlimited amount of information for a much longer period of time; hence the option.
Explanation : Change in weight vector corresponding to jth input at time (t+1) depends on all of these parameters.
Correct Answer : Both (A) and (B)
Explanation : sᵢ = f(wᵢ · a), in Hebb's law.
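A minimal sketch of one Hebbian update under this law; the signal function f (identity here), learning rate, and vectors are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    eta = 0.01                          # learning rate (illustrative)
    a = np.array([1.0, -1.0, 0.5])      # input activation vector
    w = rng.normal(size=3)              # weights onto unit i

    s = np.dot(w, a)                    # s_i = f(w_i . a), with f = identity here
    w += eta * s * a                    # Hebb: ∆w_i is proportional to s_i * a
    print(w)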
Correct Answer : delta learning law
Explanation : The output function in this law is assumed to be linear; all other things remain the same.
Correct Answer : LMS
Explanation : LMS, least mean square. The change in weight is made proportional to the negative gradient of the error, which is possible due to the linearity of the output function.
Correct Answer : ∆wij = µ(bi – si) aj
Explanation : The perceptron learning law is a supervised, nonlinear type of learning.
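A minimal sketch of the perceptron learning law ∆wij = µ(bi – si)aj on a linearly separable example; the bipolar AND data and the bias input of -1 are illustrative assumptions.

    import numpy as np

    mu = 1.0                                       # learning rate
    w = np.zeros(3)                                # weights, including bias weight
    X = np.array([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.]])
    X = np.hstack([X, -np.ones((4, 1))])           # append bias input a0 = -1
    b = np.array([-1., -1., -1., 1.])              # desired outputs (bipolar AND)

    for epoch in range(10):
        for a, target in zip(X, b):
            s = 1.0 if np.dot(w, a) >= 0 else -1.0 # nonlinear (bipolar) output
            w += mu * (target - s) * a             # updates only when s differs from b
    print(w)                                       # a separating weight vector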
Correct Answer : Supervised
Explanation : Supervised, since it depends on the target output.
Correct Answer : ∆wk = µ(a – wk), where unit k with maximum output is identified
Explanation : Follows from basic definition of instar learning law.
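A minimal sketch of the instar law: the unit with maximum output wins, and its weight vector moves toward the input; the weights, input, and learning rate are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    mu = 0.5                                  # learning rate (illustrative)
    W = rng.random((3, 2))                    # 3 competing units, 2 inputs

    a = np.array([0.9, 0.1])                  # input vector (illustrative)
    for step in range(20):
        k = np.argmax(W @ a)                  # unit k with maximum output is identified
        W[k] += mu * (a - W[k])               # ∆wk = µ(a - wk): move winner toward a
    print(W)                                  # the winning row approaches [0.9, 0.1]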
Correct Answer : synaptic dynamics
Explanation : Weights are best determined by synaptic dynamics, as it is one of the fastest & most precise dynamics occurring.
Correct Answer : Neural level
Correct Answer : short term memory
Explanation : It depends on the input pattern, & the input changes from moment to moment; hence short term memory.
Correct Answer : the ability of a pattern recognition system to approximate the desired output values for pattern vectors which are not in the training set.
Correct Answer : activation
Explanation : Activation dynamics depends on input pattern, hence any change in input pattern will affect activation dynamics of neural networks.
Correct Answer : None of the Above
Explanation : It is due to the limited current carrying capacity of cell membrane.
Correct Answer : How can a neuron with limited operating range be made sensitive to nearly unlimited range of inputs
Explanation : Threshold value setting has to be adjusted properly.
Explanation : Broadly, there exist structural & global stability in neural networks.
Explanation : Refers to a state-equilibrium situation where small perturbations bring the network back to equilibrium.
Correct Answer : When both synaptic & activation dynamics are simultaneously used & are in equilibrium
Explanation : Global stability means the network as a whole is stable.
Correct Answer : To keep operating range of activation value to a specified range
Explanation : Stabilizing & bounding the unbounded range of activation value was the primary goal of this model.
Correct Answer : ẋ(t) = deterministic component + noise component
Explanation : Noise is assumed to be additive in nature in stochastic models.
Correct Answer : Settlement of network, when small perturbations occur
Explanation : Follows from basic definition of equilibrium.
Correct Answer : ẋ(t) = n(t), where n is the noise component
Explanation : ẋ(t) = 0 is the condition for deterministic models; at stochastic equilibrium only the noise component remains.
Correct Answer : Update to all units is done at the same time
Explanation : In asynchronous update, a change in the state of any one unit drives the whole network.
Explanation : These are some of the basic requirements of learning laws.
Correct Answer : Short term memory in general
Explanation : Memory decay affects short term memory rather than older memories.
Correct Answer : it is distributed all across the weights
Explanation : Pattern information is highly distributed across the weights.
Correct Answer : synaptic strength is proportional to correlation between firing of post & presynaptic neuron
Correct Answer : learning laws which modulate difference between synaptic weight & output signal
Explanation : Competitive learning laws modulate difference between synaptic weight & output signal.
Correct Answer : synaptic strength is proportional to changes in correlation between firing of post & presynaptic neuron
Explanation : In differential Hebbian learning, synaptic strength is proportional to changes in the correlation between the firing of post- & presynaptic neurons.
Explanation : Differential competitive learning is based on changes of the postsynaptic neuron only.
Correct Answer : learning is based on evaluative signal
Explanation : Reinforcement learning is based on evaluative signal.
Correct Answer : To determine stability
Correct Answer : dv(x)/dt ≤ 0
Explanation : The time derivative of the function must be non-positive; this is the condition for a Lyapunov function.
Correct Answer : shows the stability of fixed weight autoassociative networks
Explanation : The Cohen-Grossberg theorem shows the stability of fixed weight autoassociative networks.
Correct Answer : Shows the stability of adaptive autoassociative networks
Explanation : The Cohen-Grossberg-Kosko theorem shows the stability of adaptive autoassociative networks.
Correct Answer : both processes have to happen
Explanation : Follows from basic definition of Recall in a network.
Explanation : Feedforward networks are used for pattern mapping, pattern association, pattern classification.
Correct Answer : generalization
Explanation : The network for pattern mapping is expected to perform generalization.
Correct Answer : hebb learning law
Explanation : For orthogonal input vectors, Hebb learning law is best suited.
Correct Answer : widrow learning law
Explanation : For linear input vectors, widrow learning law is best suited.
Correct Answer : all of the above
Explanation : Affine transformations can be used to do arbitrary rotation, scaling, translation.
Correct Answer : addition of bias term (-1) which results in arbitrary rotation, scaling, translation of input pattern
Correct Answer : number of distinct classes
Explanation : Number of output cases depends on number of distinct classes.
Correct Answer : adjust weight along with class identification
Correct Answer : no adjustment in weights is done
Explanation : No adjustment in weights is done, since the input has been correctly classified, which is the objective of the system.
Correct Answer : there is only one straight line that separates them
Explanation : Linearly separable classes/functions can be separated by a straight line.
Correct Answer : both binary and bipolar
Explanation : The perceptron convergence theorem is applicable for both binary and bipolar input, output data.
Correct Answer : when no restrictions such as linear separability are placed on the set of input-output pattern pairs
Correct Answer : except the input layer, all units in other layers should be non-linear
Explanation : To provide generalization capability to a network, all units in layers other than the input layer should be non-linear.
Correct Answer : overall number of units in hidden layers
Explanation : The nature of mapping problem decides overall number of units in hidden layers.
Correct Answer : using nonlinear differentiable output function for output and hidden layers
Explanation : Hard learning problem is solved by using nonlinear differentiable output function for output and hidden layers.
Correct Answer : the overall characteristics of the mapping problem
Explanation : The number of units in hidden layers depends on the overall characteristics of the mapping problem.
Correct Answer : problem is autoassociation
Explanation : When a(l) = b(l), the problem is classified as autoassociation.
Correct Answer : both input & design
Explanation : The recalled output in pattern association problem depends on both input & design of network.
Correct Answer : network exhibits interpolative behaviour
Explanation : This follows from the basic definition in neural networks.
Correct Answer : to develop a learning algorithm for multilayer feedforward neural networks, so that the network can be trained to capture the mapping implicitly
Explanation : The objective of the backpropagation algorithm is to develop a learning algorithm for multilayer feedforward neural networks, so that the network can be trained to capture the mapping implicitly.
Explanation : All these statements define the backpropagation algorithm.
Explanation : These are all limitations of the backpropagation algorithm in general.
Correct Answer : on the basis of the average gradient value
Explanation : If the average gradient value falls below a preset threshold, the process may be stopped.
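A minimal sketch of a backpropagation loop that stops when the average gradient magnitude falls below a preset threshold; the XOR data, network size, learning rate, and threshold are illustrative assumptions, and training may settle in a local minimum depending on initialization.

    import numpy as np

    rng = np.random.default_rng(0)
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    T = np.array([[0.], [1.], [1.], [0.]])     # XOR targets (illustrative)
    Xb = np.hstack([X, np.ones((4, 1))])       # append a bias input
    W1 = rng.normal(size=(3, 4))               # input -> hidden weights
    W2 = rng.normal(size=(5, 1))               # hidden (+bias) -> output weights

    mu, threshold = 0.5, 1e-3
    for epoch in range(20000):
        H = sigmoid(Xb @ W1)                   # hidden activations
        Hb = np.hstack([H, np.ones((4, 1))])   # append a bias unit
        Y = sigmoid(Hb @ W2)                   # output activations
        dY = (Y - T) * Y * (1 - Y)             # output-layer delta
        dH = (dY @ W2[:4].T) * H * (1 - H)     # delta backpropagated to hidden layer
        g2, g1 = Hb.T @ dY, Xb.T @ dH          # error gradients
        W2 -= mu * g2
        W1 -= mu * g1
        avg_grad = (np.abs(g1).mean() + np.abs(g2).mean()) / 2
        if avg_grad < threshold:               # stop on the average gradient value
            break
    print(epoch, Y.ravel().round(2))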
Correct Answer : pattern storage
Explanation : By using a non-linear output function for each processing unit, a feedback network can be used for pattern storage.
Correct Answer : both number of units and strength of connecting links
Explanation : The number of patterns that can be stored in a given network depends on number of units and strength of connecting links.
Correct Answer : error in recall
Explanation : Due to additional false minima, there is error in recall.
Correct Answer : due to additional false minima
Correct Answer : by using probabilistic update
Explanation : Probabilistic update reduces the false minima; the hard problem, by contrast, is solved by additional units.
Correct Answer : both to store and recall
Explanation : The objective of a pattern storage task in a network is to store and recall a given set of patterns.
Correct Answer : it should take place when relations are slightly disturbed
Explanation : The pattern recall should take place even though features and their spatial relations are slightly disturbed due to noise.
Correct Answer : by a feedback network consisting of processing units with non-linear output functions
Explanation : The pattern storage task is generally accomplished by a feedback network consisting of processing units with non-linear output functions.
Correct Answer : activation dynamics
Explanation : The trajectory of the state is determined by activation dynamics.
Correct Answer : none of the above
Explanation : The term trajectory of states means state of the network at successive instants of time.
Correct Answer : leads to small deviations
Explanation : Basins of attraction in the energy landscape lead to small deviations.
Correct Answer : number of patterns that can be stored
Explanation : The capacity of a network is the number of patterns that can be stored.
Correct Answer : independent
Explanation : Number of desired patterns is independent of basins of attraction.
Correct Answer : false wells
Explanation : False wells are created when number of patterns is less than number of basins of attraction.
Correct Answer : when number of patterns is less than number of basins of attraction
Correct Answer : when number of patterns is more than number of basins of attraction
Explanation : When number of patterns is more than number of basins of attraction then storage problem becomes hard problem.
Correct Answer : a unit is selected at random and its new state is computed
Explanation : In asynchronous update, a unit is selected at random and its new state is computed.
Correct Answer : current state
Explanation : A stable state should have the updated value equal to the current state.
Correct Answer : basins of attraction corresponding to energy minimum
Correct Answer : symmetry of weights and asynchronous update
Explanation : For analysis of storage capacity, the symmetry-of-weights and asynchronous-update conditions are imposed on the Hopfield model.
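A minimal sketch of a Hopfield network obeying both conditions: symmetric Hebbian (outer-product) weights with zero diagonal, and asynchronous updates of one randomly chosen unit at a time; the stored patterns are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                         [1, 1, 1, 1, -1, -1, -1, -1]], dtype=float)
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n   # symmetric weight matrix
    np.fill_diagonal(W, 0)                          # no self-connections

    s = patterns[0].copy()
    s[0] = -s[0]                                    # corrupt one bit of a stored pattern
    for _ in range(50):
        k = rng.integers(n)                         # asynchronous: one random unit
        s[k] = 1.0 if W[k] @ s >= 0 else -1.0       # update its state alone
    print(s)                                        # recalls the stored pattern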
Correct Answer : stochastic network
Explanation : This is the basic equation of a stochastic network.
Correct Answer : Static
Explanation : In case of deterministic update, static equilibrium is reached.
Correct Answer : Dynamic
Explanation : In case of stochastic update, dynamic equilibrium is reached.
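A minimal sketch of a stochastic unit update whose firing probability depends on a temperature parameter T; the sigmoid form and values are illustrative assumptions. At high T the state keeps fluctuating (a dynamic equilibrium), while at low T the update is nearly deterministic.

    import numpy as np

    rng = np.random.default_rng(0)
    def stochastic_update(activation, T):
        # Fire (+1) with probability given by a temperature-scaled sigmoid.
        p = 1.0 / (1.0 + np.exp(-2.0 * activation / T))
        return 1.0 if rng.random() < p else -1.0

    # Same net input: fluctuating states at high T, almost fixed at low T.
    for T in (5.0, 0.1):
        states = [stochastic_update(0.5, T) for _ in range(1000)]
        print(T, np.mean(states))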
Explanation : It is known as mean field approximation.
Correct Answer : Below critical temperature
Explanation : Stochastic network exhibits stable states below critical temperature.
Correct Answer : Weights are chosen appropriately
Explanation : Probability of error in recall of stored patterns can be reduced if weights are chosen appropriately.
Explanation : Pattern environment is useful for determining weights.
Correct Answer : Directly
Explanation : Energy minima are directly related to the probability of occurrence of the corresponding patterns in the environment.
Explanation : These all are the primary reasons for existence of non zero probability of error.
Correct Answer : feedforward manner
Explanation : The output of input layer is given to second layer with adaptive feedforward weights.
Correct Answer : Input layer
Explanation : The second layer has weights which give feedback to the layer itself.
Correct Answer : Self excitatory
Explanation : The output of each unit in the second layer is fed back to itself in a self-excitatory manner.
Correct Answer : combination of feedforward and feedback
Explanation : A competitive learning neural network is a combination of feedforward and feedback connection layers, resulting in some kind of competition.
Correct Answer : Self organization
Explanation : A competitive network that can perform feature mapping can be called a self-organization network.
Correct Answer : Receives inputs from all others
Explanation : An instar receives inputs from all other input units.
Correct Answer : such that it moves towards the input vector
Explanation : Weight vector is adjusted such that it moves towards the input vector.
Correct Answer : w(t + 1) = w(t) + ∆w(t)
Explanation : The update of the weight vector in basic competitive learning can be represented by w(t + 1) = w(t) + ∆w(t).
Correct Answer : None of the above
Explanation : Both the geometrical arrangement and the significance attached to neighbouring units make it distinct.
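A minimal sketch of what the geometrical arrangement adds: units near the winner (on an assumed 1-D line of units) are also updated, weighted by a neighbourhood function; all values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.random((10, 2))                       # 10 units arranged along a line

    def neighbourhood(k, j, sigma=1.5):
        # Closeness of unit j to winning unit k in the geometrical arrangement.
        return np.exp(-((j - k) ** 2) / (2 * sigma ** 2))

    mu = 0.2
    for step in range(2000):
        a = rng.random(2)                                # random input from the unit square
        k = int(np.argmin(((W - a) ** 2).sum(axis=1)))   # winning unit
        for j in range(10):
            W[j] += mu * neighbourhood(k, j) * (a - W[j])  # neighbours move too
    print(W.round(2))                             # weights become ordered along the line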
Explanation : Incremental learning means refining the weights as more training samples are added; the rest are basic statements that define adaline learning.
Explanation : The weight change in plain Hebbian learning can never be zero.
Correct Answer : Divergent
Explanation : In plain Hebbian learning, weights keep growing without bound.
Correct Answer : Competitive layer
Explanation : Feedback layer in competitive neural networks is also known as competitive layer.
Correct Answer : Self-excitatory to self and inhibitory to others
Explanation : The second layer of a competitive network has self-excitatory feedback to each unit and inhibitory connections to the others, which makes it competitive.
Correct Answer : Autoassociative memory
Explanation : If the weight matrix stores the given patterns, then the network becomes an autoassociative memory.
Explanation : If the weight matrix stores associations among multiple sets of patterns, then the network becomes a multidirectional associative memory.
Correct Answer : Bidirectional memory
Explanation : Heteroassociative memory is also known as bidirectional memory.
Correct Answer : To store a set of pattern pairs and they can be recalled by giving either of pattern as input
Explanation : The objective of BAM, i.e. Bidirectional Associative Memory, is to store a set of pattern pairs so that they can be recalled by giving either pattern of a pair as input.
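A minimal sketch of BAM storage and bidirectional recall via correlation (outer-product) encoding; the bipolar pattern pairs are illustrative assumptions.

    import numpy as np

    def sgn(x):
        return np.where(x >= 0, 1, -1)

    A = np.array([[1, -1, 1, -1], [1, 1, -1, -1]])   # first patterns of each pair
    B = np.array([[1, -1], [1, 1]])                  # second patterns of each pair
    W = sum(np.outer(a, b) for a, b in zip(A, B))    # W = sum of outer products a b

    print(sgn(A[0] @ W))      # forward recall a -> b, returns B[0]
    print(sgn(B[0] @ W.T))    # backward recall b -> a, returns A[0]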
Correct Answer : Wall following
Explanation : Wall following is a simple task and doesn’t require any feedback.
Correct Answer : Pattern classification
Explanation : It is the most direct application, and multilayer feedforward networks became popular because of it.
Correct Answer : All of the above
Explanation : Because of their parallel structure, they have higher computational rates than conventional computers, so all are true.
Correct Answer : Classification
Explanation : Hamming network performs template matching between stored templates and inputs.
Correct Answer : Greater the degradation less is the activation value of winning units
Explanation : Simply, the greater the degradation, the lower the activation value of the winning units.
Correct Answer : Similarity of input pattern with patterns stored
Explanation : The matching score is simply indicative of the similarity of the input pattern with the stored patterns.
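A minimal sketch of the matching-score computation in a Hamming-style network, assuming bipolar templates; the stored templates and the input are illustrative.

    import numpy as np

    templates = np.array([[1, -1, 1, -1],            # stored bipolar templates
                          [1, 1, -1, -1]])
    x = np.array([1, -1, 1, 1])                      # unknown/noisy input

    n = templates.shape[1]
    scores = (templates @ x + n) / 2                 # bits agreeing with each template
    print(scores, "-> best match:", np.argmax(scores))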
Correct Answer : To realize an approximation to a MLP
Explanation : MLFFNN stands for multilayer feedforward network and MLP stands for multilayer perceptron.
Correct Answer : Training of basis function is faster than MLFFNN
Explanation : The main advantage of basis function is that the training of basis function is faster than MLFFNN.
Correct Answer : Because they are developed specifically for pattern approximation or classification
Explanation : Training of basis function is faster than MLFFNN because they are developed specifically for pattern approximation or classification.
Correct Answer : Function approximation task
Explanation : GRNN stands for Generalized Regression Neural Network.
Correct Answer : Pattern classification task
Explanation : PNN stands for Probabilistic Neural Network.
Correct Answer : Pattern mapping
Explanation : CPN, i.e. the counterpropagation network, provides a practical approach for implementing pattern mapping.
Correct Answer : Its ability to learn forward and inverse mapping functions
Explanation : The counterpropagation network has the ability to learn forward and inverse mapping functions.
Correct Answer : a neural network that contains feedback
Explanation : An auto-associative network contains feedback.
Explanation : All these statements themselves define sigmoidal neurons.
Correct Answer : Adaptive Resonance Theory
Explanation : ART stand for Adaptive Resonance Theory.
Explanation : Vigilance parameter in ART determines the tolerance of matching process.
Correct Answer : binary
Explanation : Adaptive resonance theory takes care of the stability-plasticity dilemma.
Correct Answer : Small clusters
Explanation : The input samples associated with the same neuron get reduced, resulting in small clusters.