TensorFlow.js Interview Questions
You can use many optimizers, chosen on the basis of factors such as the learning rate, the performance metric, dropout, and the gradients of the model.
 
Following are some of the popular optimizers (a short usage sketch follows the list) :
 
* Adam
* AdaDelta
* AdaGrad
* RMSprop
* Momentum
* Stochastic Gradient Descent
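As a reference, here is a minimal TensorFlow.js sketch of how one of these optimizers is selected when compiling a model. It assumes tf is available (for example via the TensorFlow.js script tag), and the layer sizes and learning rates are placeholder values, not recommendations:

// Minimal sketch: selecting an optimizer in TensorFlow.js (placeholder values).
const model = tf.sequential();
model.add(tf.layers.dense({units: 1, inputShape: [1]}));

// Each optimizer in the list above can be created via tf.train.*
const optimizer = tf.train.adam(0.01);             // Adam
// const optimizer = tf.train.adadelta(0.01);      // AdaDelta
// const optimizer = tf.train.adagrad(0.01);       // AdaGrad
// const optimizer = tf.train.rmsprop(0.01);       // RMSprop
// const optimizer = tf.train.momentum(0.01, 0.9); // Momentum
// const optimizer = tf.train.sgd(0.01);           // Stochastic Gradient Descent

model.compile({optimizer: optimizer, loss: 'meanSquaredError'});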
The Word2vec algorithm is used to compute the vector representations of words from an input dataset.
 
There are six parameters that have to be considered (a sample configuration is sketched after this list) :
 
* embedding_size : Denotes the dimension of the embedding vector

* min_occurrence : Removes all words that do not appear at least ‘n’ times

* max_vocabulary_size : Denotes the total number of unique words in the vocabulary

* num_skips : Denotes the number of times you can reuse an input to generate a label

* num_sampled : Denotes the number of negative examples to sample from the input

* skip_window : Denotes how many words to the left and right of the target word are considered (the context window)
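As a quick illustration, these hyperparameters are usually gathered into a single configuration object before training. The values below are placeholder examples only, not recommended settings:

// Hypothetical Word2Vec (skip-gram) hyperparameter configuration; values are examples only.
const word2vecParams = {
  embedding_size: 200,         // dimension of each embedding vector
  max_vocabulary_size: 50000,  // total number of unique words kept in the vocabulary
  min_occurrence: 10,          // remove words appearing fewer than 10 times
  skip_window: 3,              // how many words to the left and right to consider
  num_skips: 2,                // how many times an input is reused to generate a label
  num_sampled: 64              // number of negative examples to sample
};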
The Rectified Linear Unit (ReLU) layer acts as an activation layer that passes values above zero through unchanged and replaces the negative values in an image (or feature map) with zero, keeping a linear relationship with the input for positive values while introducing non-linearity overall. The noise-invariant subsampling behaviour, by contrast, comes from the pooling layer that usually follows the ReLU layer.
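A quick TensorFlow.js illustration of this behaviour (assuming tf is available):

// ReLU keeps values above zero unchanged and replaces negative values with zero.
const x = tf.tensor1d([-3, -1, 0, 2, 5]);
tf.relu(x).print();   // output: [0, 0, 0, 2, 5]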
Precision and Recall are performance metrics, i.e., they give insight into how well the model performs.
 
Precision : The ratio of true positives to all predicted positives. It gives the percentage of true positives out of the sum of true positives and false positives.

Recall : The ratio of true positives to all actual positives. It gives the percentage of true positives out of the sum of true positives and false negatives.
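As a minimal sketch, both metrics can be computed directly from hypothetical confusion-matrix counts (the numbers below are made up for illustration):

// Hypothetical counts, for illustration only.
const tp = 80;   // true positives
const fp = 20;   // false positives
const fn = 10;   // false negatives

const precision = tp / (tp + fp);   // 80 / 100 = 0.8
const recall    = tp / (tp + fn);   // 80 / 90  ≈ 0.89
console.log(precision, recall);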
In the process of word embedding, the text is converted to a vector of indexes using the hashing trick. The hash function assigns each word to an index in a fixed-size hashing space. The hashing-trick function takes six parameters: text, n, hash_function, filters, lower, and split.
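A toy JavaScript sketch of the idea follows. It is not the actual Keras hashing_trick implementation; the hash function and the index convention here are stand-ins for illustration:

// Toy hashing trick: map each word to an index in a fixed-size hashing space of size n.
function simpleHash(word) {
  let h = 0;
  for (let i = 0; i < word.length; i++) {
    h = (h * 31 + word.charCodeAt(i)) >>> 0;   // simple stand-in hash
  }
  return h;
}

function hashingTrick(text, n) {
  // Indices fall in the range [1, n - 1]; collisions are possible by design.
  return text.toLowerCase().split(' ').map(word => 1 + (simpleHash(word) % (n - 1)));
}

console.log(hashingTrick('The quick brown fox', 100));   // four indices between 1 and 99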
The tf.backend() function is used to get the backend that TensorFlow.js is currently using in the browser.
 
Syntax :
tf.backend()
Parameters : It does not accept any parameter.
 
Return Value : It returns the current KernelBackend instance.
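A small usage sketch (assuming TensorFlow.js is loaded); tf.getBackend() is the companion function that returns the backend's name as a string:

// tf.backend() returns the active KernelBackend instance.
const backend = tf.backend();
console.log(backend);            // KernelBackend object
console.log(tf.getBackend());    // backend name, e.g. 'webgl' or 'cpu'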
* TensorFlow Visor is a graphic tool for visualizing Machine Learning
* Often called tfjs-vis
* It contains functions for visualizing TensorFlow models
* Visualizations can be organized in Visors (modal browser windows)
* Can be used with custom tools like D3, Chart.js, and Plotly.js
 
 
Using tfjs-vis : To use tfjs-vis, add the following script tag to your HTML file(s):
 
Example :
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-vis"></script>
 
Example with a Visor : 
<!DOCTYPE html>
<html>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-vis"></script>
<body>

<h2>TensorFlow Visor</h2>

<script>

const series = ['First', 'Second'];

const serie1 = []; 
const serie2 = [];
for (let i = 0; i < 100; i++) {
  serie1[i] = {x:i, y:Math.random() * 100};
  serie2[i] = {x:i, y:Math.random() * 100};
}

const data = {values: [serie1, serie2], series}

tfvis.render.scatterplot({name: "my Plots"}, data);

</script>
</body>
</html>

Output : 
A visor (modal window) opens in the browser and displays the two random series as a scatter plot.

For performing linear regression, we will do the following :
 
1. Create the linear regression computational graph output. This means we will accept an input, x, and generate the output, Ax + b.
 
2. Create a loss function, the L2 loss, and use its output together with the learning rate to compute the gradients of the model variables, A and b, so as to minimize the loss.
import tensorflow as tf
# Creating a variable for the parameter slope (W, called A in the description above) with initial value 0.4
W = tf.Variable([.4], dtype=tf.float32)
# Creating a variable for the parameter bias (b) with initial value -0.4
b = tf.Variable([-0.4], dtype=tf.float32)
# Creating a placeholder for providing the input or independent variable, denoted by x
x = tf.placeholder(tf.float32)
# Equation of Linear Regression
linear_model = W * x + b
# Initializing all the variables
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
# Running the regression model to calculate the output w.r.t. the provided x values
print(sess.run(linear_model, {x: [1, 2, 3, 4]}))  # prints approximately [0.  0.4  0.8  1.2]
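The snippet above covers step 1 (the forward computation Ax + b). Step 2 is not shown there; since this is a TensorFlow.js question set, here is a hedged TensorFlow.js sketch of minimizing an L2 loss with gradient descent. The data, learning rate, and iteration count are illustrative only:

// Sketch: fitting y = A*x + b by minimizing an L2 loss with gradient descent.
const A = tf.variable(tf.scalar(Math.random()));
const b = tf.variable(tf.scalar(Math.random()));

const xs = tf.tensor1d([1, 2, 3, 4]);
const ys = tf.tensor1d([2, 4, 6, 8]);            // targets from a hypothetical true line y = 2x

const predict = x => A.mul(x).add(b);
const l2Loss = (pred, label) => pred.sub(label).square().mean();

const optimizer = tf.train.sgd(0.1);             // learning rate is a placeholder
for (let i = 0; i < 500; i++) {
  // minimize() computes the gradients of the loss w.r.t. A and b and updates them.
  optimizer.minimize(() => l2Loss(predict(xs), ys));
}
A.print();   // should move toward 2
b.print();   // should move toward 0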
Below is an implementation of the KNN algorithm, the TensorFlow way.
import numpy as np
import tensorflow as tf

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# In this example, we limit the mnist data
Xtrain, Ytrain = mnist.train.next_batch(5000)  # 5000 for training (nn candidates)
Xtest, Ytest = mnist.test.next_batch(200)      # 200 for testing

# tf Graph Input
xtrain = tf.placeholder("float", [None, 784])
xtest = tf.placeholder("float", [784])

# Nearest Neighbor calculation using L1 Distance
distance = tf.reduce_sum(tf.abs(tf.add(xtrain, tf.negative(xtest))), reduction_indices=1)
# Prediction: get the index of the minimum distance (the nearest neighbor)
pred = tf.argmin(distance, 0)

accuracy = 0.

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

# Start testing
with tf.Session() as sess:
    sess.run(init)
    # Loop over test data
    for i in range(len(Xtest)):
        # Get the nearest neighbor
        nn_index = sess.run(pred, feed_dict={xtrain: Xtrain, xtest: Xtest[i, :]})
        # Get the nearest neighbor's class label and compare it to the true label
        print("Test", i, "Prediction:", np.argmax(Ytrain[nn_index]),
              "True Class:", np.argmax(Ytest[i]))
        # Calculate accuracy
        if np.argmax(Ytrain[nn_index]) == np.argmax(Ytest[i]):
            accuracy += 1. / len(Xtest)
    print("Accuracy:", accuracy)