TensorFlow.js Interview Questions
TensorFlow can run on different platforms:

* Cloud web services
* Mobile operating systems such as iOS and Android
* Desktop operating systems such as Windows, macOS, and Linux
There are many benefits of TensorFlow over other libraries, which are given below:

Scalability: TensorFlow makes it easy to scale machine learning applications and infrastructure.

Pipelining: TensorFlow's Dataset module (tf.data) is used to build efficient input pipelines for images and text.

Visualization of Data: Visualizing the graph is very straightforward in TensorFlow. TensorBoard (a suite of visualization tools) is used to visualize TensorFlow graphs.

Debugging Facility: tfdbg is a specialized debugger for TensorFlow. It lets us view the internal structure and states of running TensorFlow graphs during training and inference.
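As a minimal sketch of the Dataset pipelining mentioned above, the following builds a small in-memory pipeline (the toy numbers stand in for real images or text):

```python
import tensorflow as tf

# A minimal input pipeline built with TensorFlow's Dataset module.
dataset = (
    tf.data.Dataset.from_tensor_slices([1.0, 2.0, 3.0, 4.0])
    .map(lambda x: x * 2)   # per-element preprocessing
    .batch(2)               # group elements into batches
    .prefetch(1)            # overlap preprocessing with consumption
)

for batch in dataset:
    print(batch.numpy())    # [2. 4.] then [6. 8.]
```

In a real pipeline, `from_tensor_slices` would typically be replaced by readers for files on disk, with `map` doing decoding and augmentation.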
There are a few products built using TensorFlow:
* NSynth
* Giorgio Cam
* Teachable Machine
* Handwriting Recognition
TensorFlow can also be used with containerization tools such as Docker. For instance, it could be used to deploy a sentiment analysis model that uses a character-level ConvNet for text classification.
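A hypothetical Dockerfile for such a deployment might use the official TensorFlow Serving image (the model name and export path below are illustrative assumptions, not from the original text):

```dockerfile
# Sketch: serve an exported sentiment-analysis SavedModel with TensorFlow Serving.
FROM tensorflow/serving

# Copy the exported SavedModel into the directory TensorFlow Serving scans.
COPY ./export/sentiment_cnn /models/sentiment_cnn

# Tell TensorFlow Serving which model to load.
ENV MODEL_NAME=sentiment_cnn
```

Building and running this container exposes the model over TensorFlow Serving's REST and gRPC endpoints.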
The function that computes the difference between the predicted and the actual values is known as the loss function. The value of this function quantifies how far the predictions deviate from the targets.
At each training step, the optimizer (for example, gradient descent) follows the gradient of the loss to find parameter changes that improve the model. With the help of the optimizer, the loss is driven toward its minimum and the model attains its best accuracy.
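The loss-then-optimize cycle described above can be sketched without TensorFlow at all; here is a minimal NumPy example fitting a single weight with mean-squared-error loss and plain gradient descent (the data and learning rate are illustrative):

```python
import numpy as np

# Toy data: targets follow y = 3x, so the ideal weight is 3.0.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 3.0 * x

w = 0.0    # parameter to learn
lr = 0.05  # learning rate

def mse(w):
    # Loss function: mean squared difference between predicted and actual values.
    return np.mean((w * x - y) ** 2)

for _ in range(200):
    # Gradient of the MSE loss with respect to w.
    grad = np.mean(2 * (w * x - y) * x)
    w -= lr * grad  # gradient-descent update reduces the loss

print(round(w, 3))  # converges toward 3.0
```

Each iteration moves `w` opposite the gradient, so the loss shrinks toward its minimum, which is exactly the behavior the optimizer provides in TensorFlow.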
TensorFlow expects its input features to be on a comparable numeric scale, but real-world datasets usually are not, so the data has to be normalized. One common approach is batch normalization, which TensorFlow exposes through
data = tf.nn.batch_norm_with_global_normalization()
(superseded by tf.nn.batch_normalization in newer releases).
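To make the idea concrete, here is a minimal NumPy sketch of what batch normalization computes (a simplified illustration, not TensorFlow's actual implementation):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize each feature to zero mean and unit variance over the batch,
    # then apply the learned scale (gamma) and shift (beta).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Two features on wildly different scales.
batch = np.array([[1.0, 200.0],
                  [2.0, 400.0],
                  [3.0, 600.0]])
normalized = batch_norm(batch)
print(normalized.mean(axis=0))  # approximately [0, 0]
print(normalized.std(axis=0))   # approximately [1, 1]
```

After normalization, both columns sit on the same scale, which is what lets downstream layers treat all inputs uniformly.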
Estimators provide the following benefits:
You can run Estimator-based models on a local host or on a distributed multi-server environment without changing your model. Furthermore, you can run Estimator-based models on CPUs, GPUs, or TPUs without recoding your model.

Estimators provide a safe distributed training loop that controls how and when to:

* Load data
* Handle exceptions
* Create checkpoint files and recover from failures
* Save summaries for TensorBoard

When writing an application with Estimators, you must separate the data input pipeline from the model. This separation simplifies experiments with different datasets.
Pre-made Estimators enable you to work at a much higher conceptual level than the base TensorFlow APIs. You no longer have to worry about creating the computational graph or sessions since Estimators handle all the "plumbing" for you. Furthermore, pre-made Estimators let you experiment with different model architectures by making only minimal code changes. tf.estimator.DNNClassifier, for example, is a pre-made Estimator class that trains classification models based on dense, feed-forward neural networks.
A TensorFlow program relying on a pre-made Estimator typically consists of the following four steps:
1. Write an input function.
2. Define the feature columns.
3. Instantiate the relevant pre-made Estimator.
4. Call a training, evaluation, or inference method.
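The four steps above can be sketched as follows, assuming a TensorFlow release that still ships tf.estimator (the API is deprecated and removed in the newest versions); the toy data and network sizes are illustrative:

```python
import numpy as np
import tensorflow as tf

# 1. Write an input function that returns a tf.data.Dataset.
def input_fn():
    features = {"x": np.random.rand(100, 1).astype(np.float32)}
    labels = (features["x"][:, 0] > 0.5).astype(np.int32)
    ds = tf.data.Dataset.from_tensor_slices((features, labels))
    return ds.shuffle(100).batch(16)

# 2. Define the feature columns.
feature_columns = [tf.feature_column.numeric_column("x")]

# 3. Instantiate the relevant pre-made Estimator.
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[8, 4],
    n_classes=2)

# 4. Call a training method.
classifier.train(input_fn=input_fn, steps=20)
```

Swapping in a different pre-made Estimator (say, a LinearClassifier) requires changing only step 3, which is the minimal-code-change benefit described below.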
Pre-made Estimators encode best practices, providing the following benefits:

* Best practices for determining where different parts of the computational graph should run, implementing strategies on a single machine or on a cluster.

* Best practices for event (summary) writing and universally useful summaries.

If you don't use pre-made Estimators, you must implement the preceding features yourself.
An epoch, in machine learning, is one complete pass of the learning algorithm over the entire training set.
Training usually runs for many epochs, even on datasets with tens of thousands of entries. Running multiple epochs lets the model see each example several times, which generally helps it generalize better.
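The relationship between epochs, batches, and parameter updates can be sketched in a few lines (the dataset size, batch size, and epoch count are hypothetical):

```python
import numpy as np

# Toy setup: 1000 samples, batches of 100, trained for 3 epochs.
n_samples, batch_size, n_epochs = 1000, 100, 3

updates = 0
for epoch in range(n_epochs):
    order = np.random.permutation(n_samples)  # reshuffle each epoch
    for start in range(0, n_samples, batch_size):
        batch_indices = order[start:start + batch_size]
        updates += 1  # one parameter update per batch

print(updates)  # 3 epochs x 10 batches per epoch = 30 updates
```

Each epoch visits every sample exactly once, so the total number of updates is the number of epochs times the number of batches per epoch.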