There are **three types of Tensors** used to create neural network models:

**Constant Tensor:** Constant tensors are used as constants, as the name suggests. They create a node that takes a value and does not change it. A constant can be created using **tf.constant**.

`tf.constant(value, dtype=None, shape=None, name='Const', verify_shape=False)`

It accepts five arguments.
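As a minimal sketch (assuming a standard TensorFlow installation; note that in TensorFlow 2.x the `verify_shape` argument has been removed, while the remaining arguments behave as shown):

```python
import tensorflow as tf

# A scalar constant with an explicit dtype and name.
a = tf.constant(3.0, dtype=tf.float32, name="a")

# A constant built from a list, with its shape stated explicitly.
b = tf.constant([1, 2, 3], shape=(3,), name="b")

print(a)  # a rank-0 float32 tensor holding 3.0
print(b)  # a rank-1 int32 tensor holding [1, 2, 3]
```

Once created, the values of `a` and `b` cannot be changed; any "modification" produces a new tensor instead.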

**Variable Tensor:** Variable tensors are nodes that provide their current value as output. This means they can retain their value across multiple executions of a graph.
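A short sketch of this behavior (assuming a standard TensorFlow installation): the variable keeps its state between operations, unlike a constant.

```python
import tensorflow as tf

# A variable holds mutable state that persists across operations.
counter = tf.Variable(0, dtype=tf.int32, name="counter")

# Update the variable in place; the new value is retained.
counter.assign_add(1)
counter.assign_add(1)

print(counter.numpy())  # 2
```

In TensorFlow 1.x graph mode, variables additionally had to be initialized (e.g. via `tf.compat.v1.global_variables_initializer()`) before the first `Session.run`.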

**Placeholder Tensor:** Placeholder tensors are used to supply data at a later time. They are nodes whose values are fed in at execution time. Suppose the inputs to our network depend on some external data, and we do not want the graph to depend on any real values while it is being built; placeholders are the right datatype for this. We can even build a graph without any data at all.

Therefore, placeholders do not require an initial value. They only need a datatype (such as float32) and a tensor shape, so the graph still knows what to compute even though no value is stored yet.
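The feed-at-execution pattern can be sketched as follows. Placeholders belong to the TensorFlow 1.x graph API, so this example uses `tf.compat.v1` (and disables eager execution) to run on a modern installation:

```python
import tensorflow as tf

# Placeholders require TF 1.x graph mode, so disable eager execution.
tf.compat.v1.disable_eager_execution()

# Only a dtype and shape are declared; no value is stored in the graph.
# `None` in the shape means the batch size is decided at execution time.
x = tf.compat.v1.placeholder(tf.float32, shape=(None, 3), name="x")
doubled = x * 2.0

# The actual data is supplied via feed_dict when the graph is run.
with tf.compat.v1.Session() as sess:
    result = sess.run(doubled, feed_dict={x: [[1.0, 2.0, 3.0]]})

print(result)  # [[2. 4. 6.]]
```

This is why placeholders suit external inputs: the graph is fully defined before any real data exists, and different data can be fed on every run.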