Deep Learning - Interview Questions
How is the transformer architecture better than RNNs in Deep Learning?
With the sequential processing of RNNs, practitioners were up against:
 
* High processing-power requirements, since tokens must be processed one at a time
* The difficulty of parallel execution

These limitations led to the rise of the transformer architecture. It uses an attention mechanism to map dependencies between all words in a sequence directly, regardless of their distance, which enabled major progress in NLP models. A minimal sketch is shown below.
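The following is a minimal NumPy sketch of scaled dot-product attention, the core operation inside the transformer's attention mechanism. The function name, shapes, and toy inputs are illustrative assumptions, not a specific library's API; the point is that all positions are compared in one matrix multiplication rather than one step at a time.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attend over all positions at once.

    Q, K, V: arrays of shape (seq_len, d_k). Unlike an RNN, every
    position attends to every other position in a single matrix
    multiplication, so the whole sequence is processed in parallel.
    """
    d_k = Q.shape[-1]
    # Similarity of every query with every key: (seq_len, seq_len)
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the key dimension turns scores into weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted mix of all value vectors,
    # capturing long-range dependencies in one step
    return weights @ V

# Toy example (hypothetical data): 4 tokens, 8-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention
print(out.shape)  # (4, 8)
```

Because the score matrix is computed with a single matrix product, the whole sequence can be processed in parallel on modern hardware, which is exactly the property RNNs lack.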