Large Language Model - Interview Questions
What is the difference between L1 and L2 regularization in LLMs?
L1 and L2 regularization are techniques used to prevent overfitting in machine learning models, including large language models.

L1 regularization, also known as Lasso regularization, adds a penalty term to the loss function that is proportional to the sum of the absolute values of the model weights. This encourages sparsity by driving many weights exactly to zero, which can effectively eliminate irrelevant features. L1 regularization is typically used when the goal is to select a subset of features that are most important for the task.
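As a minimal sketch, the L1 penalty described above can be written as a plain function added to a hypothetical base loss; `lam` (the regularization strength) and the weight values are illustrative, not from any particular model:

```python
def l1_penalty(weights, lam):
    # L1 penalty: lambda times the sum of absolute weight values.
    return lam * sum(abs(w) for w in weights)

# Hypothetical weights and data loss for illustration.
weights = [0.5, -1.2, 0.0, 3.0]
base_loss = 0.8
total_loss = base_loss + l1_penalty(weights, lam=0.01)
print(round(total_loss, 4))  # 0.8 + 0.01 * 4.7 = 0.847
```

Because the penalty grows linearly with each weight's magnitude, the optimizer gains the same reduction from shrinking a small weight as a large one, which is what pushes small weights all the way to zero.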

L2 regularization, also known as Ridge regularization, adds a penalty term to the loss function that is proportional to the sum of the squares of the model weights. This encourages the model to learn small weights, shrinking them toward zero without usually making them exactly zero, and can help to reduce the impact of outliers in the data. L2 regularization is typically used when the goal is to improve the generalization performance of the model; in LLM training it commonly appears as weight decay.
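The L2 penalty differs only in squaring the weights instead of taking absolute values; the sketch below mirrors the L1 example, with the same illustrative `lam` and weight values:

```python
def l2_penalty(weights, lam):
    # L2 penalty: lambda times the sum of squared weight values.
    return lam * sum(w * w for w in weights)

# Hypothetical weights and data loss for illustration.
weights = [0.5, -1.2, 0.0, 3.0]
base_loss = 0.8
total_loss = base_loss + l2_penalty(weights, lam=0.01)
print(round(total_loss, 4))  # 0.8 + 0.01 * 10.69 = 0.9069
```

Because the penalty grows quadratically, large weights are punished much more heavily than small ones, so the optimizer shrinks all weights smoothly rather than zeroing any of them out.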