2. Google
* LaMDA
LaMDA is a family of Transformer-based models specialized for dialog. These models have up to 137B parameters and are trained on 1.56T words of public dialog data. LaMDA can engage in free-flowing conversations on a wide array of topics. Unlike traditional chatbots, it is not limited to pre-defined paths and can adapt to the direction of the conversation.
* Bard
Bard is a chatbot that uses machine learning and natural language processing to simulate conversations with humans and provide responses to questions. It is based on the LaMDA technology and has the potential to provide up-to-date information, unlike ChatGPT, which is based on data collected only up to 2021.
* PaLM 2 (Bison-001)
PaLM 2 is Google's next-generation large language model, announced in 2023, which builds on Google's legacy of breakthrough research in machine learning and responsible AI.
It excels at advanced reasoning tasks, including code and math, classification and question answering, translation and multilingual proficiency, and natural language generation, outperforming Google's previous state-of-the-art LLMs, including PaLM. It can accomplish these tasks because of the way it was built: bringing together compute-optimal scaling, an improved dataset mixture, and model architecture improvements.
PaLM 2 is grounded in Google's approach to building and deploying AI responsibly. It was rigorously evaluated for potential harms and biases, as well as its capabilities and downstream uses in research and in-product applications. It is used in other state-of-the-art models, like Med-PaLM 2 and Sec-PaLM, and powers generative AI features and tools at Google, like Bard and the PaLM API.

With the PaLM 2 model, Google has focused on commonsense reasoning, formal logic, mathematics, and advanced coding in 20+ programming languages. The largest PaLM 2 model is reported to have 540 billion parameters and a maximum context length of 4,096 tokens.
Google has announced four PaLM 2 models in different sizes (Gecko, Otter, Bison, and Unicorn). Of these, only Bison is currently available; it scored 6.40 on the MT-Bench test, whereas GPT-4 scored a whopping 8.99 points.
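For readers who want to try Bison themselves, the PaLM API exposes it as a text model. The snippet below is a minimal sketch using the google-generativeai Python package; the API key placeholder, prompt, and generation parameters are illustrative and may differ from your setup.

```python
# Minimal sketch: calling the PaLM 2 Bison text model via the PaLM API.
# Assumes `pip install google-generativeai` and an API key from Google AI Studio.
import google.generativeai as palm

palm.configure(api_key="YOUR_API_KEY")  # placeholder key

completion = palm.generate_text(
    model="models/text-bison-001",   # Bison text model exposed by the PaLM API
    prompt="Explain the difference between a token and a word in one sentence.",
    temperature=0.7,
    max_output_tokens=128,
)

print(completion.result)
```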
* mT5
Multilingual T5 (mT5) is a text-to-text Transformer model with up to 13B parameters. It is trained on the mC4 corpus, which covers 101 languages, including Amharic, Basque, Xhosa, and Zulu. mT5 achieves state-of-the-art performance on many cross-lingual NLP tasks.
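The mT5 checkpoints are publicly available on Hugging Face, so a quick way to experiment with the model is through the transformers library. The sketch below loads the small variant for a span-filling generation; the checkpoint name and prompt are only illustrative, and the raw pretrained model typically needs fine-tuning before it is useful on a downstream task.

```python
# Minimal sketch: loading a pretrained mT5 checkpoint with Hugging Face transformers.
# Note: the raw pretrained model is trained with span corruption and usually needs
# fine-tuning on a downstream task before its outputs are meaningful.
from transformers import MT5ForConditionalGeneration, AutoTokenizer

model_name = "google/mt5-small"  # smallest of the public mT5 checkpoints
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)

# mT5 marks masked spans with sentinel tokens (<extra_id_0>, <extra_id_1>, ...).
inputs = tokenizer(
    "UN Offizier sagt, dass weiter <extra_id_0> werden muss in Syrien.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```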
* Claude v1
In case you are unaware, Claude is a powerful LLM developed by Anthropic, which has been backed by Google. Anthropic was co-founded by former OpenAI employees, and its approach is to build AI assistants that are helpful, honest, and harmless. In multiple benchmark tests, Anthropic's Claude v1 and Claude Instant models have shown great promise. In fact, Claude v1 performs better than PaLM 2 in the MMLU and MT-Bench tests.
It’s close to GPT-4 and scores 7.94 in the MT-Bench test whereas GPT-4 scores 8.99. In the MMLU benchmark as well, Claude v1 secures 75.6 points, and GPT-4 scores 86.4. Anthropic also became the first company to offer 100k tokens as the largest context window in its Claude-instant-100k model. You can basically load close to 75,000 words in a single window.