Apple debuts MM1 multimodal language models
Last Updated: 03/21/2024 00:01:58

After months of rumours and speculation about its upcoming AI projects and multimodal AI models, Apple researchers have developed a family of large multimodal language models called MM1, which can process and generate both text and visual data, according to a research paper presented last week.

The study, carried out at Apple’s research labs, aimed to build performant multimodal large language models (MLLMs) through careful ablations of architectural components, data sources, and training procedures.

The researchers found that image resolution and the capacity of the visual encoder had the highest impact on model performance, while the specific method of combining visual and text data mattered less.

They also discovered that a careful mix of different data types was crucial: interleaved image-text documents helped with few-shot learning, traditional captioned images boosted zero-shot performance, and text-only data maintained strong language-understanding capabilities.
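
As a rough illustration of that mixing strategy (not Apple’s actual training code), a data loader could sample each pre-training example from one of the three pools with fixed weights; the pool names and weights below are placeholders chosen for the sketch, not the paper’s reported ratios:

    import random

    # Placeholder mixture weights over the three data types named above.
    MIXTURE_WEIGHTS = {
        "interleaved_image_text": 0.45,  # helps few-shot learning
        "captioned_images": 0.45,        # boosts zero-shot performance
        "text_only": 0.10,               # preserves language understanding
    }

    def sample_data_source(rng: random.Random) -> str:
        """Pick which data pool the next pre-training example comes from."""
        sources, weights = zip(*MIXTURE_WEIGHTS.items())
        return rng.choices(sources, weights=weights, k=1)[0]

    rng = random.Random(0)
    print([sample_data_source(rng) for _ in range(8)])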

MM1 can make in-context predictions thanks to its large-scale multimodal pre-training. This allows MM1 to count objects and follow custom formatting, refer to parts of an image and perform OCR, demonstrate common sense and world knowledge about everyday objects, and carry out basic mathematical operations.
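
A minimal sketch of what such a few-shot, in-context prompt could look like; the <image:...> placeholder, the counting task, and the file names are hypothetical, since MM1’s actual prompting interface has not been released:

    from typing import List, Tuple

    def build_few_shot_prompt(examples: List[Tuple[str, str]], query_image: str) -> str:
        """examples are (image, answer) pairs used as in-context demonstrations."""
        parts = [f"<image:{img}> Count the objects: {answer}" for img, answer in examples]
        parts.append(f"<image:{query_image}> Count the objects:")
        return "\n".join(parts)

    prompt = build_few_shot_prompt(
        examples=[("apples.jpg", "3 apples"), ("dogs.jpg", "2 dogs")],
        query_image="cats.jpg",
    )
    print(prompt)  # a model like MM1 would complete the last line with its prediction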

Based on these insights, the team developed the MM1 model family, ranging from 3 billion to 30 billion parameters and including both dense and mixture-of-experts variants. After scaling up training, MM1 achieved state-of-the-art results on various multimodal benchmarks during pre-training.
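
For readers unfamiliar with the two variants, the toy sketch below contrasts a dense feed-forward layer (one weight matrix applied to every token) with a mixture-of-experts layer that routes each token to its top-scoring experts; the sizes and top-2 routing here are illustrative and are not MM1’s configuration:

    import numpy as np

    rng = np.random.default_rng(0)
    d_model, n_experts, top_k = 16, 4, 2

    dense_w = rng.normal(size=(d_model, d_model))               # dense variant: one matrix for all tokens
    expert_ws = rng.normal(size=(n_experts, d_model, d_model))  # MoE variant: several expert matrices
    router_w = rng.normal(size=(d_model, n_experts))            # router scores (learned in practice, random here)

    def moe_forward(x: np.ndarray) -> np.ndarray:
        """Route one token vector to its top-k experts and mix their outputs."""
        logits = x @ router_w
        top = np.argsort(logits)[-top_k:]                        # indices of the best-scoring experts
        gates = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over the selected experts
        return sum(g * (x @ expert_ws[i]) for g, i in zip(gates, top))

    token = rng.normal(size=d_model)
    print("dense:", (token @ dense_w)[:3])
    print("moe:  ", moe_forward(token)[:3])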

Following further instruction tuning on a curated dataset of 1 million examples, the final MM1 models demonstrated competitive performance across 12 multimodal tasks, such as visual question answering and captioning. Notably, MM1 could perform multi-image reasoning and few-shot learning, critical capabilities enabled by the team’s careful multimodal pre-training approach.

This paper builds upon previous research into areas like CLIP for learning visual representations from natural language supervision, and autoregressive models like GPT for text generation. However, it is one of the first detailed studies focused specifically on large-scale multimodal pre-training.

The researchers hope their insights will accelerate progress in the field. Meanwhile, Apple is reportedly in talks to integrate Google’s Gemini generative AI models into upcoming iPhone software.
