Meta released Llama 3.2 on September 25, 2024, just two months after Llama 3.1. This new version includes both lightweight text-only models (1B and 3B) and larger multimodal models (11B and 90B) capable of processing both text and images.
Llama 3.2 expands on the powerful models introduced in the 3.1 series, making advanced AI tools more accessible to developers.
The 1B and 3B text-only models, in particular, cater to smaller developers who may lack the computational power required for larger models. These models are “state-of-the-art in their class,” outperforming competitors such as Google’s Gemma 2 2B and Microsoft’s Phi-3.5 Mini.
The models also excel in multilingual text generation, summarization, instruction following, and rewriting tasks, all running locally on mobile and edge devices.
The larger 11B and 90B multimodal models process and understand both text and images, supporting image reasoning use cases such as document-level understanding (including charts and graphs), image captioning, and visual grounding (e.g., locating objects in images based on natural language descriptions). Meta reports that these models outperform competitors such as Claude 3 on image understanding tasks.
Although Llama 3.2 was trained on a broader collection of languages, only English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported.
The Llama 3.2 models are available for download on llama.com and Hugging Face, and they are also ready for immediate development on partner platforms like AWS, Google Cloud, Groq, Microsoft Azure, and NVIDIA.
Not Available in Europe
While Llama 3.2 offers new opportunities for AI developers, its most powerful models are restricted from the European market due to concerns over GDPR compliance. Meta’s Acceptable Use Policy clearly states that the rights to use multimodal models within Llama 3.2 are not extended to “individuals domiciled in, or a company with a principal place of business in, the European Union.”
This decision stems from ongoing challenges Meta faces with EU data protection authorities regarding the use of public data for AI training.
Regulatory Uncertainty
In May, Meta announced plans to train AI models using publicly available data from Facebook and Instagram users, including posts, photos, videos, stories, and reels. Private posts and messages were excluded from the training data, and Meta offered EU users a way to opt out of data sharing.
Stefano Fratta, Global Advocacy Director at Meta, highlighted that without training AI models on publicly shared content from EU users, the models would fail to understand regional languages, cultures, or social trends, leading to inadequate service for European users.
Although Meta had informed EU data protection authorities months in advance, the company was ordered to pause training on EU data by June. This was not due to any violation of the law, but to a lack of agreement among regulators about how the law should be applied, explained Mark Zuckerberg, Meta Founder and CEO, and Daniel Ek, Spotify Founder and CEO.
While Meta remains “highly confident” that its approach complies with European laws and regulations, this “regulatory uncertainty” led the company to pause plans to train its large language model (LLM) on public content shared by adults on Facebook and Instagram across the EU, and to withhold Llama 3.2’s multimodal models from Europe.
The Irish Data Protection Commission and the EU data protection authorities welcomed Meta’s decision.
Fratta described this as a “step backwards” for European innovation and competition in AI development. Zuckerberg and Ek called for “clear rules” to guide businesses, warning that without “clearer policies and consistent enforcement,” European businesses risk “missing out on the next wave of technology investment and economic-growth opportunities.”
Frustration Among Developers and AI Enthusiasts
The restrictions on Llama 3.2 have sparked frustration among developers and AI enthusiasts. Many have expressed disappointment on social media and forums over being cut off from these cutting-edge models, particularly as similar vision-based AI tools remain accessible elsewhere.
Some questioned why only Meta’s models were being blocked, demanding more transparency, while others urged Meta to work with the EU data protection authorities to resolve the issue.
One user on X voiced concerns about Europe’s competitive disadvantage, stating, “This is a huge step back in advancing technologies because certain countries have access to more current technologies and can develop upon it.”
The fear of Europe falling behind as AI evolves rapidly elsewhere was evident in a Reddit discussion, with a user warning, “If the EU doesn’t change that stance, there will be no more multimodal models in the EU.”
A LinkedIn post echoed this sentiment, noting, “While regulation is crucial for protecting privacy, we need smart policies that allow Europe to benefit from cutting-edge AI without compromising ethical standards.”
Calls for policy reform have intensified. An open letter signed by European companies, researchers, and developers urged the EU to reconsider its approach to data laws, highlighting the need for a balance between regulation and innovation.