Its all Ai all the time now though, not seen any mention of our reimagined future of floating heads hanging out together in quite some time.
I would’ve hoped to have seen Meta, in their supposed dedication to open source, actually fix it.
I would have imagined such a thing would be smaller and thus run on smaller configurations.
But since I am only a layman maybe someone can tell me why this isn't the case?
0. Introducing Llama API in preview
This one is good but not centre stage worthy. Other [closed] models have been offering this for a long time.
1. Fast inference with Llama API
How fast? and how must faster than others? This section talks about latency and there's absolutely no numbers in this section!
2. New Llama Stack integrations
Speculations with 0 new integration. Llama Stack with NVIDIA had already been announced and then this section ends with '...others on new integrations that will be announced soon. Alongside our partners, we envision Llama Stack as the industry standard for enterprises looking to seamlessly deploy production-grade turnkey AI solutions.'
3. New Llama Protections and security for the open source community
This one is not only the best on this page, but is actually good with announcement of - Llama Guard 4, LlamaFirewall, and Llama Prompt Guard 2
4. Meet the Llama Impact Grant recipients
Sorry but neither the gross amount $1.5 million USD, nor the average $150K/recipients is anything significant at Facebook scale.
Like, literally building smart homes.
Locally intelligent in ways that enable truly magical smart home experiences while preserving privacy and building trust.
But connected in ways that facilitate pseudo-social interactions, entertainment, and commerce.
Meta's biggest competitors are Apple and Amazon. This is the first clear opportunity they've had to leapfrog both.
1. Llama API Preview: Launched a limited preview of the Llama API, a developer platform simplifying Llama application development with easy API key creation, playgrounds, SDKs, and tools for fine-tuning and evaluation. It emphasizes model portability and privacy.
2. Fast Inference Collaborations: Announced collaborations with Cerebras and Groq to offer developers access to faster Llama model inference speeds via the Llama API.
3. Expanded Llama Stack Integrations: Revealed new and expanded Llama Stack integrations with partners like NVIDIA, IBM, Red Hat, and Dell Technologies to make deploying Llama applications easier for enterprises.
4. New Llama Protection Tools & Program: Released new open-source security tools including Llama Guard 4, LlamaFirewall, and Llama Prompt Guard 2, updated CyberSecEval 4, and announced the Llama Defenders Program for partners to help evaluate system security.
5. Llama Impact Grant Recipients: Announced the 10 international recipients of the second Llama Impact Grants, awarding over $1.5 million USD to support projects using Llama for transformative change.
Overall, the announcements emphasize making Llama more accessible, easier to build with, faster, more secure, and supporting its diverse open-source community.