
There is a new open source AI out there. It was published by Meta and it’s one of the most powerful models ever developed. And now a lot of people want to stop it. They say Meta is too dangerous and irresponsible. And what if they are right this time? Why would Mark Zuckerberg do this? Who should you believe and what if open source AI is a mistake?

Let me shine a light on all of these things that I’ve been following extremely closely as I believe what will transpire over the next months and years will profoundly affect all of our digital lives.

If you like this analysis, please support my work on Patreon.

On July 23, Meta published a new family of AI models, Llama 3.1, packaged into three different sizes. Two of the models are small enough to run on consumer hardware, at just 8 and 70 billion parameters. But we’ve had those for a while. What really changes everything is Meta’s largest model to date: with a whopping 405 billion parameters, it’s the biggest and most capable open source LLM ever created. [0]
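To get a sense of why the 8B and 70B models fit on consumer hardware while the 405B model does not, here is a back-of-the-envelope calculation (my own illustration, not from Meta's announcement) of the raw memory needed just to hold each model's weights at common precisions:

```python
# Rough memory needed just to store model weights (ignores activations,
# KV cache and runtime overhead) at common numeric precisions.
SIZES = {"8B": 8e9, "70B": 70e9, "405B": 405e9}
BYTES_PER_PARAM = {"fp16": 2, "int8": 1, "int4": 0.5}

def weight_memory_gb(params: float, precision: str) -> float:
    """Gigabytes needed to store `params` weights at the given precision."""
    return params * BYTES_PER_PARAM[precision] / 1e9

for name, params in SIZES.items():
    row = ", ".join(
        f"{p}: {weight_memory_gb(params, p):.1f} GB" for p in BYTES_PER_PARAM
    )
    print(f"Llama 3.1 {name} -> {row}")
```

At fp16 the 8B model needs about 16 GB (a single high-end consumer GPU), while the 405B model needs roughly 810 GB, i.e. a multi-GPU server rack; even aggressive 4-bit quantization leaves it around 200 GB.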

For Mark Zuckerberg, this is not just a product launch. This is a political maneuver.

The 405b model is now capable of rivaling and even outperforming ChatGPT. No open source AI has done this before.

But unlike GPT-4, anyone in the world can copy, modify and redistribute Meta’s flagship AI with almost no restrictions. [1]

Zuckerberg intentionally did this to undercut OpenAI’s business model, which also undercuts Microsoft, Google and other companies developing closed source AI. [2 – 4]

Closed source AI is AI you can’t copy or use outside of its owner’s proprietary apps and websites.

This partly explains why Meta would give away all of its AI technology to everyone for free. But that’s not all of it. More on that in a moment.

If Meta succeeds in capturing the market, open source will become the default standard for the entire AI industry. [4] And there are many who don’t like that prospect and have tried everything in their power to stop it. They’ve been running a scheme that’s been unfolding behind the scenes.

Using fears of existential threat, Google, OpenAI and Microsoft have been aggressively lobbying governments and legislators to regulate open source AI out of existence. [5]

They’ve had help from the Effective Altruism movement, which has literally funded staffers who whisper to policy-makers about how they should craft their bills. [5, 6]

However, this scheme was quickly uncovered by some excellent investigative journalism from a handful of outlets that noticed something was off. It probably helped that they were tipped off by some of the key figures in the industry. They saw that big tech was hyping up unfounded fears of AI in order to use regulation to kill the competition.

There is more about this in my video where I explain how these hypothetical extinction scenarios are not based on scientific evidence and do not reflect academic consensus. [7 – 9]

But that was at a time when even the most capable open source models were tiny compared to giants like GPT-4 or Gemini. ChatGPT operated with models that were at least ten times the size of the most cutting-edge open source AI. But Llama 3.1 changes everything.

We don’t know exactly how big GPT-4 is because OpenAI keeps that information secret. [11] But it is estimated at around 1.5 trillion parameters, still more than three times larger than Llama 3.1 405B. [4]

Yet, Llama is performing exceedingly well. According to Meta’s own benchmarks, which must be taken with a grain of salt, it’s just as capable as a model twice its size. [10]

In human evaluations, Llama 3.1 405B performs neck and neck with GPT-4 and Claude 3.5. OpenAI’s GPT may still come out on top, but that’s irrelevant to this conversation. [12]

What is relevant is that Llama is close enough. And that anyone in the world can get their hands on this powerful technology without asking for permission. All because Mark Zuckerberg decided so. But what if he made a mistake? What if we should have never open sourced such powerful models?

Meta’s open source models are being accused of aiding enemies and bad actors seeking to cause harm. US think tank Control AI has attacked Zuckerberg for refusing to take responsibility for damages and catastrophic outcomes his tech may cause. The Future of Life Institute says authoritarian regimes will weaponize Llama against their own populations, and that Llama will be used for cyberattacks and propaganda campaigns, abetting enemies and adversaries. [3]

There are fears that geopolitical rivals like China or Russia will use open source models to wage cyber-offensive campaigns against western targets. [13]

And that criminals everywhere will use Meta’s AI, strip it of its safeguards, and abuse it to build hacking tools or biological and chemical weapons. [14]

If we take these concerns at face value, it means we must ban the proliferation of AI technology and impose strict limits on who can and cannot have it. So it’s absolutely paramount to evaluate whether or not these concerns are valid. So, are they?

Who is right

When it comes to bad actors abusing open source AI, yes, that is indeed possible. For that reason, Meta released all of its models with very restrictive guardrails. They are heavily censored out of the box, which at this time also includes restrictions on election-related prompts. Zuckerberg doesn’t deny that intentional harm is something bad actors out there will always attempt. But even with all the safeguards in place, all it takes is one rogue developer to jailbreak an open source AI and remove its restrictions entirely. [3]

But that is not something only open models are vulnerable to. Closed source models are just as easily hackable and exploitable. Yes, their benefit is that the only way to access them is through closely monitored and moderated channels. But they are not impenetrable. Far from it. [25]

There is a wealth of published research showcasing multiple ways to bypass OpenAI’s safety alignment, evade its monitoring system and jailbreak ChatGPT. This can be done with a simple identity-shifting attack (a.k.a. role playing), carefully written to avoid triggering the moderation system, and easily achievable by anyone with a computer. It’s the underlying nature of all user-facing AI models to be inherently vulnerable. So if closed-source proponents care about harm and abuse so much, they should advocate for banning the development of any large models, open or closed. By that logic, you wouldn’t go out of your way to develop a 1.5-trillion-parameter AI in the first place. [25 – 27]

Keeping AI proprietary doesn’t protect it from foreign states hacking into corporate premises and stealing the tech on a thumb drive. China is notoriously proficient at this, and companies should not pretend their premises are impregnable just because of a software license.

Another often repeated fear is that AI will help terrorists make biological and chemical weapons. But AI is not going to give them information that isn’t already available via a simple Google search. Instruction manuals like that are pulled from publicly available information, and they will end up in the training data of closed source models just as much as open source ones. [14, 15]

If open source is the problem, then it should also be a problem that we have open source encryption that allows criminals and abusers to evade law enforcement. Wait… I shouldn’t be giving them any ideas. They actually want to ban encryption.

Open source is just a method of ensuring access to technology: that universities, small businesses and regular users aren’t locked out of the benefits of progress, and that they aren’t beholden to a small number of giant corporations controlling the tech.

Zuck is right on this. But let’s not delude ourselves that he is doing this purely altruistically. How could this strategy of giving your expensive tech away for free ever pay off for a trillion-dollar tech giant?

Remember how Android is the most widespread mobile OS in the world? [16]

Android is a free and open source software. It’s owned and developed by Google, but anyone in the world can take it without paying Google a cent. [17]

And yet, Google exercises a great amount of control over the Android ecosystem. How? If you want to sell a phone without the Google Play Store, customers will not be happy they can’t find their favorite apps on your phone. And if you are a developer and do not publish to the Google Play Store, most users will never find your app. [18, 19]

Zuckerberg is looking for this level of ecosystem dominance. It’s no accident that he mentions the word “ecosystem” 13 times in his open source manifesto. His ultimate goal is for Llama to be the dominant AI platform. He is working with other companies, including Nvidia, Oracle and Amazon, to offer Llama models on their cloud services. [20]

Zuck puts Meta in opposition to the walled garden model of the iPhone where Apple authoritatively dictates to and taxes developers. He is correct about Apple. His open sourcing of AI is good for everyone. But I wouldn’t trust his ambitions to be the King of open source AI either.

Luckily, open source startup funding is booming. There are more companies than just big tech, and they are raising funds to build models they release as open source. The most notable ones are Hugging Face, Mistral and Stability AI. [2]

This is where the long-term solution lies. Not everyone thinks open sourcing AI is dangerous. Dan Hendrycks of the Center for AI Safety thinks open source models allow for better research and understanding of potential risks. [14] Mozilla is also pooling together researchers, investors and developers to support the building of safe and open AI tools. [8] [21]

That seems to be the right answer, at least for now. The logic of open source is to enable more transparency and equity, so that all of humanity benefits from technological progress. Zuck is correct to point out that most of today’s big tech stands on the shoulders of open source giants like Linux, the Web, and the Internet itself. [22 – 24]

These video analyses take a lot of work and YouTube has been treating my channel like garbage for a long time. If you like what I do, please support me on Patreon and unlock early access, exclusive podcast and merch. Without your help, this work would not exist. Thank you.

Sources

[0] https://ai.meta.com/blog/meta-llama-3-1/

[1] https://huggingface.co/meta-llama/Meta-Llama-3.1-405B

[2] https://www.ft.com/content/a09e4aaf-be52-4a45-86a7-c6d1636526bc

[3] https://time.com/7002563/mark-zuckerberg-ai-llama-meta-open-source/

[4] https://www.engadget.com/llama-31-is-metas-latest-salvo-in-the-battle-for-ai-dominance-150042924.html

[5] https://www.politico.com/news/2023/12/03/congress-ai-fellows-tech-companies-00129701

[6] https://www.politico.eu/article/rishi-sunak-artificial-intelligence-pivot-safety-summit-united-kingdom-silicon-valley-effective-altruism/

[7] https://www.afr.com/technology/google-brain-founder-says-big-tech-is-lying-about-ai-human-extinction-danger-20231027-p5efnz

[8] https://www.ft.com/content/2dc07f9e-d2a9-4d98-b746-b051f9352be3

[9] https://carnegieendowment.org/2023/09/14/how-hype-over-ai-superintelligence-could-lead-policy-astray-pub-90564

[10] https://www.msn.com/en-us/news/technology/meta-llama-3-1-is-one-of-the-most-important-ai-releases-of-the-year-here-s-how-to-try-it/ar-BB1qyIJl

[11] https://cdn.openai.com/papers/gpt-4.pdf

[12] https://arstechnica.com/information-technology/2024/07/the-first-gpt-4-class-ai-model-anyone-can-download-has-arrived-llama-405b/

[13] https://www.latimes.com/business/story/2024-07-23/zuckerberg-aims-to-rival-open-ai-google-with-new-llama-ai-model

[14] https://www.wired.com/story/meta-ai-llama-3/

[15] https://fortune.com/2024/07/23/meta-new-llama-model-3-1/

[16] https://gs.statcounter.com/os-market-share/

[17] https://source.android.com/

[18] https://www.axios.com/2022/09/14/google-loses-appeal-eu-antitrust-ruling

[19] https://www.npr.org/2023/09/12/1198558372/doj-google-monopoly-antitrust-trial-search-engine

[20] https://about.fb.com/news/2024/07/open-source-ai-is-the-path-forward/

[21] https://blog.mozilla.org/en/mozilla/introducing-mozilla-ai-investing-in-trustworthy-ai/

[22] https://www.weforum.org/agenda/2023/12/ai-regulation-open-source/

[23] https://www.technologyreview.com/2024/03/25/1090111/tech-industry-open-source-ai-definition-problem/

[24] https://spectrum.ieee.org/open-source-ai-good

[25] https://not-just-memorization.github.io/extracting-training-data-from-chatgpt.html

[26] https://arxiv.org/abs/2310.15469

[27] https://arxiv.org/abs/2310.03693

Music by https://www.youtube.com/@co.agmusic

The Hated One

Feel free to comment here on YouTube: https://youtu.be/RhsKgvGue0w