LLaMA isn’t fully “open”. You have to agree to some strict terms to access the model. It’s intended as a research preview, and isn’t something which can be used for commercial purposes.
People could now generate images from text on their own hardware!
Note: Stable Diffusion let people generate images from text on their own hardware, and that moment kicked off a wave of open experimentation. The same shift is now happening for large language models, which people can run and explore on hardware they own.
I thought it would be a few more years before I could run a GPT-3 class model on hardware that I owned. I was wrong: that future is here already.
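As a concrete illustration of what running a GPT-3 class model on your own hardware can look like, here is a minimal sketch using the llama-cpp-python bindings for llama.cpp. This is an assumption about tooling, not something from the post itself: the model path is a placeholder and presumes you have already converted and quantized LLaMA weights with llama.cpp's tools.

```python
# Minimal sketch: local inference with a quantized LLaMA model via the
# llama-cpp-python bindings (assumes the package is installed and that a
# quantized model file already exists at the placeholder path below).
from llama_cpp import Llama

# Hypothetical path to a 4-bit quantized 7B model produced by llama.cpp's
# conversion and quantization tools.
llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin", n_ctx=512)

# Run a single completion entirely on local hardware -- no API calls.
output = llm(
    "Q: Why might running a language model locally matter? A:",
    max_tokens=128,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```

Everything here runs on the local machine; the trade-off is that you manage the weights, quantization, and memory requirements yourself rather than calling a hosted API.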
That Stable Diffusion moment is happening again right now, for large language models.
But there are a ton of very real ways in which this technology can be used for harm. Just a few:
• Generating spam
• Automated romance scams
• Trolling and hate speech
• Fake news and disinformation
• Automated radicalization (I worry about this one a lot)
LLaMA will likely end up more of a proof-of-concept that local language models are feasible on consumer hardware than a new foundation model that people use going forward.