DeepSeek just killed LLMs

DeepSeek released a new model called “DeepSeek-OCR”.

While this might seem like a minor release, the research suggests that the way old-school, traditional LLMs read text might be obsolete.

DeepSeek’s new approach is showing 10x to 20x compression of the input by not using… language. It feeds the model pixels instead of text tokens.

If a picture is worth a thousand words, then a video is even better:

Even Andrej Karpathy is rethinking LLMs after seeing this:

He goes on to say “Maybe it makes more sense that all inputs to LLMs should only ever be images.”

Never one to be outdone, Elon Musk chimes in:

This new DeepSeek release might have some AI labs rethinking whether they should switch from using text and tokens to using… pixels.

-Wes Roth

PS: To be clear, “DeepSeek just killed LLMs” might be a bit dramatic.

Instead of LLMs we may end up training VLMs, and instead of a tokenizer turning words into tokens, rendered images of the text will be used as input.
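The appeal of pixels over tokens is easiest to see with some back-of-the-envelope math. Here is a minimal sketch in plain Python — the patch size, compression factor, and words-per-token ratio are all illustrative assumptions, not DeepSeek’s published figures — comparing what one page of text costs as text tokens versus as compressed vision tokens:

```python
def text_token_count(n_words, words_per_token=0.75):
    """Rough rule of thumb: ~0.75 English words per text token."""
    return round(n_words / words_per_token)

def vision_token_count(width_px, height_px, patch_px=16, compression=32):
    """A ViT-style encoder cuts the page image into patch_px x patch_px
    patches; an OCR-oriented encoder can then compress those patches
    further (compression=32 is a hypothetical factor for illustration)."""
    patches = (width_px // patch_px) * (height_px // patch_px)
    return patches // compression

# A dense page of ~1,000 words, rendered as a 1024x1024 image.
text_tokens = text_token_count(1000)            # 1333 text tokens
vision_tokens = vision_token_count(1024, 1024)  # 4096 patches -> 128 tokens

print(f"text tokens:   {text_tokens}")
print(f"vision tokens: {vision_tokens}")
print(f"compression:   ~{text_tokens / vision_tokens:.0f}x")
```

With these toy numbers the same page costs roughly 10x fewer tokens as pixels than as text, which is the kind of ratio DeepSeek is reporting; the real gain depends entirely on the encoder and on how legible the rendered text stays at that resolution.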
