The impression I get with DeepSeek is that their goal is to largely do research for the sake of research.
I think it’s not fair to call DeepSeek open source. They’ve released the weights of their model, but that’s all. The code they used to train it and the training data itself are decidedly not open source. They aren’t the only company to release their weights, either. Meta’s Llama was probably the best open-weight model you could use prior to DeepSeek V3. As I see it, this is just a consequence of competition in a market where capital has nowhere else to go. Meta and DeepSeek likely want to prevent OpenAI from becoming profitable.
As an aside, although I personally believe in some aspects of China’s reform and opening up, it’s not without its faults. Tech companies in China often make the same absurd claims and engage in behavior that’s as deluded as anything in Silicon Valley.
My main point is that the limitations of the approach that people keep fixating on don’t appear to be inherent in the way the algorithm works; they’re just an artifact of people still figuring out how to apply this algorithm efficiently. The fact that massive improvements have already been found suggests that there’s probably a while yet before we run out of ideas.
I think this is our core disagreement. I agree that we have not pushed LLMs to their absolute limit. Mixture-of-Experts models, optimized training, and “reasoning models” are all incremental improvements over the previous generation of LLMs. That said, I strongly believe that the architecture of LLMs is fundamentally incapable of intelligent behavior. They’re more like a photograph of intelligence than the real thing.
I think that exploration for the sake of exploration is the correct view to have here.
I agree wholeheartedly. However, you don’t need to dump an absurd amount of resources into training an LLM to test the viability of any of the incremental improvements that DeepSeek has made. You only do that if your goal is to compete with OpenAI and others for access to capital.
However, some people do make a genuine effort to understand how human cognition works.
Yes, but that work largely goes unnoticed because it’s nowhere close to providing us with a way to build intelligent machines. It’s work that can only really happen at academic or public research institutions because it’s not profitable at this stage. I would be much happier if the capital currently directed toward LLMs were redirected toward this type of work. Unfortunately, we’re forced to abide by the dictates of capitalism, and so that won’t happen anytime soon.
Investing in the Chinese stock market isn’t usually profitable, though!