Thomas Wolf

šŸ³ Some notes on "DeepSeek and export control"

Finally took the time to go over Dario's essay on DeepSeek and export control, and to be honest it was quite painful to read. And I say this as a great admirer of Anthropic and a big user of Claude*

The first half of the essay reads like a lengthy attempt to justify that closed-source models are still significantly ahead of DeepSeek. However, it mostly refers to internal, unpublished evals, which limits the credit you can give it, and statements like « DeepSeek-V3 is close to SOTA models and stronger on some very narrow tasks » morphing into the general conclusion « DeepSeek-V3 is actually worse than those US frontier models, let's say by ~2x on the scaling curve » left me generally doubtful. The same applies to the takeaway that all of DeepSeek's discoveries and efficiency improvements were made long ago by closed-model companies, a claim that rests mostly on comparing DeepSeek's openly published $6M training cost with a vague « few $10M » on Anthropic's side, without much further detail. I have no doubt the Anthropic team is extremely talented, and I've regularly shared how impressed I am with Sonnet 3.5, but this long-winded comparison of open research with vague closed research and undisclosed evals left me less convinced of their lead than I was before reading it.

Even more frustrating was the second half of the essay, which dives into the US-China race scenario and totally misses the point that the DeepSeek model is open-weights, and largely open-knowledge thanks to its detailed tech report (and feel free to follow Hugging Face's open-r1 reproduction project for the remaining non-public part: the synthetic dataset). If both the DeepSeek and Anthropic models had been closed source, yes, the arms-race interpretation could have made sense, but having one of the models freely available for download worldwide, together with a detailed scientific report, renders the whole « closed-source arms race » argument artificial and unconvincing in my opinion.

Here is the thing: open-source knows no borders. Both in its usage and in its creation.

Every company in the world, be it in Europe, Africa, South America or the USA, can now directly download and use DeepSeek without sending data to a specific country (China, for instance) or depending on a specific company or server to run the core part of its technology.
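To make this concrete, here is a minimal sketch of what "download and use" means in practice: pulling the weights straight from the Hugging Face Hub and running them locally with transformers. The model id points at one of the smaller R1 distillations, and the prompt and generation settings are purely illustrative.

```python
# Minimal sketch: download open weights from the Hugging Face Hub and run
# them locally. No data leaves your machine after the initial download.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # a compact R1 distillation
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"  # device_map needs `accelerate`
)

messages = [{"role": "user", "content": "Why do open weights help resilience?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```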

And just as most open-source libraries in the world are built by contributors from all over the globe, we've already seen several hundred derivative models on the Hugging Face Hub, created everywhere in the world by teams adapting the original model to their specific use cases and explorations.
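As a quick illustration of this derivative ecosystem, the huggingface_hub client can search the Hub for community models built around DeepSeek-R1. This is only a sketch: the search string is an assumption and will catch just one slice of the derivatives.

```python
# Sketch: list some of the most-downloaded DeepSeek-R1 community derivatives
# on the Hugging Face Hub (the search string is illustrative, not exhaustive).
from huggingface_hub import HfApi

api = HfApi()
for model in api.list_models(search="deepseek-r1", sort="downloads",
                             direction=-1, limit=10):
    print(model.id, model.downloads)
```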

What's more, with the open-r1 reproduction effort and the DeepSeek paper, the coming months will clearly see many open-source reasoning models released by teams from all over the world. Just today, two other teams, AllenAI in Seattle and Mistral in Paris, independently released open-source models (Tülu and Mistral Small 3) which are already challenging the new state of the art (with AllenAI indicating that its Tülu model surpasses the performance of DeepSeek-V3).

And the scope is much broader than this geographical aspect. Here is the thing we don't talk about nearly enough: open-source will be more and more essential for our… safety!

As AI becomes central to our lives, resilience will increasingly become a very important property of this technology. Today we're dependent on internet access for almost everything. Without it, we lose our social media and news feeds, and can't order a taxi, book a restaurant, or reach someone on WhatsApp. Now imagine an alternate world in which all the data transiting through the internet had to go through a single company's data centers. The day that company suffers a single outage, the whole world would basically stop spinning (picture the recent CrowdStrike outage magnified a millionfold).

Soon, as AI assistants and AI technology permeate our lives to simplify many of our online and offline tasks, we (and the companies using AI) will depend more and more on this technology for our daily activities, and we will similarly start to find any outage-induced downtime in these AI assistants annoying, or even painful.

The best way to avoid such future downtime is to build resilience deep into our technological chain.

Open-source has many advantages, such as shared training costs, tunability, control, ownership, and privacy, but one of its most fundamental virtues in the long term, as AI becomes deeply embedded in our world, will likely be its strong resilience. It is one of the most straightforward and cost-effective ways to distribute compute across many independent providers, and even to run models locally and on-device with minimal complexity.
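Here is a sketch of that resilience argument in code, under hypothetical names and a hypothetical response schema: prefer a hosted inference provider, but fall back to a locally served copy of the same open-weights model (e.g. a llama.cpp or vLLM server) when the provider is unreachable. Only open weights make the second branch possible.

```python
# Sketch of a resilience pattern: try a hosted provider first, then fall back
# to a local server running the same open-weights model. URLs and the
# {"prompt": ...} / {"text": ...} schema are hypothetical placeholders.
import requests

REMOTE_URL = "https://api.example-provider.com/v1/generate"  # hypothetical hosted endpoint
LOCAL_URL = "http://localhost:8080/v1/generate"              # e.g. a local llama.cpp/vLLM server

def generate(prompt: str, timeout: float = 5.0) -> str:
    for url in (REMOTE_URL, LOCAL_URL):
        try:
            response = requests.post(url, json={"prompt": prompt}, timeout=timeout)
            response.raise_for_status()
            return response.json()["text"]
        except requests.RequestException:
            continue  # this backend is down: try the next, more local, option
    raise RuntimeError("no inference backend reachable")
```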

More than national pride and competition, I think it's time to start thinking globally about the challenges and social changes that AI will bring everywhere in the world. And open-source technology is likely our most important asset for safely transitioning to a resilient digital future where AI is integrated into all aspects of society.

*Claude is my default LLM for complex coding. I also love its character, with its hesitations and pondering, like a prelude to the chain-of-thought of more recent reasoning models like the DeepSeek generation.

Copyright Thomas Wolf 2017-2025