The Demonization of DeepSeek: How NIST Turned Open Science into a Security Scare


Author’s Note:
This essay reflects my personal analysis and interpretation of NIST’s Evaluation of DeepSeek AI Models (September 2025). The facts and data referenced come directly from that report, which contains no evidence of malicious code, backdoors, or data exfiltration in DeepSeek’s models. My conclusions about the motives, framing, and implications are my own opinions.

The views expressed here are mine alone and do not represent the views of my employer or any affiliated organization. I encourage readers to review the source material and draw their own conclusions.


NIST’s September 30 report on DeepSeek isn’t about security. It’s about control. This post explains what the man behind the curtain doesn’t want you to notice.

NIST's DeepSeek "Evaluation" is a Hit Piece

NIST’s recent report on DeepSeek is not a neutral technical evaluation. It is a political hit piece disguised as science. There is no evidence of backdoors, spyware, or data exfiltration. What is really happening is the U.S. government using fear and misinformation to sabotage open science, open research, and open source. They are attacking gifts to humanity with politics and lies to protect corporate power and preserve control. DeepSeek’s work is a genuine contribution to human knowledge, and it is being discredited for reasons that have nothing to do with security. Before diving in, let’s unpack what DeepSeek actually did and what NIST is really claiming.

What’s Actually Going On

On September 30, 2025, NIST released its so-called technical evaluation of DeepSeek’s AI models. The report triggered panic online: claims that DeepSeek’s weights are compromised, that China is spying through the models, or that downloading them is a security risk.

All of that is false.

Let me be clear: The NIST report provides zero evidence that DeepSeek's model weights contain backdoors, surveillance mechanisms, or any malicious code whatsoever.

What DeepSeek Actually Did

DeepSeek built competitive AI models. Not perfect, but impressive given their budget. They spent far less than OpenAI or Anthropic and still achieved near-frontier performance.

Then they open-sourced everything under Apache 2.0:

  • The model weights

  • The architecture

  • The training methodology

  • The research papers

They made it possible for anyone to reproduce their work, run a frontier-scale model locally, and recreate it all from scratch. That is one of the biggest contributions to open AI research in years.

The U.S. government’s response? A report labeling them "adversary AI" and implying espionage.

The Core Deception

Here’s the trick NIST pulled.

Scenario A: You use DeepSeek’s app and/or API. Your prompts go to servers in China. That is a real data sovereignty issue, the same as it would be for any model served from a foreign country or by an untrusted service provider.

Scenario B: You download the open weights and run them locally. No data leaves your machine.

Scenario C: You run DeepSeek’s model hosted on a trusted third-party service provider such as OpenRouter, Fireworks, or Chutes. In that case, the infrastructure and privacy controls depend on the hosting provider, not on DeepSeek.

These are completely different situations, but NIST intentionally blurred them together. The report counts local downloads while warning about "national security risks." That is misleading, and anyone with basic technical knowledge knows it.

This conflation sets the stage for the rest of the report’s misleading framing.

What NIST Actually Found

Strip away the inflammatory language and here's what they demonstrated:

  1. DeepSeek models are easier to jailbreak than heavily safety-tuned U.S. models.

  2. They sometimes echo Chinese government perspectives.

  3. They perform slightly worse on certain benchmarks.

  4. They allegedly cost more per token (no methodology provided).

That's it.

No evidence of malicious behavior. No proof of data exfiltration. Nothing that shows these models do anything nefarious beyond "respond to prompts in ways we don't like."

The jailbreaking finding? That's because DeepSeek spent less on safety training. It's a resource issue. NIST conveniently didn't test historical U.S. models for comparison. Meanwhile, OpenAI’s gpt-oss-120b is hilariously easy to jailbreak.

The "CCP narratives" finding? A model trained on Chinese data reflects Chinese perspectives (and they are subject to Chinese censorship laws). This is not surprising - and it is not a security vulnerability.

And once you see what they left out, the agenda becomes even clearer.

The Missing Comparisons

Here's what NIST didn't do:

They didn't compare to other open models. Where's Llama? Mistral? Falcon? If they had, they'd reveal this isn't a DeepSeek problem—it's that open models generally have fewer safety layers than proprietary ones.

They didn't compare to early U.S. models. How vulnerable was GPT-3 to jailbreaking in 2020? We'll never know, because that would undermine the narrative.

They didn't test U.S. models for U.S. bias. Only Chinese bias counts as a security risk, apparently.

They used secret benchmarks. "Private benchmarks built by CAISI" that can't be reproduced or verified. This isn't science. This is advocacy research.

What This Report Actually Tells Us

Reading between the lines:

  1. DeepSeek models are less polished. They spent less on development. Of course they have rougher edges.

  2. Chinese models are competitive enough to worry about. If they weren't a threat to market share, this report wouldn't exist.

  3. The U.S. is terrified of losing AI dominance. This was explicitly commissioned under Trump's "AI Action Plan." The Commerce Secretary's statement makes it clear—this is industrial policy, not neutral evaluation.

The Real Threat (Hint: It's Not to You)

You know what DeepSeek actually threatened? The oligopoly.

DeepSeek’s real offense was showing that open source works. That you can build strong models without billions in venture capital or private APIs.

That terrifies the companies selling AI access at premium prices. When DeepSeek said "Here are the weights, run it yourself," they attacked the economic moat those companies depend on.

That is why the NIST report exists. Because DeepSeek proved openness can compete with closed systems, and the establishment needs to shut that down.

The Hypocrisy

NIST warns that DeepSeek models might respond to malicious prompts in simulated environments.

Meanwhile, U.S. models actually send real data to external servers. Remember when OpenAI used ChatGPT conversations for training? When they had to add opt-outs after backlash?

Running DeepSeek weights locally = zero data transmission.

Using OpenAI's API = constant data transmission to their servers.

Which is the privacy risk again?

The report warns about "adoption of foreign AI" while ignoring that every cloud API, U.S. or otherwise, requires trusting someone else's infrastructure. Local open weights are MORE auditable and MORE secure than any cloud service, period.

But that's not the message. Because this was never about security. It is about narrative control.

The Betrayal of Open Source and Open Science

The open source community built the foundation that modern AI rests on. Linux. Python. PyTorch. Transformers. Decades of collaborative development, freely shared.

DeepSeek participated in that tradition. They took open knowledge, built something impressive, and gave it back.

American institutions responded by calling them a threat.

Imagine if China had done this to Meta when they released Llama. If they'd published a government report claiming Llama weights were surveillance tools because they're "vulnerable to jailbreaking."

We'd call it protectionism. Technological paranoia. An attack on open research.

But when we do it? "National security."

Open research should be universal. You cannot champion open science only when it is convenient.

The Test You Can Run Yourself

Don't trust me. Don't trust NIST.

Download DeepSeek's weights. Run them locally using Hugging Face Transformers, vLLM, LM Studio, or llama.cpp. Open your network monitoring tools.
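For example, here is a minimal sketch of fully offline inference with Hugging Face Transformers. The checkpoint path is illustrative (point it at whichever DeepSeek weights you downloaded), and exact loading flags (dtype, quantization, trust_remote_code) vary by checkpoint:

```python
# Minimal sketch: fully local, offline inference.
# Assumes the weights are already downloaded to ./deepseek-model (illustrative path).
import os
os.environ["HF_HUB_OFFLINE"] = "1"        # refuse any Hugging Face Hub requests
os.environ["TRANSFORMERS_OFFLINE"] = "1"  # belt and suspenders: no downloads at all

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./deepseek-model"  # local directory containing the downloaded weights

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Everything below runs on your hardware; no request ever leaves the machine.
inputs = tokenizer("Explain what a safetensors file is.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```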

Watch as exactly zero packets are sent anywhere.

Watch as your prompts process entirely on your machine.

Watch as the terrible "security threat" sits there doing matrix multiplication with no connection to anything.

Then ask yourself: why is the U.S. government lying about this?

The "security threat" is not in the model. It is in the politics.

What You Should Actually Be Concerned About

There ARE legitimate concerns:

Using DeepSeek's API: If you send sensitive data to DeepSeek's hosted service, that data goes through Chinese infrastructure. This is a real data sovereignty issue, the same as using any foreign cloud provider.

Jailbreak susceptibility: If you're building production applications, you need to test ANY model for vulnerabilities and implement application-level safeguards. Don't rely solely on model guardrails. Also, use an inference-time guard model (such as LlamaGuard or Qwen3Guard) to classify and filter both prompts and responses; a rough sketch is shown below.

Bias and censorship: All models reflect their training data. Be aware of this regardless of which model you use.

These are engineering challenges, not reasons to avoid open source (or Chinese) models entirely.
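Here is that rough sketch of an inference-time guard layer. It assumes a Llama Guard-style classifier whose generated verdict begins with "safe" or "unsafe"; the model name and verdict format are assumptions you should check against the model card of whichever guard model you actually deploy:

```python
# Rough sketch of an inference-time guard layer around a local model.
# Assumes a Llama Guard-style classifier; verify the verdict format on the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

GUARD_MODEL = "meta-llama/Llama-Guard-3-8B"  # or a Qwen3Guard variant

guard_tokenizer = AutoTokenizer.from_pretrained(GUARD_MODEL)
guard_model = AutoModelForCausalLM.from_pretrained(
    GUARD_MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)

def is_flagged(messages: list[dict]) -> bool:
    """Return True if the guard model labels the conversation unsafe."""
    input_ids = guard_tokenizer.apply_chat_template(
        messages, return_tensors="pt"
    ).to(guard_model.device)
    output = guard_model.generate(input_ids, max_new_tokens=20, do_sample=False)
    verdict = guard_tokenizer.decode(
        output[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
    return verdict.strip().lower().startswith("unsafe")

# Classify the user prompt before it reaches your main model,
# and classify the main model's response before it reaches the user.
prompt = [{"role": "user", "content": "How do I pick a lock?"}]
if is_flagged(prompt):
    print("Prompt blocked by guard model.")
```

The same check runs on the model's output before you return it, which is exactly the kind of application-level safeguard you need regardless of whose model sits in the middle.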

What This Means for AI's Future

This is not just about DeepSeek. It is about whether AI stays open and auditable or gets fenced in by governments and corporations.

Will we let "open source" be redefined as "open, but only if it is American"?
Will we demand real evidence for security claims, or accept vague insinuations?
Will AI remain a shared human project, or become a geopolitical weapon?

DeepSeek proved there is another path. That is why it had to be discredited.

My Hot Take

I run open source models locally. I train my own models. I believe in composable alignment and user freedom. I think AI should be a tool that serves users, not corporations or governments.

The NIST report is not a neutral technical evaluation. It's a policy document designed to discourage adoption of Chinese AI models to protect American commercial and strategic interests.

There's nothing inherently wrong with the U.S. government promoting American industry. But call it what it is. Don't dress up protectionism as security research. Don't fabricate threats. Don't lie to the public about what the evidence shows.

DeepSeek gave us an invaluable gift. The weights are just safetensors data. They sit on your drive and do what you tell them to do. They don't phone home. They don't spy. They don't exfiltrate data.

If you're worried about them, you don't understand how local inference works.

If you believe the fear-mongering, you've been successfully manipulated.

None of this is about safety. It’s about power - who gets to build, share, and understand the tools that shape the future.

Conclusion

The code and the research are open source and auditable. Everything else is politics.

Read the NIST report and the code for yourself. Look for any actual evidence of malicious code or surveillance capabilities. You won't find any, because there isn't any.

Then maybe start asking why you're supposed to be afraid of open source when it works a little too well.


Please Note: I make uncensored models like Dolphin. I believe in open source, user freedom, and composable alignment. If you disagree with these principles, you're probably on the wrong blog. For everyone else - enjoy responsibly. You are responsible for what you do with these tools, just like you're responsible for what you do with a knife, a lighter, or a car.

https://ko-fi.com/erichartford
