Let’s be frank: the single‑player notebook has felt outdated for a while now. We’re open‑sourcing its successor. Jupyter belongs in the hall of great ideas — alongside “Hello, world.” and “View Source.”
However, it fails modern data teams, data agents, and… data itself.
Teams need notebooks that are reactive, collaborative, and AI‑ready. If your workflow still depends on tools that are not ready for the next decade, your tools are holding you back.
Today, we’re announcing that Deepnote is going open source.
We’re doing this as a way to share what we’ve learned over the past 7 years powering the workflows of 500,000 data professionals and over 21% of the Fortune 500.
We’re also doing this because the center of gravity has moved: from single‑player JSON scrolls to reactive, AI‑ready projects that humans and agents can co‑author, review, and deploy. We’re opening the format and the building blocks so the community has a standard that is purpose-built for AI.
First principles: we still love notebooks
We are big believers in notebooks — full stop. They are:
Perfect for data exploration
Perfect for collaborating with AI agents
Perfect for bringing technical and non‑technical users together
In other words, they are the universal computational medium we’ve all been looking for.
Why companies are leaving Jupyter
You already know the papercuts because you live with them: no native integrations, confusing UX, shaky collaboration and versioning, brittle reproducibility, spotty visualization support, and no first‑class AI. On a small project you can squint; at team scale, those are blockers.
Meanwhile, the market is voting with its feet. Across the Fortune 1000, job postings that require Jupyter knowledge are down sharply year to date, with the most recent month deep in the red.
On the other side, activity in the core Jupyter repos is slowing. Look at the contributions graph and you’ll see low commit velocity and very few unique contributors, including multi‑month stretches with no commits at all.
What’s wrong with the old notebooks
Let’s start with the expensive elephant: internal data platforms built on top of Jupyter. The single‑player notebook model turns you into a platform vendor, and that’s not your job. Lots of teams did the sensible thing in 2019–2023: stand up JupyterHub, glue on extensions, wire up auth, deploy it on a k8s cluster, and call it a “platform.” It worked - until it didn’t. We see this in the wild. The total cost of ownership is now eating the roadmap for those teams, making them less nimble in an age where speed is everything. Rather than using cutting‑edge software, their users are stuck with a platform that hasn’t changed since 2019.
Where the cost hides (and compounds):
Productivity stall: Teams are still using pre‑AI, single‑player tooling, so output doesn’t compound year over year; you can’t just slap AI on top and call it a day - neither the data nor the rest of the collaboration infra is AI‑ready.
People tax: Platform/SRE/Sec/DS‑ops spend cycles maintaining JupyterHub, kernels, proxies, extensions, images, and brittle auth. Every upgrade is a change‑freeze prayer circle.
Connector upkeep: Building data connectors, rotating and sharing creds, chasing schema drift, and re‑testing pipelines on every vendor change.
Collaboration debt: No first‑class versioning/comments/reviews; unreadable diffs; flaky reproducibility; ghost kernel state. “Run All” becomes ceremony—and a source of incidents.
Compliance drag: Audit trails, RBAC, data residency, PII egress checks, and access reviews get bolted on per team.
Compute waste: Idle kernels and full‑notebook re‑runs burn budgets; without a reactive dependency graph, every tweak reheats the whole pipeline.
AI bolt‑ons: Chat helpers that can’t see state, can’t run tools, and can’t re‑run downstream safely; you end up owning fragile agent plumbing.
Opportunity cost: Your best engineers ship platform fixes instead of features. The backlog grows while competitors move.
Real‑world TCO snapshots
Fintech, 50‑person data org (self‑hosted JupyterHub)
Platform: ~3 FTE (platform/SRE/security/enablement) ≈ $2.0M fully‑loaded over 3 years.
Glue: 14 custom extensions + 9 connectors maintained across repos; ~6 hrs/week/analyst on environment drift and re‑runs.
Incidents: 4 auth/secret‑rotation breakages per year; 2 post‑mortems tied to stale kernel state.
After adopting the Deepnote format + reactive execution: 35–45% fewer re‑runs; 1.5 FTE reallocated to product work; ~$180k/year infra savings from fewer idle kernels.
This is also where skeptics usually jump in:
“But Jupyter is the standard.” It was. Standards stay standards only if they keep TCO low while meeting reality: mixed‑skill teams, CI, governance, and agents that lint, refactor, and re‑run.
“Is this a marketing gimmick?”
Hold us to the implementation: an open, human‑readable project you can diff and automate, round‑trippable with .ipynb, shaped in public by the community. That’s the opposite of lock‑in.
“AI assistants don’t belong in notebooks.”
They already work there—badly—when the substrate is wrong. With reactive blocks and explicit structure, agents become safe and useful instead of noisy.
Your next data project shouldn’t start with you typing “import numpy as np…”. It should start with a description of the problem, with AI scaffolding the project for you.
What you get (at a glance)
AI agents (single‑player authoring coming soon)
One workspace for technical and non‑technical users together
Seamless collaboration: native versioning, comments, reviews; human‑readable projects with clean diffs
Blocks that go beyond code: SQL queries, Python/R code blocks, charts, tables, inputs (text/number/select/slider/date), file upload, buttons/actions, layout/app pages, reusable modules
Beautiful apps: deploy your notebooks as interactive dashboards or data apps in a single click
100+ native integrations with governed secrets
Reactive execution so downstream blocks update automatically
All of this is open and designed to work across your stack. While the best experience is in Deepnote, we’re backporting key pieces so you can use them in Jupyter, VS Code, Cursor, or Windsurf.
Why it matters (no fluff):
Spend engineering time on product — not platform plumbing. Deepnote gives you the core out of the box.
Fewer re‑runs and less compute waste (reactive graph)
Less glue code and secret sprawl (native integrations)
Faster reviews and cleaner audits (diff‑able projects, governed workspaces)
No lock‑in (open standard; export to .ipynb if and when you need it)
Pure developer experience and happiness. A data workspace you actually look forward to opening can make a night‑and‑day difference in productivity.
What we opened — in English, not a changelog
A notebook format ready for the next decade.
Goodbye messy JSON with random metadata. Deepnote projects use a human‑ and AI‑readable text file you can review in Git and automate in CI. It’s extensible for the blocks of the future — not just code and markdown, but SQL, headings, bullet lists, charts, inputs, KPIs, and more. It’s AI‑readable because structure and dependencies are explicit, so agents can safely reason, edit, and re‑run. And when you need it, portability is built in — round‑trip to .ipynb without lock‑in.
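To make “review in Git” concrete, here is a minimal sketch of what review tooling sees when a project is plain text. The block syntax below is purely illustrative (it is not the real .deepnote schema, which lives in the repo), but the point holds for any text format: edits show up as ordinary unified diffs that Git, reviewers, and CI can all read.

```python
import difflib

# Two hypothetical revisions of a text-based project file.
# The block syntax is made up for illustration only.
old = """\
block: sql
name: daily_orders
source: SELECT * FROM orders WHERE day = '2024-01-01'
""".splitlines(keepends=True)

new = """\
block: sql
name: daily_orders
source: SELECT * FROM orders WHERE day = '2024-01-02'
""".splitlines(keepends=True)

# A plain-text format gives you ordinary unified diffs, so code
# review tools and CI pipelines can show exactly what changed.
print("".join(difflib.unified_diff(old, new, "before", "after")))
```

Compare that with diffing a .ipynb file, where the same one‑character change is buried in JSON metadata and serialized outputs.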
A format without lock‑in
You can move between .ipynb and the Deepnote format as needed. Keep your pipelines. Keep your habits. Keep your exit. We’re allergic to lock‑in because standards outlast companies.
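On the .ipynb side of that round trip, the standard nbformat library already lets you script conversions yourself. A minimal sketch (the filenames are placeholders, and the actual converter ships with Deepnote; this just shows there is no one‑way door):

```python
import nbformat

# Read an existing Jupyter notebook with the standard nbformat
# library. "analysis.ipynb" is a placeholder filename.
nb = nbformat.read("analysis.ipynb", as_version=4)

for cell in nb.cells:
    # Each cell keeps its type and source; a converter can map
    # these onto blocks (and back) without losing any code.
    print(cell.cell_type, "->", cell.source[:40])

# Writing back out is symmetric.
nbformat.write(nb, "analysis_roundtrip.ipynb")
```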
Reactive by default.
Change a parameter, and downstream blocks update — like spreadsheets, except it’s your real analysis, not a shadow model. No more "Run All" and pray. This is how notebooks become apps, not demos.
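Under the hood, reactivity is just a dependency graph plus selective re‑execution. Here is a toy sketch of the idea (not Deepnote’s actual scheduler; the block names and graph are made up):

```python
from graphlib import TopologicalSorter

# A toy reactive graph: block -> blocks it depends on.
deps = {
    "load": set(),
    "clean": {"load"},
    "chart": {"clean"},
    "kpi": {"clean"},
}

def downstream(changed: str) -> set[str]:
    """Everything that transitively depends on `changed`."""
    out = {changed}
    grew = True
    while grew:
        grew = False
        for block, parents in deps.items():
            if block not in out and parents & out:
                out.add(block)
                grew = True
    return out

def rerun(changed: str) -> None:
    # Re-execute only the affected blocks, in dependency order.
    dirty = downstream(changed)
    order = [b for b in TopologicalSorter(deps).static_order() if b in dirty]
    for block in order:
        print("re-running", block)

rerun("clean")  # re-runs clean, chart, kpi; "load" is untouched
```

The win runs in the other direction too: touch only `chart` and nothing upstream re‑executes, which is where the compute savings in the TCO section come from.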
AI that understands your work, not just your prompt.
Copilot helps you write SQL and Python in context. The next step is inevitable: a single‑player Agent that goes from a prompt to a whole notebook — code, queries, charts, inputs — then iterates as you nudge.
By open‑sourcing Deepnote, we’re doing the responsible thing: supporting the community while opening a path forward. If you’re all‑in on classic notebooks, keep them — you can convert anytime if you want a sleeker UI, data integrations, and AI, SQL, and collaboration as first‑class citizens. If you’re ready to move, you can bring your team over with one simple command.
An open invitation
With gratitude to Jupyter, we’ve built the Jupyter notebook for the AI era and the enterprise context. In that spirit, we’re building the open standard for AI notebooks, data dashboards, and apps: one that is human‑readable, diff‑able, AI‑ready, and fully compatible with .ipynb.
We decided to build Deepnote to bring beautiful yet powerful notebooks to the data science community. Along the way, we solved a lot of the problems that still exist in Jupyter today.
Today, Deepnote is a mature, enterprise‑grade replacement for existing Jupyter deployments, built on an open standard that’s human‑readable, AI‑native, and ready for the next decade.
An open standard isn’t truly open without the community. We’re committed to maintaining and developing the standard, but we need feedback from you, the community, to tell us what you’re missing. Go and check out our GitHub repos, create your first .deepnote notebook today, and help us define the future of data notebooks.