DORA and Progressive Delivery


While we were at ETLS, the DORA report was published. This report is a major marker in the year, both because it is a reliably well-constructed report on industry trends, and because enough people read it that it is a useful touchpoint across companies. You should absolutely go download it for free and read it yourself, but here are the parts I found especially interesting as they relate to Progressive Delivery.

New tools force us to consider old structures

The greatest returns on AI investment come not from the tools themselves, but from a strategic focus on the underlying organizational system: the quality of the internal platform, the clarity of workflows, and the alignment of teams.

I sat up when I read this, because it’s something we’ve been seeing ourselves – knowing what you’re trying to deliver is crucial to actually being able to deliver it. I know, that seems insultingly obvious, but I think it’s easy for us to get stuck trying to deliver what we were asked for, not what will actually meet the needs of the organization or user.

What is release, anyway?

The moment when new software has been released is worth celebrating, because its primary value can be determined only once the world can use it. These users may be customers, partners, co-workers, strangers, and even other technology systems.

It’s really hard to celebrate “the moment software has been released”, because in a SaaS world with rolling deployment, A/B testing, and ring deployment, “release” is not the sharp, bright line that it once was. Instead, we think about the moment of acceptance, when a user (or system) integrates our changes into their workflow. That is worth celebrating, indeed, because they have found value in our work. The DORA metrics and CI/CD have brought us this far – to the point of release. Now we need to go further and view acceptance as a part of the SDLC.
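To make the “release is no longer a sharp line” point concrete, here is a minimal sketch of a ring deployment gate. All the names and thresholds (the ring labels, the bucket counts) are illustrative assumptions, not anything from the DORA report; the idea is just that a change becomes visible to progressively wider rings, so there is no single release moment.

```python
import hashlib

# Hypothetical ring thresholds: each ring covers buckets below its percentage.
RINGS = [
    ("ring0-internal", 1),    # dogfooding: ~1% of buckets
    ("ring1-early",    10),   # early adopters
    ("ring2-broad",    50),
    ("ring3-everyone", 100),
]

def bucket(user_id: str) -> int:
    """Stable 0-99 bucket, so a user stays in the same ring across sessions."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def is_enabled(user_id: str, current_ring: str) -> bool:
    """True once the rollout ring covers this user's bucket."""
    threshold = dict(RINGS)[current_ring]
    return bucket(user_id) < threshold

# At ring3 the threshold is 100, so every bucket is covered.
print(is_enabled("alice@example.com", "ring3-everyone"))  # True
```

Note that hashing to a stable bucket, rather than rolling a die per request, is what makes a user's experience consistent while the rollout is in flight.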

One size does not fit all

To reduce friction and better support focused work, developers and teams can customize their AI tools. Most IDEs now offer features like toggling inline suggestions, enabling “on-demand only” modes, or adjusting the style and structure of suggestions.

. . .

The key is aligning AI support with the nature of the task and preferences of the developer.

This is an important point about using any tool, whether it’s AI or not – customization and delegating control of features to the user reduces friction and increases adoption.

Cute story, kid

Our findings suggest that absolute trust is not a prerequisite for AI-generated outputs to be useful. This pattern aligns with established behaviors; during our interviews, developers compared this to the healthy skepticism they apply to other widely-used resources, such as solutions found on Stack Overflow, where information is used, but not always implicitly trusted.

I thought this was especially interesting. These results indicate that survey respondents know that the AI is not reliable, and they are forming their impressions of usefulness around that knowledge. This reminds me that we have always known this about modeling: Alfred Korzybski noted in 1933, “A map is not the territory it represents, but, if correct, it has a similar structure to the territory, which accounts for its usefulness.” George Box paraphrased this as “All models are wrong, but some are useful.” I had been assuming that it was a real problem that AI-generated outputs were sometimes hilariously wrong, but it’s true that humans do have a bunch of settings for managing unreliable data – if your toddler tells you about their day chasing dinosaurs, you don’t think of them as lying to you, exactly.

The haters are neither all right, nor all wrong

The statistics on use are clear: the people who answer surveys self-report using AI tools. I am very cautious about assuming that the 84% number from, say, Stack Overflow respondents, or the 99% number from Atlassian, is representative of the true state of everything. Yes, an overwhelming majority of respondents appear to be using and getting a benefit from AI tools several days a week. But who is not answering the surveys? Who is a hater, and why? (I want to know this about every transformative technology.)

It’ll rot your brain (?)

We chose to investigate six sociocognitive constructs that explore how developers view themselves in relation to their work:

  • Authentic pride
  • Meaning of work
  • Need for cognition
  • Existential connection
  • Psychological ownership
  • Skill reprioritization

I’m glad the authors tackled this, since we have just started seeing some studies that say reliance on AI tools may reduce competence through lack of practice. Now, people have literally been complaining about offloading work since Plato threw shade on the laziness of writing as compared to memorization. That said, we honestly don’t know whether we will lose skills, or whether we would miss them if we did. Quick, recite the first book of the Aeneid! (to which almost everyone says a) the what? b) but it’s in a book, why would I need to?)

I think we won’t know the long-term effects of how AI works until the long term actually arrives. There’s a whole cohort of kids educated in public US schools who got “whole language learning” instead of phonics, and it turns out to have been a bad idea. On the other hand, “numeracy”, which is almost exactly the same concept, but for math, appears to have done no harm, and possibly some good. Sadly, neither of those effects is really measurable until about 10 years later.

What does an AI do?

The section of the report titled DORA AI Capabilities Model may be one of the most useful bits of the whole report. It identifies which parts of “AI” are useful and actually lead to better team outcomes. The capabilities are:

  • Clear and communicated AI stance (for your organization)
  • Healthy data ecosystems
  • AI-accessible internal data
  • Strong version-control practices
  • Working in small batches
  • User-centric focus

I think it’s notable how much those capabilities are about the interface between the organization and the AI, not about what kind of raw power the AI has. Before we AI our orgs, we need to spend some time asking ourselves very basic questions, like “where does it get its data?” and “how do we handle this new tool, on an ethical basis?”.

I want to especially highlight this quote, which goes to both version control practices and small batches:

Specifically, in the presence of more frequent rollbacks, AI’s positive influence on team performance is amplified.

That’s right: seamless, easy, no-fault rollbacks continue to be a good way to increase developer confidence and speed. AI doesn’t change that.
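A no-fault rollback is cheapest when it is a configuration flip rather than a redeploy. Here is a minimal feature-flag sketch of that idea; the flag name and the `checkout` functions are hypothetical, invented purely for illustration.

```python
# Hypothetical flag store; in practice this would be a flag service or config.
flags = {"checkout_v2": True}

def new_checkout(cart):
    return sum(cart) * 0.9          # e.g. a new discount rule under evaluation

def old_checkout(cart):
    return sum(cart)                # the known-good path, kept deployable

def checkout(cart):
    if flags["checkout_v2"]:
        return new_checkout(cart)
    return old_checkout(cart)

def rollback():
    # Instant, reversible, and no build pipeline involved.
    flags["checkout_v2"] = False

print(checkout([10, 20]))  # 27.0 while the flag is on
rollback()
print(checkout([10, 20]))  # 30 after rollback
```

Because the old path stays deployed and reachable, flipping the flag back is as safe as flipping it forward, which is exactly what makes frequent rollbacks a confidence-builder rather than an incident.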

Platform in what sense, exactly?

I love the DORA report, as you can tell from the fact that I read it every year, but I have trouble accepting that 90% of everyone is using one or more corporate platforms at work. I mean, maybe. But we were still explaining platform theory to most people 4 years ago, and enterprise adoption does not run that fast (although, tbf, enterprises are more likely to have platforms, since they grew out of IT orgs). Anyway, I believe that DORA got these numbers, and that being clear about what your platform exists for really does help, but I am not sure it is representative of everyone, especially organizations that are not software companies per se, but ended up accidentally doing a software.

Value-Stream Mapping

First, I want to say that VSM is an absolutely brilliant way to look at your whole workflow. And then I want to say, as a representative of Progressive Delivery, that “Value stream management (VSM) is the practice of visualizing, analyzing, and improving the flow of work from idea to customer.” is great, but why does it stop at the customer? The value in the value stream is crucially enhanced by understanding what happens to the work after it gets to the customer. ahem

The real win is using AI to improve the code review process itself, clearing the actual blockage in the system. That’s what focusing on flow is all about: You want to solve the whole system’s biggest problem, not just speed up a single step.

Teams with mature VSM practices can channel the productivity gains from AI toward solving system-level problems, ensuring that individual improvements translate into broader organizational success.
Without VSM, AI risks creating localized efficiencies that are simply absorbed by downstream bottlenecks, delivering no real value to the organization as a whole.

As AI use becomes more of a corporate norm, we need to decide how to align it with what our users need and what we know already helps us deliver value. It’s a tool, not an answer. Value-Stream Mapping allows us to apply that tool to adding value, not just speed for the sake of vibes.
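The bottleneck argument above can be sketched with a toy value-stream calculation. The stages and hours here are made up for illustration, not DORA data; the point is that flow efficiency is active time over total lead time, and the biggest wait, not the slowest typing, is what to fix first.

```python
# Toy value stream: (stage, active_hours, waiting_hours). Numbers are invented.
stages = [
    ("code",        8,  2),
    ("review",      1, 40),   # work sits in a queue waiting for a reviewer
    ("test",        2,  6),
    ("deploy",      1,  4),
    ("acceptance",  1, 24),   # waiting to learn whether users adopt the change
]

active = sum(a for _, a, _ in stages)
waiting = sum(w for _, _, w in stages)
lead_time = active + waiting
flow_efficiency = active / lead_time

# The stage with the longest wait is the system-level constraint.
bottleneck = max(stages, key=lambda s: s[2])[0]

print(f"lead time: {lead_time}h, flow efficiency: {flow_efficiency:.0%}")
print(f"biggest wait: {bottleneck}")
```

In this invented data, making the coding stage twice as fast barely moves lead time, while cutting the review queue in half transforms it; that is the “localized efficiencies absorbed by downstream bottlenecks” problem in miniature.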

Deming has entered the chat

The next chapter is about systems thinking, because we are learning that locally optimizing the speed of code development doesn’t improve code quality or stability by itself. If an organization is going to see benefits from AI adoption, it must think of itself as an organizational system, not a code factory where individuals make something in isolation.

Without intentional changes to workflows, roles, governance, and cultural expectations, AI tools are likely to remain isolated boosts in an otherwise unchanged system—a missed opportunity. To scale AI’s impact, organizations should invest in redesigning their systems. That means identifying constraints, streamlining flow, and enabling the conditions where local acceleration becomes organizational momentum.

Don’t eat your seed corn

This chapter is about the problem of junior developers. If we don’t keep educating new developers, we will have a demographic crisis. Senior developers often feel that AI is a force multiplier for them because they know how to manage and double-check outputs, and they have an intuitive feeling for the architectures they are familiar with. That is not always true for new or onboarding developers. Measuring productivity obscures the problem that we may not be measuring increasing competence or capability.

The best organizations will jointly optimize for productivity and skills development among their employees. In fact, in some of my research, great productivity was only achieved by insisting on simultaneous skill development. Measuring and driving for both is the path to sustainable performance.

AI tools are changing the processes of work in technology organizations, but the transformation is in work process, not importance or impact. These capabilities are an addition, not a replacement.

The conclusion of the report says:

We found with a high degree of certainty that when teams adopt a user-centric focus, the positive influence of AI on their performance is amplified. Conversely, in the absence of a user-centric focus, AI adoption can have a negative impact on team performance.

Progressive Delivery is about building teams that can break down the next silo, the one between “us” and “the user”. AI can help us do that, but only if we add it to a mindful and well-maintained system. It’s not going to solve anything, but it can help us solve lots of things.
