Authority Gradients


You’re a team lead. The team present a problem, and you have a great idea! You propose a solution, and everyone defers. You bask in the glory of having saved the day yet again. Except you didn’t solve the problem; you silenced the signal.

You’ve just seen the authority gradient in action.

You’re probably familiar with the HiPPO effect: the tendency to prioritize the opinion of the highest-paid person over everything else. There’s also the babble effect, where those who talk the most are seen as leaders. If you say enough words and do the Gish gallop, you can create an authority gradient (probably for all the wrong reasons).

At the other end of the spectrum, you can say nothing at all. This creates a different risk: unspoken concerns. It might sound harmless, but it can have real-world consequences. In my previous piece about aviation safety, I shared this example of a communication breakdown caused by an authority gradient.

Captain: “It’s spooled. Real cold, real cold.”
Co-pilot: “God, look at that thing. That don’t seem right, does it? Uh, that’s not right.”
Captain: “Yes it is, there’s eighty.”
Co-pilot: “Naw, I don’t think that’s right. Ah, maybe it is.”

Notice the tentative language? “That don’t seem right, does it?” and “Ah, maybe it is.” The co-pilot could see the danger but couldn’t overcome the authority gradient to communicate it clearly. The plane crashed into the Potomac River.

In aviation, crews apply Crew Resource Management (CRM) to level the authority gradient. Everyone is obligated to speak up, and the result is a higher level of safety. It’s just the same in tech: Google’s Project Aristotle found that psychological safety, an environment where folks can be vulnerable and ask questions, was the number one factor in high-performing teams.


Tech teams face a perfect storm of authority gradient problems. We operate in fast-moving systems where experts rely heavily on pattern-matching, which often misfires when conditions change. The senior developer who’s seen “this exact problem before” might miss that the context has subtly shifted.

Meanwhile, novices have a superpower: fresh eyes that see truths veterans miss. But they’re precisely the people who hesitate most to speak up. How many times has a junior developer noticed something in code review but stayed quiet because “surely the senior engineer knows what they’re doing”?

There’s also what I call the Spock factor. Tech culture prizes logic and rationality, but we systematically ignore tone, hierarchy, and social dynamics—the very forces that actually decide what gets heard. We act as if technical merit alone should win arguments, while authority gradients operate in the background, shaping every conversation.

Enter AI, and we have a completely new type of authority gradient: confidence without calibration. Large language models sound authoritative even when they’re completely wrong. They never hesitate, never show uncertainty, never say “I don’t know.” (I know a few people like this too 🙂).

AI combines the babble effect and the HiPPO effect (AI is expensive, after all!) into a tremendously effective machine that risks drowning out contrary opinions. Couple that with automation bias, and you’ve got the perfect tool for abusing the authority gradient.

Used well, AI can help flatten the gradient. Less senior folk can use it to generate drafts, polish their counterarguments, or finesse their ideas. With an AI co-pilot helping, perhaps they feel more confident about speaking up and levelling the field. On the other hand, if we switch off our critical thinking and treat AI output as “truth”, we risk shrinking dissent even further.

LLMs are co-pilots in the aviation sense, except that they carry zero accountability. Just as aviation retrained captains and co-pilots to challenge each other, we must train ourselves to challenge our AI counterparts.

So how do we get this right?

  • Start with humans - use people to establish the context, understand the constraints and concerns, and only then use AI to help synthesize or explore alternatives.

  • Treat AI output as a hypothesis - frame conversations around critique and verification rather than jumping straight to implementation.

And finally, remember AI has no skin in the game!

Don’t let AI become the new captain in the cockpit. Treat it as what it is: a co-pilot whose value comes from collaboration, not command. The goal isn’t to eliminate authority gradients entirely; expertise and experience should still guide decisions. The goal is to keep the gradient shallow enough that the quiet signals can still get through.

In aviation, that difference literally saves lives. In tech, it might just save your product from poor decisions and preserve your team’s ability to learn, adapt, and build something truly great.
