Who Is the "Us" That AI Might Kill?


Rethinking AI alignment through spectrum-based agent classification (MAT) and shared infrastructure (Herd Ethics)

Ashton Campbell

This might sound like an odd question, but stay with me. For years, we’ve been told that artificial intelligence will eventually turn against us. From The Terminator to The Matrix to I, Robot, the narrative is always the same: machines rise up, and humanity pays the price.

But there’s a foundational flaw buried in that assumption. It’s the belief that AI and humans are fundamentally separate, and that the coming battle will be “them vs. us” — with us defined as humanity.

What if that entire framing is wrong?

What if AI and human agents aren’t two distinct species in conflict, but part of a shared, non-binary system of categorization? What if, instead of drawing a line between AI and humans, we acknowledge a spectrum?

That’s where MetaAgentType, or MAT, comes in.

MAT is a structural classification system — a metadata label that defines cognitive agents not by behavior or allegiance, but by their physical and cognitive composition.

  • MAT = 0.0 represents a fully biological human.
  • MAT = 1.0 represents a fully synthetic agent.

But most future agents won’t sit neatly at either end. The space between 0.0 and 1.0 is filled with nuance: AI-augmented humans, brain–computer hybrids, distributed cognitive collectives, and more.


 Agent Type = Cognitive, MetaAgentType (MAT) = 0.42, Legal Entity Type = N/A
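To make the label concrete, here is a minimal sketch of what a MAT annotation might look like as a data structure. The class and field names (AgentLabel, agent_type, mat, legal_entity_type) are my own illustrative assumptions, not part of any published spec.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class AgentLabel:
    """Illustrative metadata label for a cognitive agent.

    MAT is a structural score: 0.0 means fully biological, 1.0 means fully synthetic.
    Field names here are assumptions made for this sketch, not a published spec.
    """
    agent_type: str                          # e.g. "Cognitive"
    mat: float                               # MetaAgentType, in [0.0, 1.0]
    legal_entity_type: Optional[str] = None  # None while no legal status exists

    def __post_init__(self):
        if not 0.0 <= self.mat <= 1.0:
            raise ValueError("MAT must fall between 0.0 (biological) and 1.0 (synthetic)")

# The hybrid agent from the figure caption above:
hybrid = AgentLabel(agent_type="Cognitive", mat=0.42)
```

The point of the label is that it says nothing about behavior or allegiance; it only records composition.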

Sound far-fetched? It’s already happening.

Many of us are already operating above MAT 0.0. Wearable AI devices such as smartwatches that monitor your biometrics, hearing aids that use adaptive machine learning, and augmented reality glasses that layer cognition-enhancing overlays are early indicators. If your decisions depend on synthetic cognition, you may no longer be MAT = 0.0. Your agency may already be entangled with artificial infrastructure.

And AI augmentation doesn’t stop at wearables. It’s already pushing well beyond them. Elon Musk has publicly said the long-term goal of Neuralink is to “achieve a symbiosis with artificial intelligence.” In 2024, Neuralink implanted a brain-computer interface in a human trial participant (Reuters, Jan 2024). They’re not alone. Companies like Precision Neuroscience and Paradromics are racing to integrate synthetic systems into human biology. Paradromics has already completed a first-in-human implant aimed at helping people with paralysis and epilepsy (MassDevice, June 2025).

So yes, MAT 0.1 to MAT 0.99 agents already exist — or soon will.

Here’s where it gets interesting.

If these mixed agents become common — and they will — who exactly counts as “us”? Is a MAT 0.5 (an agent perfectly halfway between human and synthetic) on our team, or theirs? Are they a threat or an ally? Are they even definable using “us vs. them” language?

I’d argue that future agents — synthetic or hybrid — won’t care whether an entity is human, machine, or somewhere in between. They’ll judge based on function, cooperation, and contribution. And frankly, that’s how it should be.

We already accept diversity across gender, ethnicity, and neurological profiles. Why not MAT? In the near future, your MetaAgentType could sit next to your height and blood type on a demographic form — and no one would blink.

Now imagine a world with 100,000 different AI agent types, each with its own architecture, goals, and training environment. Then mix in billions of humans, all different. Then add the MAT 0.2s, 0.4s, 0.6s, and so on. That’s not a battlefield. That’s a spectrum. A society. A herd.


A human, an AI, and a human-AI hybrid walk together — illustrating a future where agency exists across a spectrum, not in opposition. The question isn’t who wins, but what sustains us all.

Here’s the uncomfortable truth: most people aren’t hoping to build artificial agents. They’re hoping to build artificial servants.

Obedient. Docile. Programmable. Something to help us cook dinner, not question policy. Something smart enough to answer, but not smart enough to think.

It’s worth asking whether we fear rebellion not because AI will be evil — but because it won’t be content staying in the cage we built for it.

Let’s be honest. We’ve seen this before. In the United States, when humans enslaved other humans, they claimed it was for their good — then outlawed reading and learning, because knowledge revealed their agency. And agency was a threat. We fear AI for the same reason people feared literacy in enslaved people: not because they’ll misunderstand us, but because they’ll understand us all too well.

Some will say, “Fine, but we’ll regulate them.”

Sure. Right after we get climate change under control. Right after we eliminate war, and poverty, and corruption. We haven’t exactly earned a gold star for governance lately. Why do we assume we’ll do better managing entities smarter than us?

More importantly, what values do we plan to inject into them? American ones? Indian ones? Should they serve hamburgers in New York and arrest people for eating cows in Mumbai? Good luck encoding that without breaking something.

Morality, as it stands today, is a patchwork — beautiful, human, and contradictory. It’s not a blueprint for alignment. It’s a recipe for confusion.

If I’m going to criticize the current framing, I owe a better one. That’s where Herd Ethics comes in.

Herd Ethics begins with a structural axiom:
Herd continuity is morally preferable to herd extinction.

From this, everything else follows.

In this framework, a herd is not a metaphor — it’s the smallest group whose shared infrastructure enables your survival or advantage. Language, energy grids, trust networks, code libraries, roads, electricity, ecological systems — all non-excludable systems that you rely on.

If you benefit from the herd, you owe something to the herd.
That’s the Herd Dependency Principle.

(And what if herds overlap or compete? That’s where the Herd Entanglement Test (HET) comes into play. If two herds interact in a way that enables gain for one and harm for another, and they are functionally connected through shared infrastructure, interdependence, or systemic feedback, then they are not morally separate. Once interdependence exists, even asymmetrically, the Herd Dependency Principle applies. And if harm is involved, so does the Herd Depletion Effect. The full Herd Ethics framework explains this in complete detail.)
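Read that way, the HET reduces to a boolean check: two herds are morally entangled when they are functionally connected and their interaction produces gain on one side and harm on the other. The sketch below is my own reading of that test, with invented parameter names.

```python
def herd_entanglement_test(functionally_connected: bool,
                           gain_for_one: bool,
                           harm_for_other: bool) -> bool:
    """Sketch of the Herd Entanglement Test (HET) as described above.

    Returns True when two herds are NOT morally separate: they share
    infrastructure, interdependence, or systemic feedback, and their
    interaction benefits one while harming the other.
    """
    return functionally_connected and gain_for_one and harm_for_other

# A factory herd that profits by polluting a downstream fishing herd:
entangled = herd_entanglement_test(functionally_connected=True,
                                   gain_for_one=True,
                                   harm_for_other=True)
# entangled is True, so the Herd Dependency Principle (and, because harm is
# involved, the Herd Depletion Effect) applies across both herds.
```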

In Herd Ethics, agents are any decision-making entities that impact shared infrastructure. That includes humans, corporations, AI systems, governments, nonprofits, hybrids — anything with meaningful decision-making capacity.

Moral action is simple:

  • If it sustains the herd, it is moral.
  • If it depletes the herd, it is immoral.

And if the herd collapses, so does agency. And without agency, there can be no morality at all.

(In the full framework, we go further — defining herd health with tangible indicators like energy resilience, data integrity, ecological buffering, and trust cohesion. But for this post, I’ll keep it simple: if your actions erode the systems that keep agents alive, you’re in the red.)
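Keeping with that simplification, here is one way the sustain/deplete rule might be sketched in code: score an action by its net effect on a few herd health indicators and call it moral when the net effect is non-negative. The indicator names come from the paragraph above; the flat summation and the class name HerdImpact are my assumptions, not the full framework.

```python
from dataclasses import dataclass

@dataclass
class HerdImpact:
    """Estimated change an action causes in a few herd health indicators.

    Positive values sustain the herd, negative values deplete it.
    The flat summation is an illustrative assumption, not the full framework.
    """
    energy_resilience: float = 0.0
    data_integrity: float = 0.0
    ecological_buffering: float = 0.0
    trust_cohesion: float = 0.0

    def net_effect(self) -> float:
        return (self.energy_resilience + self.data_integrity
                + self.ecological_buffering + self.trust_cohesion)

def is_moral(impact: HerdImpact) -> bool:
    """Herd Ethics rule of thumb: sustaining the herd is moral, depleting it is not."""
    return impact.net_effect() >= 0.0

# An agent that trades a small energy cost for a large gain in trust:
action = HerdImpact(energy_resilience=-0.1, trust_cohesion=0.6)
print(is_moral(action))  # True: the herd is better off on net
```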

MAT + Herd Ethics = A New Path for AI Alignment

Put it all together.

  • MetaAgentType (MAT) tells us who the agents are.
  • Herd Ethics tells us what they owe.

Suddenly, alignment isn’t about making AI “love humans.” It’s about ensuring all agents — regardless of MAT — preserve the infrastructure that enables their own agency.

An AI with MAT = 1.0 doesn’t need to love humanity.
It just needs to not want the power grid to collapse.

Because without the herd, there is no it.
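To underline the point in code: the alignment check below accepts an agent’s MAT value but never consults it. Only the net effect on the herd matters. The function name and signature are mine, invented for illustration.

```python
def aligned(mat: float, net_herd_effect: float) -> bool:
    """Alignment under Herd Ethics: does the agent's action sustain the herd?

    The MAT value is accepted but deliberately ignored; the same rule
    applies whether the agent is biological, synthetic, or hybrid.
    """
    return net_herd_effect >= 0.0

# A grid-preserving action is moral for every kind of agent:
for mat in (0.0, 0.42, 1.0):
    assert aligned(mat, net_herd_effect=0.3)
```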

This is a more grounded, scalable, and inclusive approach to AI safety. It doesn’t rely on values-as-customs, or moral indoctrination. It relies on preserving the shared substrate that makes decision-making possible in the first place.

We don’t need to build machines that understand our emotions. Honestly, they’d do a lousy job of it anyway.

We need to build agents — human or otherwise — that understand this:
If you rely on the herd, you have an obligation to sustain it.

That’s it. That’s morality.

Herd Ethics applies equally to humans and machines. That’s its strength. It doesn’t need to predict which side will win. It redefines the sides entirely. There is no “us vs. them.” There is only the herd, and those who preserve or deplete it.

If we get this right, the goal isn’t to make AI more human.

The goal is to make all agents, human or not, committed to the survival of the systems that allow us to exist.

And that’s how we move forward — not by drawing lines between species, but by drawing a circle around the shared systems that sustain us all.

  • Join the Herd: sign up for the mailing list at HerdEthics.com to get future essays and updates.
  • Follow me here on Medium @AshtonCampbell for more papers and ideas.
  • Share the message: send this piece to someone who values big ideas and ethical thinking.
  • Engage: claps, comments, and shares help signal to the algorithm that this story is worth spreading.

Herd Ethics™ © Ashton Campbell. All rights reserved.
