How should open source software projects handle AI‑generated code?


The excellent KeePassXC, an offline, open‑source password manager, is at the center of a controversy over the use of AI‑generated code after the project’s collaboration policy and README added this paragraph:

Generative AI is fast becoming a first-party feature in most development environments, including GitHub itself. If the majority of a code submission is made using Generative AI (e.g., agent-based or vibe coding) then we will document that in the pull request. All code submissions go through a rigorous review process regardless of the development workflow or submitter.

The backlash from users and critics was so intense that on Sunday (the 9th) one of the project’s maintainers, Janek Bevendorff, published a post on the official blog detailing the project’s stance on AI‑generated code.

Worth highlighting from the post:

  1. KeePassXC does not have — and will not have — AI features.
  2. All PRs are reviewed by a human before being accepted (or rejected).
  3. Generative AI has been used for “creating pull requests that solve simple and focused issues, add boilerplate code and test cases.”

Regardless of what you or I think of AI‑generated code, its quality, or the ethical concerns about its use, I believe KeePassXC’s position is correct and the flood of criticism unfair.

First, it would be impossible to guarantee the rejection of all AI‑generated code. Tools like GitHub Copilot don’t leave a “signature” when used, and developers often mix machine‑generated code with code they wrote themselves.

“We’d rather have them transparently disclose the use of AI than hide it and submit the code against our terms,” Janek wrote.

Second, the premise that AI‑generated code inherently threatens software integrity doesn’t hold up. Yes, AI‑generated code can be poor, just as code written by inexperienced programmers or malicious actors can be poor or, worse, harmful. It’s just as easy to sabotage an open‑source project as a human as it is with the help of an AI. Just look at the recent npm supply‑chain attacks on popular packages, which show that human‑led sabotage is very real.

The only real risk would be a deluge of poor AI‑generated submissions that overwhelm KeePassXC’s five maintainers and create development bottlenecks. There are documented cases of this — perhaps most notably the cURL incident. Janek says this hasn’t happened (yet?) with KeePassXC and that “we will adjust our policies and methods as the need arises.”

Unless someone is willing to store all their passwords in a single *.txt file or an encrypted blob, a zero‑tolerance policy toward software containing AI‑generated code is unlikely to hold. KeePassXC deserves credit for bringing transparency to the issue. Other projects may not be so forthcoming; with closed, proprietary software, you can’t even inspect the code or the development practices behind it.

Questioning is legitimate — and, in fact, healthy. Janek agrees, closing the KeePassXC blog post with these words:

So please, be skeptical of AI. But also be skeptical of human strangers as we are to you. If our AI policy toppled your trust in us, ask yourself why you trusted us (or anyone) in the first place. You don’t know us, you trust our reputation, and we earned that by building a stable product, which we will continue to do. You have our full commitment that we will not integrate any AI features into KeePassXC, and we will not merge any code (human or AI) without tests and thorough review. We have high standards; please continue holding us to them, but let’s have a rational and informed conversation.
