Writings

AI transforms open source.

Open source didn't just give us free software. It gave us the practices, the tooling, and the culture that modern engineering runs on.

Now AI is changing how open source itself gets built — and the consequences will reach further than most people expect.

  1. The Old World
  2. What's Actually Changing
  3. What This Breaks — and the New Safety Net
  4. From the Ground Up — A Builder's View
  5. The Bigger Picture

1. The Old World

◆ Open source was built on a simple idea: software anyone can read, improve, and trust.

What made it work wasn't the code — it was the people.

  • Pull requests as conversation, not just patches
  • Code reviews as the transfer of institutional knowledge
  • Maintainers who say "yes", "no", "not yet", or "not like this"
  • Trust built slowly, over years, through reputation and consistency

◆ This model was slow by design. A PR sitting for two weeks wasn't dysfunction — it was deliberation. The async, human-paced rhythm of open source is what gave it credibility. Linux, PostgreSQL, Python — the projects that shaped the industry weren't fast. They were disciplined.

◆ Enterprise software followed this lead — more than most people realize.

The pull request model didn't start in corporate engineering. It came from open source and got adopted by teams who saw that forcing a conversation around every change made software better. Code review as a standard practice, not a gate — open source normalized that. The idea that a junior engineer's patch deserves the same scrutiny as a senior's, that no one merges their own work, that comments are a form of documentation — all of it traces back to how open source communities operated before the practice had a name.

The tooling followed too. Git was built for the Linux kernel. GitHub made it mainstream. CI pipelines, linting on every commit, automated test gates — practices that are now table stakes in any serious engineering organization were refined in public repositories long before they landed in enterprise toolchains.

Even the culture transferred. The best engineering organizations — the ones that ship reliable software and retain good engineers — tend to run like well-governed open source projects. Decision-making is documented. Disagreements happen in writing. Ownership is explicit. The maintainer model — one person or a small group with final say, accountable for the long-term health of a codebase — maps almost directly onto how good tech leads operate inside companies.

◆ That world still exists. But something new has entered the room.


2. What's Actually Changing

◆ The common narrative is "AI helps you write code faster." That's true, but it's the least interesting part.

What's really changing is the nature of participation.

◆ Agents don't just assist — they contribute. They open issues, write code, submit PRs, respond to review comments, and iterate. They work at a pace and volume no human can match.

◆ A few things that are already happening:

  • Pull request → prompt request — instead of a human writing a patch, a human writes a specification and an agent produces the implementation. The diff is real. The author is not.
  • CI/CD → continuous agent loops — pipelines that don't just test and deploy but actively refactor, optimize, and propose changes on a schedule. Ralph Loops run an AI coding agent repeatedly against a PRD (a product requirements document) until every item is complete — memory persists via git history, each iteration a fresh context. That's not a pipeline. That's an autonomous contributor.
  • Spec-driven development — the skill shifts from writing code to writing precise, unambiguous instructions that agents can execute correctly.

This is not the future. Teams are doing this today.
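The Ralph Loop described above can be sketched in a few lines. Everything here is illustrative, not a reference implementation: `run_agent` stands in for whatever agent invocation a team actually uses, and the PRD is assumed to be a markdown checklist where "- [ ]" marks open items.

```python
def read_open_items(prd_path):
    # Each unchecked "- [ ]" line in the PRD is an open work item.
    with open(prd_path) as f:
        return [line.strip() for line in f if line.strip().startswith("- [ ]")]

def ralph_loop(prd_path, run_agent, max_iterations=50):
    """Repeatedly run a coding agent against the PRD until every item
    is checked off. State lives in the repository (git history plus the
    PRD file itself); each iteration starts with a fresh context."""
    for iteration in range(max_iterations):
        open_items = read_open_items(prd_path)
        if not open_items:
            return iteration  # every PRD item is complete
        # Fresh context each time: the agent sees only the PRD and the
        # current working tree, not the previous iteration's transcript.
        run_agent(prd_path, open_items[0])
    raise RuntimeError("PRD not completed within iteration budget")
```

The design point is the one the bullet makes: the loop has no long-lived memory of its own. Progress is whatever landed in the repo and got checked off in the PRD, which is also what makes each run auditable after the fact.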

And the tooling is maturing fast. Review quality across AI tools is now being benchmarked — precision, recall, real bugs caught on real codebases. The gap between the best and worst tools is significant.

The question is no longer whether AI will change open source — it already has. The question is what breaks when it does, and what we build to replace what's lost.


3. What This Breaks — and the New Safety Net

◆ Three things are under pressure:

Provenance. Who authored this? In a world of agent-generated code, the commit log stops being a reliable record of human intent. Attribution becomes ambiguous. When a vulnerability is found, tracing it back to a decision — and a person — gets harder. Open source has always run on trust. Traceable authorship is the foundation of that trust.

Maintainer burden. Review volume scales with agent output. A single developer with good prompts can generate ten PRs a day. Maintainers who were already stretched are now facing a wall of machine-authored diffs. Reading code you didn't write is hard. Reading code nobody wrote — in the human sense — is different again. The math doesn't work without help.

The trust collapse risk. The real danger isn't malicious code. It's the gradual erosion of the norm that someone, somewhere, actually read this before it shipped. That norm is what separates open source from chaos.

◆ So the question becomes: how do you maintain review quality when the volume of code outpaces the humans available to read it?

◆ We started measuring AI code review tools the same way we measure software — precision, recall, F1 scores — against 100 real PRs across eight production codebases, including Redis, Tauri, Firefox iOS, cal.com, and aspnetcore.

The results show a wide spread in how tools make trade-offs. Some tools optimize purely for precision — they only flag something when they're very confident. Sentry scores 85% precision but only 14% recall. It almost never cries wolf, but it misses most of what's actually broken. At the other end, Qodo scores best on F1 — the balanced measure of precision and recall — meaning it catches the most real issues without flooding maintainers with noise. Different tools, different philosophies, measurably different outcomes.
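Precision, recall, and F1 here are the standard definitions. A minimal sketch of how a benchmark like this scores a single tool — the counts below are illustrative profiles, not the benchmark's actual data:

```python
def score_review_tool(true_positives, false_positives, false_negatives):
    """Score a code-review tool against ground-truth bugs in a PR set.
    precision: of the issues the tool flagged, how many were real?
    recall:    of the real issues present, how many did it flag?
    F1:        harmonic mean of the two, penalizing noise and misses alike."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts only: a cautious tool that flags rarely but
# accurately (high precision, low recall) vs. a more balanced one.
cautious = score_review_tool(true_positives=12, false_positives=2, false_negatives=74)
balanced = score_review_tool(true_positives=55, false_positives=25, false_negatives=31)
```

The trade-off in the text falls straight out of the harmonic mean: the cautious profile scores high on precision but its near-zero recall drags F1 down, so the balanced profile wins on F1 despite flagging more false positives.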

◆ The answer to "who is watching the agents?" is: better tooling, clearer norms, and humans who've made an explicit choice about what they must own.


4. From the Ground Up — A Builder's View

◆ I'll be specific, because the abstract version of this conversation is less useful than the lived experience.

I build Tusk: native PostgreSQL clients for macOS and GNOME. Zero telemetry, no Electron, built for the kind of developer who cares about the tools they use. It's a project I care about deeply and maintain personally.

When I started using AI in that workflow, two things happened.

◆ First, the barrier to entry collapsed. Features that would have taken me a week now take a day. Entire subsystems I'd have had to learn from scratch — platform APIs, native UI frameworks, packaging pipelines — became accessible because I could move fast through the unfamiliar parts and slow down where my judgment actually mattered.

◆ Second, I realized that speed is a double-edged sword. AI gives you an enhanced ability to go in the wrong direction very quickly. The F1 driver analogy: a faster car doesn't make you safer. It makes the cost of a mistake higher. Software engineering fundamentals — clear architecture, testability, knowing when not to build something — became more important, not less.

◆ The other thing I noticed: a lot of projects spring up. Most of them die. Not because the code was bad, but because nobody was committed to the long game. Staying power is the differentiator now. Anyone can ship v1. Maintaining something for five years, responding to issues, making considered decisions about what to accept — that's the moat.

AI lowered the floor. It didn't raise the ceiling. The ceiling is still human judgment, taste, and commitment.


5. The Bigger Picture

Open source has always been the place where software practices get invented before enterprise adopts them.

Continuous integration came from open source. Code review culture came from open source. The move toward shorter feedback loops, smaller PRs, documented decisions — all of it percolated from the open ecosystem into the enterprise before becoming standard.

AI-native development will follow the same path. What's being figured out right now in open source — how to maintain trust at scale, how to audit agent behavior, how to define what humans must own — will become the template for how large engineering organizations run in five years.

◆ That means the developer identity question is urgent, not theoretical.

The shift is from coder to supervisor. Not in a management sense — in an engineering sense. The job is to define intent clearly, review outputs critically, own the decision about what ships, and stay accountable for what the system does. That requires taste, judgment, and a deep understanding of the problem — things that are not in the model.

This is not a threat. It's a reframe.

◆ The engineers who thrive in this world are the ones who stop measuring their value in lines of code and start measuring it in the quality of the systems they're responsible for. Open source showed us that the best software comes from people who care about it over the long run.

That hasn't changed. If anything, it matters more now.


Talk given at VoxxedDays Amsterdam. Slides and references available at srirangan.net.