niceideas.ch
Technological Thoughts by Jerome Kehrli


My take on Vibe-Coding

by Jerome Kehrli


Posted on Thursday Apr 09, 2026 at 12:13PM in Computer Science


Following this post on LinkedIn, I thought I'd share one of my own experiences.

A fairly senior developer had an idea for a new product and spent a couple of weeks vibe-coding it. Hundreds of prompts, lots of iterative .md plans and, in the end, more than fifty thousand lines of code produced in a surprisingly short time.

When the project reached (what was deemed) a first "complete" state, it was handed over to me to take care of the remaining steps, mostly industrialization concerns: CI/CD pipeline and deployment automation, database migrations, security review, that kind of thing.

At first glance, I was honestly impressed. That was really a lot of work delivered very quickly.

But as an architect, I tend to look at things through a slightly different lens: overarching structure, code quality, consistency, maintainability, testability... all the boring but essential stuff.
So before jumping in, I spent some time trying to understand where things stood on those aspects.

Well, I guess I got a good grasp of the feelings Pandora must have experienced when opening her famous box.

While there were a few good ideas in there and clearly a lot of energy put into it, it was first and foremost lacking consistency all over the place: different architectural patterns used in parallel, some partially applied, some overlapping. A few parts of the codebase felt somewhat solid, but most of it lacked even a minimal design. There was dead code, quite a few tests (most of which didn't pass), and, again, no clear, unifying structure.

It was the most chaotic codebase I've ever looked at.

The result is that what took roughly two weeks to build would now require a significant amount of time to consolidate, refactor and bring to a proper production-ready state, mostly because it wasn't shaped progressively along the way. I estimated it at three months, minimum.

So the "2 weeks" is nothing more than an illusion.

To be fair, even with that, there's still a net gain. If I compare it to building everything from scratch without any AI assistance, I'd probably estimate something like 7–8 months. So ending up around 3–4 months total is still a very decent improvement.

But the second phase, the cleanup and consolidation, is where things can get really tricky and cause quite a headache if everything is deferred until the end.

The key takeaway

The real leverage with these tools isn't just speed. It's how one uses that speed.

If the architectural thinking, refactoring and validation are done continuously during the process, not postponed, then the outcome is very different. One can still move quite fast, but one also keeps the system coherent as it grows.

In that scenario, I'm pretty convinced the whole thing could have landed in a solid production-ready state in maybe a couple of months, without the heavy stabilization phase afterward.

So the issue isn't vibe-coding itself. It's doing it blindly for too long without stepping back, reviewing and reshaping.

There's a world of difference between "code that works" and "code that's ready to live in production."


How I personally use AI

For me, AI is a coding assistant. A very powerful one, but still really just an assistant.

I use it for things like these (among others):

  • writing utility functions I don't want to spend time on
  • helping with large refactorings
  • generating straightforward unit tests
  • wiring things end-to-end (API - service - DB - migration script)
  • quickly prototyping ideas

Basically, all the repetitive or low-value parts where it's amazingly helpful.
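To make the "straightforward unit tests" point concrete, here's a hypothetical sketch of the kind of low-value work I mean: a small utility function and the pytest-style tests an assistant can generate in seconds, which I then review line by line. The function and all names here are illustrative, not from the actual project.

```python
import re


def slugify(title: str) -> str:
    """Lower-case a title and collapse non-alphanumeric runs into hyphens.

    A typical throwaway utility I wouldn't want to spend time on myself.
    """
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")


# The kind of straightforward tests an AI assistant produces reliably:
# obvious nominal case, punctuation handling, and an edge case.

def test_basic_title():
    assert slugify("My take on Vibe-Coding") == "my-take-on-vibe-coding"


def test_collapses_punctuation():
    assert slugify("CI/CD -- pipeline!") == "ci-cd-pipeline"


def test_blank_input():
    assert slugify("   ") == ""
```

Tests like these are exactly where delegation pays off: they're mechanical to write, cheap to review, and still catch regressions.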

And every time it produces code, I go through it carefully. Line by line when needed. And quite frankly, I very frequently have things to say about those lines.

But where it also really shines is outside of pure coding:

  • challenging design ideas
  • helping explore (and even deploy!) unfamiliar tech
  • debugging tricky issues
  • spotting potential performance hotspots or security concerns
  • etc.

Sometimes it misses obvious things and that's fine. But sometimes it surfaces insights surprisingly fast, and that's where it really feels like a force multiplier.

At the end of the day, tools like Codex or Claude Code feel like superpowers.
But they're MY superpowers, not autonomous developers.


So how much faster in the end?

I'd say that my extensive use of both Claude Code and Codex (they each have their own strengths) gives me roughly a 4x boost in my overall TECHNICAL productivity.
(They definitely don't help with slides though. Forget about that. The output is rubbish.)

And that kind of gain is amazing. A real game changer.