Vibe Coding for Engineers: The Good Parts (continued)
Jan 02, 2026
If you haven’t read the first part, check that out first.
I'm an AI skeptic turned cautiously optimistic vibe-coding engineer, and here are a few specific engineering tasks that AI can accelerate.
Slightly Larger Tasks
After the brief warm-up with smaller tasks, I started delegating slightly larger tasks to LLMs.
Stuff I don’t use every day
For a couple of projects in Go, I needed to author more complex SQL queries than I'm used to. I haven't needed SQL in any advanced capacity at my day job for a while, so my knowledge of the syntax has gotten rusty. Having an LLM provide a fast first draft turned out to be a good accelerator. Of course I made my own edits, but the overall structure was better than what I would have come up with myself.
LLMs as a better-integrated StackOverflow
A lot of my browser history has been filled with Stack Overflow links, some useful, many not. LLMs are a game-changer here: in addition to having ingested all of Stack Overflow, they're also aware of the very specific context of the code you're trying to write. And they save me the effort of looking at my error messages, verbalizing them into a question, pasting it into Google, clicking through to Stack Overflow, browsing a bunch of 10-year-old code snippets, and then coming back and tweaking my source until it works.
The "edit → compile → test → look up error message → go to Stack Overflow → fix code" loop is much faster with LLMs.
Writing tests
For every person who loves to write tests, there are 10 others who don't. Vibe coding is a great aid for those other 10. Are all the generated tests of decent quality? Mostly, though not all. Still, the fact that tests exist at all is a big deal.
For personal projects, I’ve often skipped writing proper tests, but with LLMs I have no excuse not to. I do look over all the generated tests and typically find a few things to tweak or a few more things to add, but having the scaffolding and the skeleton already set up, built, and tested makes that job a lot easier.
Handling edge cases from the get-go
Good test suites already cover edge cases proactively. But even when asked to author code, I found that agents were better than I expected at recognizing and handling edge cases up front.
If I were the one writing that code, I'd likely have done a first pass for basic functionality, then a second pass to cover the exceptional cases. The LLM handled it in a single pass, even though I hadn't explicitly asked it to.
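Here's a hypothetical illustration (not from my actual projects) of what that single pass looks like. A hurried first version of `chunk` often handles only the happy path; the LLM-style draft below deals with the invalid chunk size, the empty slice, and the trailing remainder in one go:

```go
package main

import (
	"errors"
	"fmt"
)

// chunk splits a slice into groups of at most n elements.
// Edge cases handled up front: n <= 0 (error), an empty slice
// (returns no groups), and a final group shorter than n.
func chunk(xs []int, n int) ([][]int, error) {
	if n <= 0 {
		return nil, errors.New("chunk size must be positive")
	}
	var out [][]int
	for len(xs) > 0 {
		end := n
		if len(xs) < n {
			end = len(xs)
		}
		out = append(out, xs[:end])
		xs = xs[end:]
	}
	return out, nil
}

func main() {
	groups, err := chunk([]int{1, 2, 3, 4, 5}, 2)
	if err != nil {
		panic(err)
	}
	fmt.Println(groups) // [[1 2] [3 4] [5]]
}
```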
Unexpected win: Migrating code from one language to another
I provided the LLM with the skeleton of a Go project (which I had developed manually) and an older project of mine built on the Ktor framework for Kotlin, and asked it to migrate the Ktor code to its Go equivalent.
It was surprisingly accurate and actually worked well. It did not compile on the first attempt, but the errors were absolutely trivial to fix. From beginning to end, bootstrapping the project with the LLM took about a tenth of the time it would have taken otherwise. In fact, without the LLM I would have procrastinated on this migration far longer, and likely never gotten around to it at all. So that's a win right there.
Hopefully that’s enough encouragement for you to try this out.
It’s not all rosy, though, and here’s what I found didn’t work well for me.
A quick discussion of the Bad Parts
Don’t trust it blindly
Doing a deep, thorough code review of whatever it generates is essential to getting the most out of this tool. Think of it as an IDE on steroids, or a junior engineer still learning the ropes: you wouldn't let them commit code to production without any oversight, so don't do that with an LLM either. The obvious risks are security issues and a loss of maintainability as LLMs keep piling bad code on top of bad code.
Auto completions are super annoying
And mostly wrong. I have turned off auto-completions altogether and only access LLMs in explicit Agent Mode (mostly through the side panel).
Low-quality LLM-powered auto-completions were worse than having none at all: they actively distracted me from whatever I was trying to think and type, sowed confusion, and slowed me down.
Source control is your friend
Although you have the option to accept or reject changes after each agent interaction, it goes without saying that you need proper source control to keep the LLM agent's changes sane. Commit only after a thorough code review; anything changed since the last commit is then easy to review in isolation. None of this is unique to LLMs or vibe coding; it's just how good software engineering should already work.
Next Steps
I haven't yet given the LLM full rein over my projects. A lot of people have an LLM generate an architecture and an execution plan and then let it run unsupervised; I haven't gotten that far yet, and it's a topic to explore further in 2026! So far I have only delegated mid-level to lower-level tasks to the LLM, and I'm sure at some point I'll feel comfortable going a couple of levels higher as well.