February 14, 2026 · 12 min read · Engineering

Stop Calling It Vibe Coding: What AI-Assisted Development Actually Requires

The discourse around AI coding is dominated by two wrong camps. One says AI can't build anything real. The other says anyone can ship enterprise software overnight. Both are wrong. Here's what it actually takes.

Tags: ai, claude-code, software-development, vibe-coding, engineering

TL;DR

"Vibe coding" is a reductive term that misrepresents how AI-assisted development actually works. The anti-AI camp claims you need a CS degree and 10 years of production experience to ship anything real. The AI evangelists claim their 10,000-agent swarm builds enterprise software overnight. Both are wrong. AI-assisted development is a discipline that amplifies existing competence ... it does not replace it. The differentiator is not the tool. It is the person directing it.


The Term Nobody Asked For

"Vibe coding" needs to die.

Not because AI-assisted development is not real ... it is. I use it every day. But because the term reduces a legitimate engineering discipline to something that sounds like you are closing your eyes and hoping the code works. It implies the absence of skill, intent, and rigor. It suggests that the human in the loop is optional.

The term has become a weapon for both sides of a debate that is mostly fiction. One side uses it to dismiss AI-generated code as inherently unshippable. The other side uses it to sell the fantasy that anyone with a credit card and an API key can replace a development team overnight.

Neither is true. The reality is more interesting and more demanding than either camp admits.


The Anti-AI Gatekeepers

There is a vocal contingent of experienced developers who insist that AI-assisted development cannot produce production-grade software. Their argument usually comes down to this: AI generates plausible-looking code that falls apart under real-world conditions, and only someone with years of traditional development experience can catch the failures before they reach production.

They are half right. AI does generate plausible-looking code that can fall apart. Context windows degrade over long sessions. Hallucinations happen. Generated code can introduce subtle security vulnerabilities, miss edge cases, or implement patterns that work in isolation but create maintenance nightmares at scale.

Where they are wrong is the conclusion. The implication that AI-assisted development is fundamentally incapable of producing professional-grade systems is contradicted by the evidence. Not theoretical evidence. Actual shipped systems.

I built and operate a production content platform. Every line of code, every systemd timer, every database query pattern was built with AI assistance. The platform includes:

- a Next.js 15 application on Cloudflare's edge network
- a self-hosted social media scheduling system with triple-layer safety nets against data loss
- enterprise-grade monitoring with four independent health check systems running on staggered intervals
- automated content pipelines processing 158 queued posts across three platforms
- a newsletter system with 36 pre-written issues
- infrastructure hardened against the specific failure modes I discovered through operating it

The monitoring system alone has a postiz-safe-start script that runs as a pre-start check before the scheduling engine boots, preventing a Temporal workflow replay from firing past-due posts. A watchdog runs every 5 minutes checking HTTP health, container states, Redis connectivity, database integrity, and LinkedIn token expiry. A comprehensive monitor runs every 30 minutes with log rotation, disk space alerts, and duplicate schedule detection. These are not demo features. They exist because I encountered the failure modes they prevent and built the defenses against them.
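None of those scripts are published here, but the watchdog pattern is simple to sketch. The following is an illustrative reduction, not the actual script: the endpoint, host, and log path are placeholders, and the real versions check far more than this.

```shell
#!/usr/bin/env bash
# Illustrative watchdog sketch. The endpoint, host, and log path are
# placeholders, not the real monitoring configuration described above.
set -euo pipefail

ALERT_LOG="${ALERT_LOG:-/tmp/watchdog-alerts.log}"

alert() {
  # Append a timestamped alert line; a real watchdog might also notify a human.
  printf '%s WATCHDOG: %s\n' "$(date -Is)" "$1" >> "$ALERT_LOG"
}

check_http() {
  # Fail (and alert) if the endpoint is down or returns a non-2xx status.
  curl -fsS --max-time 10 "$1" > /dev/null || { alert "health check failed: $1"; return 1; }
}

check_redis() {
  # Fail (and alert) if Redis does not answer PING with PONG.
  redis-cli -h "$1" ping 2>/dev/null | grep -q PONG || { alert "redis unreachable: $1"; return 1; }
}

# A real deployment would invoke these from a systemd timer every 5 minutes:
#   check_http "http://localhost:3000/api/health"
#   check_redis "localhost"
alert "watchdog sketch loaded"
```

The point is not the specific checks. It is that each check is a small, composable shell function driven by a timer, so a single failing probe cannot take the others down with it.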

None of that happened because I typed a single prompt and let the AI run. It happened because I brought 24 years of context to the conversation.


What I Actually Bring to the Table

I wrote my first website in 2002. I was 14, using the earliest version of Dreamweaver, hand-editing HTML before CSS was standard practice. That was years before anyone imagined AI writing code.

I am not a formally trained software engineer. I do not have a computer science degree. I have never worked as a full-time developer at a FAANG company. What I have is over two decades of building things on the web ... learning how servers work, how databases fail, how architectures scale and where they break, how teams ship code and why they ship bugs. I learned by doing. I learned by breaking things and fixing them. I learned by working with teams that did it right and teams that did it catastrophically wrong.

That background is the differentiator. Not the AI.

When I sit down with Claude Code, I am not asking it to think for me. I am directing an engineering effort. I know enough about web architectures, frameworks, and infrastructure patterns to evaluate whether the code it generates is correct, appropriate, and maintainable. I know when a database query is going to cause contention at scale. I know when an authentication pattern has a security gap. I know when a caching strategy is going to create stale data problems. I do not always know the exact implementation ... but I know the right questions to ask, and I know when an answer does not smell right.

That is not vibe coding. That is engineering with a different set of tools.


The Shift From Coder to Architect

AI did not make me a developer. It changed what kind of developer I am.

Before AI-assisted tooling, I spent most of my time in the terminal trenches. Writing code by hand, debugging line by line, dealing with the mechanical work of turning architectural decisions into running software. The ratio was something like 20% thinking about the right solution and 80% implementing it.

AI inverted that ratio.

Now I spend most of my time on the work that actually determines whether a project succeeds: researching the right approach before writing a single line, evaluating architectural tradeoffs, making decisions about infrastructure, security, and scalability that will compound over the life of the system. The implementation happens faster, but the decisions that guide it take the same amount of thought they always did.

This is the part the "AI will replace developers" crowd does not understand. The bottleneck was never typing speed. The bottleneck is knowing what to build, why, and how to make it survive contact with real users.

I have worked with CTOs who ship bugs for the sake of shipping fast. Teams where quality only matters when it shows up on a performance review. Organizations where the definition of "done" is "it compiles and the demo works." The technology was never the problem in those environments. Leadership was. Decision-making was. Having the discipline to stop and research before building was.

AI does not fix any of that. If anything, it makes bad habits worse. Faster implementation with bad direction means you arrive at the wrong destination sooner.


The AI Fantasy Camp

On the other end of the spectrum are the AI evangelists who claim they have built an army of autonomous agents that develop, test, deploy, and market software while they sleep.

I have researched these systems extensively. Multi-agent frameworks, autonomous coding pipelines, n8n workflows with chains of LLM calls, systems that supposedly orchestrate thousands of agents to handle everything from code generation to customer support.

The consensus from that research: it is possible in theory and largely fiction in practice at the scale these people claim.

The operational costs alone make the math collapse. Running Claude or GPT-4 class models through complex multi-agent pipelines with meaningful context at each step costs real money. Not hobbyist money. Enterprise budget money. The kid on LinkedIn showing a video of their agent system "cranking out production apps in a day" is either running a demo that will not survive its first real user, or spending thousands of dollars per month on API calls they are not disclosing.

Then there are the technical limitations that compound at scale. Context degradation across long agent chains. Hallucinations that cascade when one agent's bad output becomes another agent's input. Security vulnerabilities that nobody reviews because the whole point was to remove humans from the loop. Error handling that works in the demo and breaks under any condition the training data did not anticipate.

I have seen the output from these systems. The code compiles. The demo works. The architecture falls apart the first time you need to debug a production incident at 2 AM and realize no human actually understands how the system works because no human actually built it.

That is not engineering. That is an expensive random code generator with good marketing.


The Middle Ground Nobody Talks About

Here is what AI-assisted development actually looks like when it works:

Before every task, I research. Not a quick search. Thorough, targeted research into the specific technologies, patterns, and tradeoffs relevant to the problem. I use Claude to conduct that research, cross-reference it with other sources, and synthesize it into a decision. The AI is faster at gathering information than I am. I am better at evaluating which information matters.

I maintain comprehensive documentation. My project has a master reference document, a voice guide, an editorial calendar, deployment rules, content operations workflows, and modular configuration files. This documentation is not decoration. It is the context that makes AI assistance effective. Without it, every session starts from zero. With it, the AI has the same institutional knowledge a senior team member would.

I ask the right questions at the right time. Before implementing a monitoring system, I asked about Temporal workflow replay behavior and learned that changing a publish date in the database does not prevent Temporal from firing the scheduled action. That one question prevented a data loss scenario that would have gone undetected until posts started publishing at wrong times. The AI provided the answer. I knew to ask the question.
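That scenario generalizes: if a scheduler can replay work on boot, you gate the boot on a past-due check. Here is a minimal sketch of the pattern, not the actual postiz-safe-start script; the real version queries the scheduler's database, while this one uses hard-coded sample timestamps.

```shell
#!/usr/bin/env bash
# Sketch of a pre-start safety gate (the real postiz-safe-start script queries
# the scheduler's database; here the timestamps are hard-coded samples).
set -euo pipefail

# Returns 0 when no timestamp (epoch seconds) is in the past, 1 otherwise.
# If this fails, the service manager refuses to start the scheduling engine,
# so a workflow replay cannot fire past-due posts.
check_no_past_due() {
  local now ts
  now=$(date +%s)
  for ts in "$@"; do
    if [ "$ts" -lt "$now" ]; then
      echo "refusing to start: past-due timestamp $ts" >&2
      return 1
    fi
  done
}

# Sample values standing in for queued publish dates from the database.
in_an_hour=$(( $(date +%s) + 3600 ))
if check_no_past_due "$in_an_hour"; then
  echo "safe to start"
fi
```

One way to wire this in is as an `ExecStartPre=` command on the scheduler's systemd service unit, so a failing check blocks the start entirely rather than merely logging a warning.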

I catch mistakes. AI makes errors. Context windows degrade. Sessions that run too long produce lower quality output. I have learned the patterns ... when to clear the session, when to compact, when to start fresh. I have caught database table name casing issues that would have caused silent query failures. I have caught bash quoting bugs in monitoring scripts. I have caught networking configuration issues that would have left services inaccessible. The AI wrote the code. I verified it worked.
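The bash quoting class of bug deserves an example, because it fails silently. This is a contrived reproduction, not the actual monitoring script:

```shell
#!/usr/bin/env bash
# Contrived reproduction of an unquoted-expansion bug (not the actual script).
set -euo pipefail

path="/tmp/my app.log"   # a path containing a space
touch "$path"

# BUGGY: unquoted, $path word-splits into two arguments, so [ -f ... ]
# errors out and the file is reported missing even though it exists.
[ -f $path ] 2>/dev/null && echo "found (unquoted)" || echo "missed (unquoted)"

# FIXED: quoting preserves the path as a single word.
[ -f "$path" ] && echo "found (quoted)"

rm -f "$path"
```

The buggy line prints "missed (unquoted)" even though the file exists, which is exactly the kind of silent failure that survives a demo and surfaces in production.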

I iterate toward quality. My definition of an MVP is probably closer to what most teams consider production-ready. I do not ship the first thing that works. I harden it. I add monitoring. I add safety nets. I test failure modes. That discipline existed before AI and it exists because of how I approach building software. AI makes the iteration faster. It does not make it optional.


The Formula

If there is a formula for making AI-assisted development work, it is this:

Competence + Discipline + Research + AI = Results

Remove competence and you get demo-quality code that breaks in production. Remove discipline and you get technical debt that compounds faster because AI helps you write more of it. Remove research and you get confident implementations of the wrong solution. Remove AI and you get the same results, slower.

The order matters. AI is the multiplier at the end, not the foundation. Every attempt I have seen to put AI first ... to use it as a replacement for understanding rather than an amplifier of it ... produces the same result: something that looks impressive until it has to work.


Who This Actually Works For

AI-assisted development is not for everyone, and pretending otherwise does a disservice to the people who try it and fail.

It works for people who already understand enough about software systems to evaluate output quality. You do not need a CS degree. You do not need 10 years at Google. But you need to know what good looks like. You need to have built enough things to recognize when an architecture will scale and when it will collapse. You need enough pattern recognition to catch the 5% of AI output that is subtly wrong while efficiently using the 95% that is correct.

It works for people with the discipline to research before implementing, document as they go, and verify after deploying. If your workflow is "prompt, accept, ship," you will produce garbage quickly. If your workflow is "research, prompt, evaluate, test, iterate, document," you will produce professional-grade software at a pace that was not possible five years ago.

It does not work for people who want to skip the learning entirely. AI is not a shortcut around understanding. It is a force multiplier for understanding you already have. A person with zero development background will produce better results with AI than without it, but they will not produce professional results until they develop the judgment to evaluate what the AI gives them.


The Real Threat to Professional Developers

The developers most threatened by AI are not the ones who think they are. Senior engineers with deep architectural knowledge and strong judgment are more valuable than ever because those are exactly the skills that make AI assistance effective.

The developers at risk are the ones whose primary value is implementation speed. The ones who could always write code fast but never invested in understanding why systems fail, how architectures evolve, or what users actually need. AI has commoditized fast implementation. It has not commoditized good judgment.

The other group at risk is the one that refuses to adapt on principle. "I do not use AI because real developers do not need it" is the same energy as "I do not use an IDE because real developers use vim" or "I do not write tests because real developers write correct code the first time." It is not a professional standard. It is pride dressed up as principle.

The tool changed. The craft did not.


The Bottom Line

AI-assisted development is real, it works, and it produces professional-grade results when directed by someone with the competence to direct it.

It is not magic. It is not a replacement for skill. It is not going to let someone with no development background build enterprise software by typing a prompt. And it is definitely not a fleet of autonomous agents that builds and ships products while you sleep.

It is a tool. The most powerful development tool that has existed in my 24 years of building for the web. But like every tool before it, the results depend entirely on the person using it.

Stop calling it vibe coding. Start treating it as what it is: a new discipline within software engineering that rewards the same qualities that have always separated good engineers from bad ones ... judgment, discipline, and the willingness to understand what you are building before you build it.
