There's a term circulating in developer circles: vibe coding. The practice of letting AI write your code while you "vibe" — vaguely describing what you want and accepting whatever comes back if it seems to work.
The defenders call it democratization. The critics call it reckless. Both miss the deeper problem.
Vibe coding isn't just about code. It's the canary in the coal mine for something happening across every domain AI touches. Writing. Design. Analysis. Strategy. We're developing a new kind of literacy — the ability to produce expert-level output without expert-level understanding.
And nobody's asking whether that's a future we actually want.
The Calculator Problem
When calculators became ubiquitous, educators worried students would lose the ability to do math. They were half right. What students lost wasn't arithmetic — it was the feel for numbers. The intuition that tells you your answer is wrong before you check it. The sense of magnitude, proportion, relationship.
AI is the calculator problem at scale. Not just for math. For everything.
The issue isn't that AI-assisted work is bad. It's that the person doing the work can no longer distinguish good from bad. They've outsourced not just the execution but the judgment. They produce outputs they don't fully understand, can't effectively debug, and couldn't replicate if the tool disappeared.
This isn't competence. It's the simulation of competence. And the gap between the two is where disasters live.
The Dunning-Kruger Amplifier
The Dunning-Kruger effect describes how unskilled people suffer from illusory superiority — they don't know enough to recognize their own ignorance. AI doesn't just enable this; it actively manufactures it.
Here's the pattern: someone uses AI to generate something that looks professional. They receive praise or see apparent success. They conclude they possess the underlying skill. When the AI produces garbage — which it will, unpredictably — they lack the expertise to recognize it.
The result is people confidently shipping broken things. Not because they're lazy, but because they genuinely don't know what they don't know.
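To make that failure mode concrete, here is a small hypothetical Python sketch (not from any real codebase) of the kind of code this pattern produces: it reads cleanly, passes a one-off smoke test, and carries a classic defect, a shared mutable default argument, that an experienced reviewer would flag on sight.

```python
# Plausible-looking helper: clean docstring, sensible name, "works" on first use.
def add_tag(tag, tags=[]):
    """Append a tag and return the tag list."""
    # BUG: the default list is created once at definition time,
    # so it is silently shared across every call that omits `tags`.
    tags.append(tag)
    return tags

# A quick smoke test succeeds, so the author ships it:
first = add_tag("urgent")        # returns ["urgent"], which looks correct

# But state leaks between unrelated calls:
second = add_tag("archived")     # returns ["urgent", "archived"]
                                 # `first` and `second` are the same list object

# What an expert writes instead: a None sentinel, allocated per call.
def add_tag_fixed(tag, tags=None):
    """Append a tag to `tags`, creating a fresh list if none is given."""
    tags = [] if tags is None else tags
    tags.append(tag)
    return tags
```

Nothing in the buggy version signals the problem to someone who has never been burned by it; the code only misbehaves once it is called more than once, which is exactly the "unpredictable garbage" the pattern describes.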
I've seen this in our own hiring. Candidates submit impressive portfolios — polished writing, clean code, thoughtful design. But ask them to explain their decisions and the facade crumbles. They can't articulate trade-offs because they never made them; they cycled through options they didn't understand until something looked right.
This isn't their fault. It's the predictable outcome of tools that optimize for output quality over user understanding.
The Expertise Paradox
Real expertise has a distinctive texture. It's not just knowing solutions — it's recognizing the full shape of a problem. Seeing patterns across domains. Having calibrated intuition about what will break. Understanding not just that something works but why, and under what conditions it stops working.
This kind of knowledge is slow to acquire. It requires getting things wrong, sitting with confusion, developing the scar tissue that teaches you what to fear.
AI short-circuits this process. It offers the solution without the struggle, the answer without the question, the destination without the journey that makes the arrival meaningful. And because the output looks correct, there's no signal that you're missing something crucial.
The paradox: the better AI gets at helping beginners, the harder it becomes for them to become experts.
What We're Actually Losing
Critics of vibe coding focus on quality — buggy software, security vulnerabilities, technical debt. These are real concerns. But they're symptoms of something larger.
What we're losing is the capacity for independent judgment. The ability to stand in front of a problem without a tool and think it through. The confidence that comes from having built understanding from the ground up rather than assembled it from outputs you didn't create.
This matters beyond professional competence. It shapes how people approach their lives. Do they understand their finances, their relationships, their own patterns of thought? Or have they developed sophisticated ways of appearing to understand while outsourcing the actual thinking?
Agency requires understanding. Not perfect understanding — but enough to know when you're out of your depth, when to trust your judgment, when to seek help. AI-assisted work without underlying comprehension strips away that self-awareness.
The Craftsmanship Response
There's a counter-movement emerging. Not Luddite rejection — that's neither possible nor desirable. Something more nuanced: intentional practice.
The recognition that some skills are worth developing the hard way. Not because difficulty is virtuous, but because the process of development builds capacities that can't be transferred any other way.
When we built Bifrost, we could have used AI to generate infinite lesson content. We tried it. The content was fine — grammatically correct, topically relevant, totally forgettable. What worked was the curation, the sequencing, the calibration of difficulty that kept learners in the zone of productive struggle.
The AI couldn't do that part because it required understanding the learner's mental model, not just their input history. It required the judgment that comes from having learned languages ourselves, having felt the specific confusion of grammar that breaks your native intuitions, having developed the taste to know when to push and when to hold space.
Tools amplify craft. They don't replace it.
The Path Forward
I'm not arguing against AI assistance. We use it constantly at Orochi. The question isn't whether to use these tools — it's how to use them without losing yourself in the process.
Some principles that guide us:
Understand before you outsource. Don't use AI to generate things you couldn't create yourself. Use it to accelerate things you already understand. The litmus test: could you explain this to someone else? Could you defend the choices? Could you fix it if it broke?
Preserve the struggle that shapes you. Not all difficulty should be eliminated. Some friction is the curriculum — the specific resistance that builds the skills you actually want. Be deliberate about which struggles you keep.
Value process over output. In a world where anyone can produce polished artifacts, the scarce resource is the thinking behind them. Optimize for understanding, not appearance.
Build the meta-skill of knowing your limits. The most dangerous person isn't the one who knows nothing — it's the one who knows enough to be confident but not enough to be right. Cultivate the humility to recognize when you're operating beyond your expertise.
The Uncomfortable Truth
There's no technological fix for this. Better AI won't solve the problem — it will deepen it. More guardrails won't help if people don't understand what they're guarding against.
The solution is cultural. A shift in what we value and celebrate. Away from the appearance of competence toward the reality of understanding. Away from output velocity toward judgment quality. Away from tools that do the thinking for us toward tools that help us think better.
This is harder than it sounds. The attention economy rewards visible output. Social media amplifies polished artifacts, not the messy process that created them. The incentive structure favors simulation over substance.
Changing that requires conscious choice. From individuals deciding what skills are worth developing the hard way. From companies building products that respect user growth, not just user engagement. From a culture that recognizes the difference between looking like an expert and being one.
Building for Understanding
At Orochi, this shapes how we approach every product. We're not trying to eliminate the need for human judgment. We're trying to sharpen it.
Lunora doesn't tell you what to feel — it helps you notice what you're feeling. Bifrost doesn't replace the work of learning — it makes the work more effective. Garnet and Emerald don't simulate connection — they create conditions for genuine connection to emerge.
The pattern: augment, don't replace. Preserve the human element that gives the activity meaning. Use AI for what it's good at — scale, pattern recognition, personalization — while keeping humans in the loop for what only they can provide: judgment, taste, values, genuine presence.
This is slower. It's harder to market. It doesn't produce the immediate dopamine hit of a tool that does everything for you.
But it produces something more valuable: people who actually understand what they're doing. Who can stand on their own when the tools fail. Who have developed the judgment to know when to use assistance and when to go deep.
That's the future we're building toward. Not one where AI makes humans obsolete, but one where AI helps humans become more fully themselves.
The alternative — a world of confident incompetence, where everyone produces and no one understands — isn't just risky. It's empty.
The tool is not the craft. The output is not the understanding. The appearance is not the reality.