The code worked. Everything looked good, but we still didn't select the candidate!

I was conducting an interview. Coding round. Told them they could use Cursor or Claude, anything they wanted.

They got AI to generate a 45-line function. It worked. And then I asked them to walk me through it.

They couldn't explain it. That same function? Could've been written in 4 lines. But they didn't know!

I see this in my own work too. Last week I was debugging something. Pointed AI at the bug. It rewrote the entire function with a "fix." I questioned the approach. AI said "you're absolutely right" and changed direction completely. I questioned again. Another complete pivot. The actual fix was just 2 lines!

After making AI part of my daily routine, this is what I have observed: AI doesn't remove the need to understand your code. It hides it.

You can ship features without reading a single line. And it'll work. Until it doesn't.

Until there's a production bug at 2am and you're staring at code you don't recognize. Until someone asks "why did you do it this way?" and you don't have an answer. Until you've built a whole codebase of working code that nobody on the team actually understands.

The engineers who'll win in the next 5 years aren't the ones who prompt the best. They're the ones who can look at 45 lines of AI-generated code and say "this should be 4."

They read. They question. They know when something smells off.

Fundamentals didn't become optional. They became the whole game.

P.S. This GitHub Recap has nothing to do with the post!
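To make the 45-versus-4 point concrete, here is a hypothetical sketch (in Python, since the post names no language). The function and its bloat are invented for illustration, not taken from the interview; it just shows the shape of the pattern: manual loops and flag variables where one idiomatic line does the job.

```python
# Hypothetical illustration only (not the candidate's actual code):
# the kind of bloated de-duplication routine an AI assistant often produces.

def unique_items_verbose(items):
    """AI-style version: manual loops, a flag variable, duplicated state."""
    seen = []
    result = []
    for item in items:
        already_seen = False
        for existing in seen:
            if existing == item:
                already_seen = True
                break
        if not already_seen:
            seen.append(item)
            result.append(item)
    return result


def unique_items(items):
    """Same behavior in one line (assuming hashable items):
    dict keys are unique and preserve insertion order."""
    return list(dict.fromkeys(items))


# Both return the unique items in first-seen order.
assert unique_items_verbose([3, 1, 3, 2, 1]) == unique_items([3, 1, 3, 2, 1]) == [3, 1, 2]
```

The specific function doesn't matter; the point is that you can only write the second version if you can actually read the first one.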
AI shifts the burden from "how to write it" to "how to understand it." The speed gain is undeniable, but it comes with the technical debt of hidden complexity and a degraded sense of code ownership. I genuinely think the best engineers won't just prompt well; they'll be the ones who still ask "why."
AI tools are created to agree with everything you say. Even if you prompt them to think and act a particular way, they still end up agreeing and pivoting a hundred times. But if you understand the concepts, you can figure out what's right or wrong. If not, it's going to be difficult just blindly going by whatever the AI says (and this is the majority today). Because they have one rule: always make the user seem right. And if the user is wrong? Refer to rule 1 again.
Yogini Bende, a sharp observation! AI can accelerate coding, but understanding fundamentals is what separates great engineers from "AI operators." The real skill is questioning, simplifying, and owning the logic behind every line...
See…wow, I can't love this pattern. The "ship it" velocity mindset is part of the "just hit refresh" world. I have serious concerns that the Vibe Coding thing will get hooked to a foot pedal, and PR approval to a different foot pedal; then a genetic algorithm will do this in parallel variants, and another set of AIs will judge the results…and the customer will live in the Uncanny Valley of "is this site working? Did my bill ACTUALLY get paid?" Does anyone remember the cab driver in Avenue Five, who was reading a newspaper? I don't want to be the "Programmer" on the foot pedal.
Curious question - AI is trained on data created by human coders, and many orgs are still stuck measuring and rewarding devs on LOC. Could that be at play here? Perhaps the training data contains lots of overly bloated code written by devs trying to game the system.
I couldn't agree more. And the challenge for all the juniors out there is to balance going fast with LLMs against learning enough to know when to stand in the LLMs' way - or when it'll be faster to just write the code yourself.
You are absolutely right. I have also seen some interns on my team pushing AI-generated code without even testing it locally. To be honest, reviewing AI-generated PRs is incredibly frustrating.
The uncomfortable truth: This problem didn't arrive with AI. It arrived with "it works, ship it." AI merely accelerated the velocity of ignorance-accumulation. Stack Overflow copy-paste built the same debt, just slower. Framework magic hid the same gaps. What's rather fascinating: Fundamentals were always the whole game. The industry convinced itself otherwise for two decades. AI stripped that illusion bare in 18 months.
Code, whether written by a human or an LLM, should be readable, maintainable, and scalable. It feels like we may need basic code literacy for the new wave of 'vibe-code' engineers.
If you are fixing bugs without writing test cases to reproduce them, that's your problem. I bought a new knife and cut myself. I didn't blame the knife, though.
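To make that concrete, here is a minimal sketch of "reproduce the bug with a test first." The `apply_discount` function and its bug are invented for illustration; the test fails against the buggy version and passes once the one-character fix lands.

```python
import pytest


def apply_discount(price, percent):
    """Hypothetical buggy function: `//` truncates the discount amount."""
    return price - price * percent // 100  # bug: should be /, not //


def test_ten_percent_off_19_99():
    # Step 1: write a test that reproduces the bug report before touching
    # the code. Against the buggy version this fails (18.99 != 17.991);
    # after changing // to /, it passes and stays as a regression guard.
    assert apply_discount(19.99, 10) == pytest.approx(17.991)
```

Run it with `pytest`: the failing assertion is the reproduction, and only then does fixing the code become step 2, whether a human or an AI writes the fix.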