Reflections on Code Generation, Understanding and the LinkedIn hype machine
A few days ago, I shared an idle thought on LinkedIn about software engineers and AI tooling. I suggested that whilst LLMs might make us faster typists, they don't address the real bottleneck in software development: the time it takes to understand problems and their solutions. The response was unlike anything I'd experienced on LinkedIn before: 300+ comments in which (mostly) thoughtful engineers weighed in with their experiences, concerns, and insights.
You can see the original post here: https://www.linkedin.com/posts/jackson-bates_software-engineers-arent-slowed-down-by-activity-7331804344041848832-H9YM
I want to especially call out that my post uses the term LLM (Large Language Model) exclusively to discuss this issue, as did all of my replies to others in the comments. But since so many of the commenters fall back on calling this 'AI', I'm adopting a similar looseness throughout this blog post. For the record, though, I do not think of LLMs as AI.
What struck me most wasn't the predictable divide between AI enthusiasts and skeptics, but the nuanced middle ground that emerged. Many engineers are wrestling with how to integrate these new tools responsibly whilst maintaining the craft and rigour that defines good software development.
The Boilerplate Consensus #
Perhaps the strongest theme was agreement that LLMs excel at eliminating the tedium of boilerplate code and repetitive tasks. Sean Curtis captured this well: "I'll never code without AI again. I love it for refactoring code when getting breaking changes, for writing build tools that take a complex JSON file and output a simpler one."
I don't think this is controversial. Few would argue that manually typing out standard CRUD operations or configuration files is where we add our greatest value. What's more interesting is how engineers are using this freed-up time. Some focus more on business logic, others on system design. For most, the purported productivity gain isn't just about speed; it's about reduced cognitive load.
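To make the kind of task concrete: Sean's example of a build tool that takes a complex JSON file and outputs a simpler one is exactly the sort of script that is tedious to write by hand but easy to review once generated. A minimal sketch of such a tool might look like this (the field names "services", "name", "endpoint", and "enabled" are invented purely for illustration):

```python
# Hypothetical throwaway build tool: read a verbose JSON config and
# emit a flattened, simpler one for downstream tooling.
import json
import sys


def simplify(raw: dict) -> dict:
    """Keep only the fields we actually need: a name -> endpoint map
    for every enabled service."""
    return {
        svc["name"]: svc["endpoint"]
        for svc in raw.get("services", [])
        if svc.get("enabled", True)
    }


if __name__ == "__main__":
    # Usage: python simplify_config.py complex-config.json > simple-config.json
    with open(sys.argv[1]) as f:
        complex_config = json.load(f)
    json.dump(simplify(complex_config), sys.stdout, indent=2)
```

Nothing here requires deep thought, which is rather the point: it's the sort of code where having an LLM produce the first draft costs you little understanding.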
The "Sparring Partner" Phenomenon #
One of the most compelling perspectives, at least from my mostly skeptical starting position, came from engineers who described using LLMs as intellectual sparring partners rather than code generators. Mo Kargas articulated this succinctly: "I have some models set up to argue merits and pros and cons with me (like you would pair programming with another senior). This often helps zero in on optimal solutions."
This use case intrigued me. Rather than asking AI to solve problems, these engineers are using it to challenge their own thinking, explore alternative approaches, and stress-test their assumptions. It's collaborative rather than delegative (is that a word?). The human remains firmly in control of the architectural decisions.
I still keep in the back of my mind the reality that these models are not thinking, comprehending, or genuinely discussing. An easy thing to forget when an LLM can reply quite eloquently. With that lens, I find this phenomenon hard to square. But on the other hand, I and many others have been successfully using a rubber duck as a partner for years - so is an LLM much of a stretch?
The "Dopamine-Driven Development" Warning #
Mo Kargas also coined what might be my favourite phrase from the entire discussion: "dopamine-driven development." He described junior engineers who might accept LLM-generated solutions without understanding them, simply because getting code that "mostly works" triggers that satisfying dopamine hit.
This concern resonated throughout the thread. Several engineers shared horror stories of teams shipping unvetted AI-generated code, or watching colleagues become over-reliant on tools they didn't truly understand. Fernando Jimenez put it bluntly: "The last thing the world needs is a bunch of randoms using 'vibes coding', installing, downloading, using, and creating packages from god knows where without any real vetting or oversight."
The Learning Accelerator Argument #
Another counter-argument to my original skepticism came from engineers who use LLMs primarily for learning and understanding. Daniel Methner claimed he learns more about technologies and libraries with Gemini in a day than he would reading documentation in a week. Alexander Schrab noted how AI can speed up navigation of large, unfamiliar codebases by pointing you in the right direction.
Again, I find this hard to square, since LLMs' grasp of up-to-date documentation is regularly quite poor in my experience. But many are clearly finding them more useful than the real docs, and I don't want to dismiss the validity of that experience.
Patrick Sheehan offered a particularly thoughtful perspective: "LLMs aren't a way to skip the hard part. They're a way to go deeper, faster. They meet you where you are, no matter how basic the question."
This suggests that rather than replacing understanding, well-used AI tools might actually accelerate it for some people. The key phrase is "well-used". These engineers aren't accepting AI output blindly; they're using it as a research tool to build their comprehension more quickly.
The Craft vs Efficiency Tension #
One of the most poignant moments came from Charles B., a retired engineer who asked: "Do any of you actually enjoy coding? From all the stuff I read here on LinkedIn, it seems as if coding is increasingly being viewed as a necessary evil to be done as quickly as possible." I have to say this really resonates with me, as someone who transitioned into a programming career because of how much I enjoyed the actual programming. As an Engineering Manager I don't get to be on the tools as much any more, so I'm a bit guilty of looking at code as a means to an end now - shipping features in a competitive market.
This touched on something deeper than productivity metrics. Steve Chandler echoed this concern, pushing back against what he saw as "efficiency at all costs" thinking that ignores the human elements of software development.
There's a genuine tension here between viewing code as craft - something to be savoured and perfected - and viewing it as a means to an end that should be optimised for speed. Both perspectives have merit, and I suspect the answer varies by context, experience level, and individual temperament.
The "Intern, Not Consultant" Distinction #
Nathan Gasser provided his own framework for thinking about appropriate AI use: "I use AI as an intern, not a consultant. You give an intern something you know how to do but don't have time. You give a consultant something you don't know how to do and need their expertise."
This distinction is interesting. Using AI for tasks within your competency whilst maintaining oversight makes sense. Delegating unfamiliar or complex problems to AI without understanding the domain is where trouble begins. My original post was very much a reaction to all the hype I see around the other position, i.e. assuming LLMs can simply act as a consultant delivering code to a particular spec. (As it happens, I have some opinions about human consultants, too...but I might save those for the next time I want half of LinkedIn to descend on me.)
My Evolving Perspective #
Reading through these responses, I'm struck by how many thoughtful engineers are finding genuine value in AI tooling whilst maintaining appropriate skepticism. The most compelling use cases weren't about raw code generation, but about learning acceleration, idea exploration, and eliminating tedious work that adds little value.
I remain skeptical of "vibe coding". But I'm increasingly convinced that dismissing all AI tooling would be as simplistic as embracing it uncritically. The engineers getting the most value seem to be those who understand both the capabilities and limitations of these tools, and who maintain human judgement at the centre of their development process.
The real question isn't whether AI makes us faster. Maybe the question is whether we're using that speed to do better work or simply to do more work. The engineers in this discussion who impressed me most were those using AI to deepen their understanding, explore more possibilities, and focus on higher-value problems. They're not trying to replace thinking with automation.
That's a distinction worth preserving as these tools continue to evolve.