
The LLM Dilemma in Software Development

Hey there, fellow coders! I’ve been thinking a lot about how GPT and other LLMs are shaking up our world of software development. It’s exciting stuff, but it’s also got me a bit worried. Let me share my thoughts with you.

Remember the days before these AI assistants came along? We’d spend hours scouring Stack Overflow, poring over official documentation, and reading countless tech blogs. It wasn’t always quick, but man, did we learn a lot along the way. There was something satisfying about digging deep into a problem, finding a solution, and knowing exactly where it came from. Plus, when the boss asked how we figured something out, we could point to a solid source. Those were the days, right?

Fast forward to now

The landscape has changed dramatically. Many of us, myself included, find ourselves turning to ChatGPT or Claude at the first sign of a coding hiccup. We type in our problem, copy-paste the answer, and cross our fingers that it’ll work. It’s quick, it’s easy, and sometimes it feels like magic. But here’s the thing: it’s not always the silver bullet we hope for.

Don’t get me wrong, there are times when these AI assistants are absolute lifesavers. They’re great for those little day-to-day coding tasks that we all face. Need to refactor a simple class? LLMs have got your back. Looking for an efficient way to traverse some data? They can whip up an algorithm in seconds. For these kinds of tasks, where we can easily verify the output and understand what’s going on, LLMs can be a real boost to our productivity.
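
To make that concrete, here’s the kind of snippet I mean: a throwaway Java helper (the names are mine, purely for illustration) that an LLM can whip up in seconds and that you can verify at a glance.

```java
import java.util.*;

public class Dedupe {
    // Remove duplicates from a list while preserving first-seen order:
    // LinkedHashSet keeps insertion order and silently drops repeats.
    static <T> List<T> dedupe(List<T> items) {
        return new ArrayList<>(new LinkedHashSet<>(items));
    }

    public static void main(String[] args) {
        System.out.println(dedupe(Arrays.asList(3, 1, 3, 2, 1))); // prints [3, 1, 2]
    }
}
```

If the whole answer fits on one screen and you can trace every line, the risk of being misled is pretty low.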

But here’s where it gets tricky

These models have a tendency to be a bit too… optimistic. They’ll often confidently tell you that yes, you can definitely do that thing you’re asking about, even when the reality is a firm “nope.”

◇ Personal Experience: I recently caught one telling me I could commit a specific, defined number of messages pulled from an MQ server, rather than everything I had consumed. Spoiler alert: you can’t. The official docs confirmed it. This kind of misinformation can send you down a rabbit hole of troubleshooting non-existent solutions, wasting precious time and energy.
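
To show what I mean, here’s a minimal sketch using the JMS API (I’m assuming an ActiveMQ broker here; the URL and queue name are placeholders). A transacted session’s commit() applies to every message consumed in the transaction; there’s no variant that commits only the first N of the messages you pulled.

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class CommitSemanticsDemo {
    public static void main(String[] args) throws JMSException {
        // Placeholder broker URL and queue name; swap in your own provider.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            // A transacted session: commit()/rollback() applies to the whole
            // transaction, not to a count of messages you pick afterwards.
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageConsumer consumer = session.createConsumer(session.createQueue("DEMO.QUEUE"));

            for (int i = 0; i < 5 && consumer.receive(1000) != null; i++) {
                // Each receive() adds the message to the current transaction.
            }

            // This commits EVERY message consumed since the last commit()/rollback().
            // There is no commit(n) that acknowledges only some of them.
            session.commit();
            session.close();
        } finally {
            connection.close();
        }
    }
}
```

Even in CLIENT_ACKNOWLEDGE mode, calling acknowledge() on one message acknowledges all messages the session has consumed so far. That’s exactly the kind of detail the official docs spell out and the LLM confidently glossed over.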

And let’s talk about those times when the initial answer doesn’t quite work. You paste in the error message, and the AI apologizes, offers a new solution, and the cycle continues. Before you know it, you’ve spent an hour in this back-and-forth dance, either ending up with something that sort of works or giving up entirely. It’s frustrating, and it’s not always the most efficient use of our time.

◇ Another Tale: When I tried to understand how JMS works so I could replicate it in Go for some legacy code, oh boy, did I get a lot of wrong answers. And I wasted so much time. Eventually, I found the official documentation. It wasn’t easy to understand at first, but it paid off in the end: a couple of good learning hours did the trick, and not only did I solve the issue, I now also understand the details of JMS.

There’s also a deeper issue at play here. These models, as impressive as they are, lack true imagination. They’re essentially remixing existing knowledge, which means they’re not great for brainstorming genuinely new ideas or solving unique, complex problems. When we rely too heavily on them, we risk stunting our own creative problem-solving skills.

So, what’s the takeaway here?

I believe we need to find a balance. Use LLMs for what they’re good at – quick answers to straightforward problems, simple refactoring tasks, and generating boilerplate code.

Tip: My rule of thumb: if you can give the answer a quick code review and catch its problems yourself, go ahead and use the code. Otherwise, it’s a big no.

But when it comes to critical, production-ready code or complex, domain-specific issues, it’s still crucial to rely on official documentation, peer-reviewed solutions, and good old-fashioned problem-solving skills.

Let’s not forget the value of truly understanding our code and where our solutions come from. While LLMs can be powerful tools in our developer toolkit, they shouldn’t replace our ability to research, learn, and critically evaluate solutions. Keep those coding skills sharp, folks, and don’t be afraid to dive deep into documentation when the situation calls for it.


What are your thoughts on this? How are you navigating the world of AI-assisted coding? I’d love to hear your experiences and strategies for using these tools effectively while avoiding their pitfalls.

Oh, and if you’re interested in diving deeper into the limitations of LLMs, I recently read a fascinating article on the topic. Check it out here. It’s definitely worth a read if you want to understand more about the current state of AI in software development.

Happy coding, everyone! And remember, while AI can be a great assistant, your brain is still the best IDE out there.
