On LLMs
AI tools are quickly changing many facets of our industry and our society, and they aren’t going away.
I’ve recently been experimenting more with LLMs for programming tasks, both at work and in my personal time. My relationship with these tools will certainly evolve over time, so I thought I’d log some early thoughts on and impressions of this transformative technology.
On using it
My goal is to be an effective engineer both with and without AI. I’m happy to integrate these new tools where they are effective, but I want to avoid needing them to be productive. (On this point, I wonder how future generations of engineers will fare.)
Engineering software is and will remain a craft, and like any craft, one’s skill can only be honed through an ongoing combination of study, practice, and application. Overreliance on AI risks the atrophy of one’s own abilities, so I feel it’s vital to strike a balance between leveraging LLMs and applying one’s own skills. LLMs have the potential to be both force multipliers and crutches for engineers, depending on how they’re used.
Where they help
I have definitely found LLMs useful for some tasks:
Code brush-ups. When I’ve written code that works but that I suspect could be written better, I’ll ask an LLM for suggestions. Sometimes their responses are entirely unhelpful and I disregard them; other times, they offer genuinely good improvements or ideas worth exploring. (Of course, I always double-check their work, and I properly investigate any new functions or concepts they present.) This is particularly helpful when learning new programming languages, and especially new paradigms (lately, functional programming for me).
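To make that concrete, here’s a made-up before-and-after in F#; the types and names are invented for illustration, not taken from a real exchange:

```fsharp
// Hypothetical types, purely for illustration.
type Membership = { DiscountRate: decimal }
type Customer = { Membership: Membership option }

// Before: the explicit, nested matching I might write on a first pass.
let tryGetDiscountVerbose (customer: Customer option) =
    match customer with
    | None -> None
    | Some c ->
        match c.Membership with
        | None -> None
        | Some m -> Some m.DiscountRate

// After: the tidier, combinator-based version an LLM might suggest.
let tryGetDiscount (customer: Customer option) =
    customer
    |> Option.bind (fun c -> c.Membership)
    |> Option.map (fun m -> m.DiscountRate)
```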
Converting code between languages. Just recently, as an experiment, I used LLMs to help rewrite a C# application in F#. I’ll write more in a future post, but the experiment was very successful, cutting the time required by an estimated 80% to 90%, though a fair amount of manual clean-up was still needed. (It would be foolhardy to do this without carefully reviewing the generated code.)
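To give a flavor of what the conversion involves (this snippet is invented, not taken from the actual application), imperative C# loops generally come out as declarative pipelines in F#:

```fsharp
// Invented example of the kind of translation involved.
// Imperative C# along the lines of:
//   var total = 0m;
//   foreach (var order in orders)
//       if (order.IsPaid) total += order.Amount;
// becomes a pipeline in F#:
type Order = { IsPaid: bool; Amount: decimal }

let totalPaid (orders: Order list) =
    orders
    |> List.filter (fun o -> o.IsPaid)
    |> List.sumBy (fun o -> o.Amount)
```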
Reference. While I aim to use proper references when I can, I’ve found LLMs make great supplementary reference tools, at least for strictly technical matters. They’re helpful more often than not, frequently providing more targeted summaries and explanations than regular search results do. They’ve also been massively helpful for breaking down some of the trickier concepts of functional programming and providing custom code samples for them; for that purpose, they’ve become rather indispensable.
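Function composition is a good example of a concept where a tiny custom sample (like this invented one) does more for me than a page of prose:

```fsharp
// Two small functions...
let add3 x = x + 3
let double x = x * 2

// ...composed left to right with >>: add 3 first, then double.
let addThenDouble = add3 >> double

printfn "%d" (addThenDouble 4)  // (4 + 3) * 2 = 14
```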
Test and test data generation. At least a couple of times, LLMs have saved me considerable trouble by quickly generating unit tests and sample data that would have taken a lot longer to put together manually. Again, all generated code is carefully checked and brushed up.
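Here’s an invented sketch of the sort of output I mean, assuming xUnit for the test framework:

```fsharp
open Xunit

// A trivial function under test, made up for this example.
let clampPercent (n: int) = max 0 (min 100 n)

// The kind of parameterized test an LLM can generate in seconds,
// including boundary values I might not bother writing out by hand.
[<Theory>]
[<InlineData(-5, 0)>]
[<InlineData(0, 0)>]
[<InlineData(50, 50)>]
[<InlineData(100, 100)>]
[<InlineData(120, 100)>]
let ``clampPercent keeps values within 0..100`` (input: int) (expected: int) =
    Assert.Equal(expected, clampPercent input)
```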
On this blog
I’ve seen internet users bemoan the increase of AI-written (or, at least, AI-augmented) content online. I share their consternation, but I think we all know that problem’s going to get much worse…
A small aside: somewhat amusingly, and annoyingly, it appears that many people now associate em dashes (“—”) with AI-generated content. That’s a shame since, as a former grammar dork, I’ve been using em dashes since I was a kid. (I haven’t used Windows in years, but I still remember the Alt code for an em dash (0151), and macOS makes them super simple to type, so I often do, because I’m like that.)
So, to clarify: I will never post AI-generated or AI-augmented articles on this blog. Any dashes (em, en, or otherwise!), typos, and other creative lexical errors are my own — for better or for worse. 😅 If there’s ever a need to include some kind of AI-generated content, I’ll label it as such.