LLM Update
I don’t use Large Language Models (LLMs) to write articles on my blog, because I believe it’s important to let my own voice be heard unaltered. To improve and stay authentic, there is no substitute for writing and thinking myself.
I do use various commercial and open-weight LLMs to test ideas and to critique drafts, though. I also use them in “deep research” mode, as a semantic search engine to gather and structure source material. In these use cases, I find that they improve my productivity considerably.
I use LLMs extensively for coding, but not for “vibe coding”. As of 2026, I still review every line of generated code. This is mainly to prevent technical and cognitive debt stemming from code I do not fully understand; there are also security risks I’m not willing to take.

I find LLMs useful, although not perfect, for generating complex, algorithm-heavy code that would take me much longer to write on my own. I also use LLMs to quickly generate architectural overviews of large codebases, which I find valuable, even though I need to check carefully for factual errors. The same applies to using LLMs to review code for defects and security problems: at this stage they can augment, but not replace, human work. As generating code is now essentially free, I use coding LLMs heavily for exploration, e.g. for one-off scripts and interactive visualizations, to learn a new concept or algorithm, or to prove out an idea.