> Tools like SourceFinder must be paired with education — teaching people how to trace information themselves, to ask: Where did this come from? Who benefits if I believe it?
These are important and relevant questions to ask whenever you read about anything, but we should also keep in mind that even those questions can be misused: they can drive people toward conspiracy theories.
We've been through many technological revolutions, in computing alone, over the past 50 years. The rate of progress of LLMs and AI in general over the past 2 years alone makes me think this may be an unwarranted worry, akin to premature optimization. It also seems rooted in a slightly out-of-date, human-scale understanding of the tech/complexity-debt problem. I don't really buy it. Yes, complexity will increase as a result of LLM use. Yes, eventually code will be hard to understand. That's a given, but there's no turning back. Let that sink in: AI will never again be as limited as it is today. It can only get better. We will never go back to a pre-LLM world, unless some catastrophe obliterates all technology. Today we can already grok nearly any codebase of any complexity, get models to write fantastic documentation, and have them explain the finer points to nearly anybody. Next year we might not even need to generate docs: the model built into the codebase will answer any question about it, and will semi-autonomously conduct feature upgrades or more.
Staying realistic, there are good reasons to believe that within the next 6-12 months alone, local, open-source models will match their bigger cloud cousins in coding ability, or come very close. Within the next year or two, we will quite probably see GPT6 and Sonnet 5.0 come out, dwarfing all the models that came before. With that, there is a high probability that any comprehension or technical debt accumulated over the past year or more will be rendered completely irrelevant.
The benefits of any development made until then, even sloppy development, should more than make up for the downside of tech debt or any kind of runaway complexity. Even if I'm dead wrong and we do hit a ceiling in LLMs' ability to grok huge or complex codebases, it is unlikely to appear within the next few months. Besides, behind closed doors the progress being made is nothing short of astounding. Recent research at Stanford might quite simply change all of these naysayers' minds.
Racket has a very nice built-in debugger in its DrRacket editor, with thread visuals and all. Too bad hardly anyone uses DrRacket, or Racket, anymore. Admittedly, even with the best debugger, finding the cause of runtime errors has always been a pain; hence everybody's moving toward statically compiled, strongly typed languages.
Agreed, but in the case of the lie detector it seems to be a matter of interpretation. In the case of LLMs, what is it? Is it a matter of saying "It's a next-word calculator that uses stats, matrices, and vectors to predict output" instead of "Reasoning simulation built on a neural network"? Is there a better name? I'd say it's "a static neural network that outputs a stream of words after consuming textual input, and that can be used to simulate, with a high level of accuracy, the internal monologue of a person thinking about and reasoning on that input". Whatever it is, it's not reasoning, but it's not a parrot either.
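To make the "next-word calculator" framing concrete, here's a toy sketch: a fixed weight table maps the current word to scores over a tiny vocabulary, a softmax turns scores into probabilities, and the next word is sampled. Everything here (the vocabulary, the hand-picked weights) is invented for illustration; a real LLM does the same loop with attention layers and billions of learned weights.

```python
import math
import random

# Toy vocabulary and a hypothetical, hand-picked weight table:
# rows = current word, columns = scores for each candidate next word.
VOCAB = ["the", "cat", "sat", "on", "mat", "."]
WEIGHTS = {
    "the": [0.0, 2.0, 0.0, 0.0, 2.0, 0.0],
    "cat": [0.0, 0.0, 3.0, 0.0, 0.0, 0.0],
    "sat": [0.0, 0.0, 0.0, 3.0, 0.0, 0.0],
    "on":  [3.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    "mat": [0.0, 0.0, 0.0, 0.0, 0.0, 3.0],
    ".":   [3.0, 0.0, 0.0, 0.0, 0.0, 0.0],
}

def next_word_distribution(word):
    """Softmax over the scores for the word that follows `word`."""
    logits = WEIGHTS[word]
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(start, n, seed=0):
    """Repeatedly sample the next word from the distribution."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        probs = next_word_distribution(out[-1])
        out.append(random.choices(VOCAB, weights=probs)[0])
    return " ".join(out)

print(generate("the", 5))
```

The point of the sketch is that nothing in the loop "reasons": it only turns the current context into a probability distribution and samples from it, yet with enough weights the sampled stream can read like an internal monologue.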
I personally use it for a lot of SwiftUI work, parallelized across at least 3 projects at once. I use only the largest models on their highest thinking modes, give instructions on implementation, and provide reference implementations.
I also use it for adding features and feature polish that address user pain points but that I can't yet prioritize for my own manual work. There are plenty of user requests that LLMs can sometimes knock out surprisingly fast when I give them a quick shot; for those tasks, it's OK to abandon or defer them if the LLM spins its wheels.
I made my own. I needed a calendar that showed every todo item per day, and a text editor to edit the tasks just like in a todo.txt. I used it all day, every day, for over 15 years. I still have it installed on nearly all my Windows systems, just because it opens instantly and has priorities and colors. I also used it to produce reports for work, so I eventually added HTML export to paste directly into an email.
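The priority-to-color HTML export described above could look something like this minimal sketch. The priority syntax follows the common todo.txt convention of a leading `(A)`; the color mapping and task lines are hypothetical, not the commenter's actual tool.

```python
import re
import html

# Hypothetical mapping from todo.txt priority letters to report colors.
PRIORITY_COLORS = {"A": "red", "B": "orange", "C": "green"}

def parse_task(line):
    """Split a todo.txt-style line into (priority, text); priority may be None."""
    m = re.match(r"\(([A-Z])\)\s+(.*)", line)
    if m:
        return m.group(1), m.group(2)
    return None, line

def export_html(lines):
    """Render tasks as a colored HTML list, ready to paste into an email."""
    items = []
    for line in lines:
        pri, text = parse_task(line)
        color = PRIORITY_COLORS.get(pri, "black")
        items.append(f'<li style="color:{color}">{html.escape(text)}</li>')
    return "<ul>\n" + "\n".join(items) + "\n</ul>"

tasks = ["(A) Finish report", "(B) Email client", "Buy milk"]
print(export_html(tasks))
```

Inline styles (rather than a stylesheet) are the pragmatic choice here, since most email clients strip `<style>` blocks from pasted HTML.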