I don't understand; Copilot's agent mode will search for and be pretty good at filling its own context, afaik. I never really feed any of our 100k+ line legacy codebase explicitly to the LLM.
I really need this in my life. Once upon a time, things were good and our Chromecast with Google TV knew _exactly_ how to turn on our soundbar, set our TV to output sound to said soundbar, and control the volume on that soundbar using IR.
Now absolutely none of that works. The audio output on the TV is set seemingly semi-randomly depending on the content!? The volume controls just stopped working, and I cannot FIND THE SETTINGS in the menus. Either you have to completely redo the remote setup to see those settings, OR, as I rather suspect, they broke this shit on purpose to get us to buy a new Google TV Streamer.
I feel very strongly, after 20+ years of development, that DRY is a good guideline, but I have also seen many, many times that trying to follow it to the letter is actually detrimental and results in overly complex solutions.
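A small illustration of what I mean (all names invented): two rules that merely look similar today, forced through one "generic" helper to satisfy DRY, end up coupled and harder to change than a little duplication would have been.

```cpp
#include <string>

// Before: a little duplication, but each rule can evolve independently.
bool isValidUserName(const std::string& s)  { return !s.empty() && s.size() <= 32; }
bool isValidGroupName(const std::string& s) { return !s.empty() && s.size() <= 32; }

// After "DRY to the letter": one shared function plus a kind flag.
// The moment user names need an extra rule (say, no spaces), the shared
// helper grows branches and every caller must reason about both cases.
enum class NameKind { User, Group };
bool isValidName(const std::string& s, NameKind kind) {
    if (s.empty() || s.size() > 32) return false;
    if (kind == NameKind::User) {
        return s.find(' ') == std::string::npos;  // user-only rule creeps in
    }
    return true;
}
```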
This is one area I do use LLMs for: writing small utils and test code, very often in languages I seldom even touch, such as C# and Rust. To me, this seems like the core use case for a tool that is awesome at translating!
Is it just me, or should they have just reverted instead of making _another_ change as a result of the first one?
ALSO, it is very, very weird that they had not caught this seemingly obvious bug in proxy buffer size handling. It suggests that change #2, done in "reactive" mode after change #1 broke shit, HAD NOT BEEN TESTED AT ALL! Which is the core reason they should never have deployed it, but rather reverted to a known good state and then tested BOTH changes combined.
If the reticulum code is worse than the meshtastic one, then it is truly atrocious. I've been trying to get a specific board to simply "sleep" its radio using meshtastic, and nobody seems to know WHY it doesn't do it. The code is horrible spaghetti with lots of ifdefs, and nobody seems to know why things are the way they are in the code re: power handling. ChatGPT wrote me a brute-force method that works, but it's ugly and I don't want to maintain patches.
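For context, the "brute force" approach basically amounts to bypassing the firmware's power-handling logic and talking to the radio driver directly. This is not the meshtastic code path, just a hedged sketch of what that can boil down to with RadioLib (which meshtastic builds on) on an SX1262-class board; the pin numbers are placeholders and depend entirely on your board variant.

```cpp
#include <RadioLib.h>

// Placeholder pins (cs, irq, rst, busy) - check your board's variant/pin map.
SX1262 radio = new Module(8, 14, 12, 13);

void setup() {
  Serial.begin(115200);
  int state = radio.begin();      // bring the transceiver up with defaults
  Serial.printf("begin: %d\n", state);
  state = radio.sleep();          // force the radio into sleep mode
  Serial.printf("sleep: %d\n", state);
}

void loop() {}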
But it is fairly easy to hack on. I have no idea how to debug things without USB serial connected, though.
Sorry, can't really compare because I've never had to suffer looking at meshtastic source code. Quite tempted at this point to just throw the Python implementation of reticulum at Claude and see if a validated port to C++ is possible.
Maybe a bit off-topic and not LoRa, but I've been looking at the ESP32, which includes ESP-MESH for the WiFi radio with a promised range of about 500 to 1000 meters, from what I read. It isn't the same range as LoRa, but it has "larger" bandwidth, and at 3 dollars per unit it seems promising for connecting people in urban areas. I'm trying it out now.
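If anyone wants to try something similar: a minimal node sketch using the community painlessMesh Arduino library (not Espressif's ESP-MESH/ESP-WIFI-MESH itself, but a common way to prototype this kind of node-to-node range test). The SSID, password, and port below are placeholders.

```cpp
#include <painlessMesh.h>

// Placeholder mesh credentials - every node must use the same values.
#define MESH_PREFIX   "testmesh"
#define MESH_PASSWORD "changeme123"
#define MESH_PORT     5555

Scheduler userScheduler;
painlessMesh mesh;

void receivedCallback(uint32_t from, String &msg) {
  Serial.printf("node %u -> %s\n", from, msg.c_str());
}

void setup() {
  Serial.begin(115200);
  mesh.init(MESH_PREFIX, MESH_PASSWORD, &userScheduler, MESH_PORT);
  mesh.onReceive(&receivedCallback);
}

void loop() {
  mesh.update();                    // keep the mesh stack running
  static uint32_t last = 0;
  if (millis() - last > 5000) {     // broadcast a ping every 5 seconds
    mesh.sendBroadcast("ping from " + String(mesh.getNodeId()));
    last = millis();
  }
}
```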
This pretty much sums up my vibe-coding experience as well. I have been doing many small pet tool/util projects at work, and after a few thousand LoC I am always very detached and have a hard time seeing whether the LLM is off track or not. At that point I often try to get it to refactor the code aggressively, especially to find duplicated things.