It has been wild to realize I've now seen one full technology cycle, from thin client to thick client and back to thin client again. Maybe PHP this time around can be more robust with the lessons learned.
HTMX has given me so much joy. I love Django and was there near the beginning. For a while I thought I had to switch to something like FastAPI and Vue to build relevant web apps and sites, but with Django recently adding async support, plus Django Ninja and HTMX, I'd now reach for Django for almost anything besides a few specific use cases.
So many problems I've run into with newer tools feel like they were already solved years ago if you can SSR.
Not that the newer tools don't have their place, and they have plenty of good ideas, but it's been fun to see Django stay relevant and to watch more of the included batteries become useful again (forms, templates, etc.).
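As a concrete sketch of what I mean (everything here is made up for illustration): an HTMX attribute on the page polls a plain Django view, and the server just renders a template fragment. No client-side state, no JSON layer.

```python
# views.py -- hypothetical example; the Task model and template path are invented.
from django.shortcuts import render

from .models import Task


def task_rows(request):
    # Render only the row fragment; HTMX swaps it into the page.
    # Page side would be something like:
    #   <tbody hx-get="/tasks/rows/" hx-trigger="every 30s">
    tasks = Task.objects.order_by("-created")[:20]
    return render(request, "tasks/_rows.html", {"tasks": tasks})
```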
I've been finding a ton of success lately with speech-to-text as the user prompt, then using https://continue.dev in VSCode, or Aider, to supply context from files in my projects and to run the inference.
I'm trying to figure out how to build a "Context Management System" (as opposed to a Content Management System) for all of my prompts. I completely agree with the premise of this article: if you aren't managing your context, you lose all of the context you create every time you start a new conversation. I want to collect the reusable blocks from every conversation I have, as well as from my research and reading around the internet. Something like a mashup of Obsidian with some custom Python scripts.
The ideal inner loop I'm envisioning: create a "Project" document that uses Jinja templating to transclude a bunch of other context objects (code files, documentation, articles, plus my own prompt fragments) and composes them into a master document that I can "compile" into a "superprompt" with the precise context I want for every prompt.
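A minimal sketch of that compile step, assuming Jinja2 and a made-up ./context directory layout (every filename here is illustrative):

```python
# compile_superprompt.py -- a minimal sketch; all paths and names are invented.
from pathlib import Path

from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("context"))


def include_file(path: str) -> str:
    """Transclude a raw file (code, docs, an article) into the prompt."""
    return Path(path).read_text()


env.globals["include_file"] = include_file

# context/project.md.j2 might look like:
#   {{ include_file("src/models.py") }}
#   {% include "fragments/django_conventions.md" %}
#   Implement the feature described in the spec above.
superprompt = env.get_template("project.md.j2").render()
Path("superprompt.md").write_text(superprompt)
```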
Since chat interfaces just resend the entire previous message history on every turn anyway, I don't really want a chat-style interface so much as to "one-shot" the next step in development.
It's almost a turn-based game: I fiddle with the code and the prompts, then hit "end turn," and now it's the LLM's turn. On its turn, it compiles the prompt, runs inference, and outputs the changes; with Aider it can actually apply those changes itself. I then review the diffs, make my own changes, and that's one full turn of the game of AI-assisted coding.
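The whole turn could be a single script. Here's a rough sketch, leaning on the compile step above; I believe Aider's --message flag runs one instruction non-interactively and exits, but verify against your version:

```python
# end_turn.py -- one "turn": compile the superprompt, let Aider apply changes.
# Assumes compile_superprompt.py from the earlier sketch; file names are invented.
import subprocess
from pathlib import Path

subprocess.run(["python", "compile_superprompt.py"], check=True)
prompt = Path("superprompt.md").read_text()
subprocess.run(["aider", "--message", prompt, "src/models.py"], check=True)
# Back to my turn: `git diff` to review before starting the next one.
```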
I love that I can just brain-dump into speech-to-text; LLMs don't really care that much about grammar and syntax. I can curate fragments of documentation and specifications for features, rant and rave about what I want for a while, and paste that into the chat. With my current LLM of choice being Claude, it works really quite well.
My Django work feels like it's been supercharged with just this workflow, and my context management engine isn't even really that polished.
If you aren't getting high-quality output from LLMs, definitely consider how you're supplying context.
Thank you so much for writing this. I recently discovered that I have both autism and ADHD, and increasingly it feels like this mind style has a steep, counterintuitive learning curve but also a very high skill ceiling.
The video game analogy rings very true for me. It helps a lot to read articles like yours because they give me new ideas to try. I fully agree with your premise, and I have indeed been experimenting with card-based systems, but I've been frustrated by how, as you noted, having to remake the cards every day basically means I'll eventually stop doing it. The receipt printer is a fantastic idea; making mental-only systems physical seems to invoke the spatial parts of the brain. I've been trying to find good ways to synchronize my mental, digital, and physical information. I'd love to read more of your ideas if you publish anything else on your mailing list. Cheers
First of all, condolences to your friend and cool of you to look into this.
Back in the day I switched to Dvorak and came across "one-handed Dvorak" layouts; that may be what you're referring to. I haven't tried them much, but they could be a temporary solution. I actually found QWERTY a lot easier to type one-handed as-is, because Dvorak deliberately alternates hands between keys.
I recently discovered Talon, an open source app for voice control of basically everything on a machine, requiring no typing at all. Some people use it as a power tool even though they can use their hands just fine. It appears to be fully Python-scriptable and also gives you some nice speech-to-text abilities.
It lets you specify a bunch of keywords for typing symbols, and it looks like some people can do full coding quite quickly with it.
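For flavor, here's roughly what the Python side of that looks like, modeled on the common community configs; treat the exact list names as my assumption rather than Talon's documented API:

```python
# symbols.py -- rough sketch of Talon's Python scripting, modeled on the
# community configs; the list name and entries here are my own invention.
from talon import Context, Module

mod = Module()
mod.list("symbol_key", desc="Spoken words that type programming symbols")

ctx = Context()
ctx.lists["user.symbol_key"] = {
    "arrow": "->",
    "walrus": ":=",
    "double colon": "::",
}
# A .talon rule like `{user.symbol_key}: insert(symbol_key)` then binds
# each spoken word to the text it inserts.
```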
Perhaps this injury could be an opportunity to try something like this and become more powerful than before?
Talon is not open source as far as I know; it's freeware with Patreon early access and support. The community plugins cover a wide range of applications and are easy to modify. I also found their Slack good for discussing accessibility options like gaze tracking. It looks like development has slowed significantly, but the developer recently rewrote the core in Rust.