alfonsodev's comments | Hacker News

I use llm from the command line too, from time to time; it's just easier to do

llm 'output a .gitignore file for typical python project that I can pipe into the actual file ' > .gitignore
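
A handy variant (assuming Simon Willison's llm CLI here; its -s flag sets a system prompt, which helps keep the reply pipe-clean):

  llm -s 'Output raw file contents only, no markdown fences or commentary' \
    'a .gitignore for a typical Python project' > .gitignore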


How do you handle interaction?

No interaction is built in for this kind of simplified use-case; it's just like one of the old "template engines" of the old days, just in JSX/TSX. It's actually much better than expected. I used to dislike that all the old template engines had something "off" for me: either they invented their own syntax for logic that you needed to learn besides normal JS (think Handlebars, Pug, etc.), or they were JS-like but with an odd HTML syntax that made sharing between plain HTML and whichever language very hard (think Pug/Jade).

With JSX templating, it's a subset of React, so you can directly share "up", and sharing "down" is very easy as well (just removing interaction), since both use the same syntax.
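
For anyone curious, a minimal sketch of this style of templating using React's own static renderer (the engine above may work differently):

  import { renderToStaticMarkup } from "react-dom/server";

  // A pure template component: logic is plain JS, markup is JSX,
  // and there are no event handlers to strip.
  function UserList({ users }: { users: string[] }) {
    return (
      <ul>
        {users.map((u) => (
          <li key={u}>{u}</li>
        ))}
      </ul>
    );
  }

  // Produces plain HTML with no React runtime attached.
  console.log(renderToStaticMarkup(<UserList users={["ada", "grace"]} />));
  // -> <ul><li>ada</li><li>grace</li></ul>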


I wrote something like this too. If I need interaction, I did something with the onclicks so it just sends the function definition to the client and calls that. It's not as powerful as React, but you can do basic stuff. It's good if your site is mostly static.
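
A sketch of how that trick can work (hypothetical helper, not the parent's actual code; it only works for self-contained handlers that don't close over server state):

  // Serialize a handler's source into an inline onclick attribute
  // during server-side rendering. Real code would also HTML-escape it.
  function inlineHandler(fn: (ev: Event) => void): string {
    // The browser re-parses the function text and calls it with the event.
    return `(${fn.toString()})(event)`;
  }

  // e.g. <button onclick="${inlineHandler(() => console.log('hi'))}">Hi</button>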

I was seriously thinking about doing that, but I think I prefer (for now) explicitly not having events rather than having events that work kinda similarly but not the same. Did you end up publishing it? Would love to have a look!

I overlooked Astro for a long time, I didn't really get it, and my journey back to it went something like this:

- 1 Getting burned out by Next.js slowness in a complex production project that shouldn't have been that complex or slow on the dev side (this was approx. 2022)

- 2 Taking a break from React

- 3 Moving back to classic server-side rendering with Python and Go, and now dealing with template engines. Hyped with HTMX and loving it, but my conclusion after so many years of React was that template partials don't feel right to me, and template engines are not as maintained and evolved as they used to be. I found myself not naturally inclined to reach for the htmx way, just letting the coding agent do it the way it wanted, AND starting to notice the burnout again.

- 4 Looking with some envy at co-workers using shadcn, at how fast they are getting things done and how good the results look.

- 5 Wondering if there would be a way to use JSX with HTMX server side; I miss components, I don't want partial templates.

And then I found Astro. Ahhh, now I get it: Astro prioritizes generation over run time, and that unlocks a lot of gradual complexity where you can choose how to mix things (islands). You get something way more interesting than a template engine, and it uses JSX, so you can benefit from the React ecosystem.

This is where I am now, but I have yet to complete a side project with it to know if I fully get it and love it.

So far it seems to me it's the answer I was looking for.
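
The islands idea in one file looks roughly like this (a sketch; Counter is a hypothetical React component):

  ---
  // src/pages/index.astro: runs at build/render time, ships no JS by itself
  import Counter from '../components/Counter.jsx';
  ---
  <h1>Mostly static page</h1>
  <!-- the client:load directive hydrates only this island in the browser -->
  <Counter client:load />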


This is what doesn't get discussed enough around htmx, in my opinion. So many of the difficult steps are left to the templating system, and template systems aren't great in general. You need to track a lot of identifiers for htmx to work properly, and your template and view logic needs to make that make sense. For the templating systems I've seen, that's not so simple to do.

1000% this. I'm actually using htmx at ${JOB} and this is essentially the only downside to htmx. I want to know which template partial is getting swapped. My IDE doesn't know. I need to track countless HTML ids to know what will be swapped where... how? It hasn't been a big deal because I alone write the frontend code, so I have all my hacks for navigating it and my intimate knowledge of the code, but if we need more devs on the frontend, or if the frontend grows drastically feature-wise, I will need to tackle this issue posthaste. I think template partials could help, but then we would also end up with giant template files, and that would also be annoying.
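
One way I can imagine taming it (a sketch of a mitigation, not the parent's setup; assumes server-side JSX, e.g. hono/jsx, so the id can live next to the partial it targets):

  // Define the id once, next to the partial that owns it, so
  // "what swaps where" is greppable from a single constant.
  const CART_ID = "cart";

  const Cart = ({ items }: { items: string[] }) => (
    <div id={CART_ID}>{items.length} items</div>
  );

  const AddButton = () => (
    <button hx-post="/cart/add" hx-target={`#${CART_ID}`} hx-swap="outerHTML">
      Add
    </button>
  );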

How does it feel to be one of 3 HTMX-related jobs in the universe (according to the comments in this thread)? heh.

Last time I dabbled in a front-end I tried out Astro, but it felt like it just added another layer of complexity without much gain when all my components were just wrapping React in different ways. I went with React Router instead.

I can see the value of the "islands" concept when you have a huge front-end that's grown over generations of people working on it.

For my constrained front-end, debugging Astro errors on top of React errors on top of whatever all the turtles on the way down felt like a step too far.

Am I, in my Rust-centered, back-end-driven brain, missing something?


> 5 Wondering if there would be a way to use JSX with HTMX server side; I miss components, I don't want partial templates.

I'm playing with JSX, Hono, and Bun right now to do just that. It's early, but we'll see how it goes.
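
FWIW, the minimal shape of that stack as I picture it (my guess at the wiring, not the parent's project; assumes tsconfig sets "jsx": "react-jsx" and "jsxImportSource": "hono/jsx"):

  // server.tsx, run with: bun run server.tsx
  import { Hono } from 'hono'

  const app = new Hono()

  // Server-only component: JSX in, HTML out, no client-side runtime shipped.
  const Greeting = ({ name }: { name: string }) => <h1>Hello {name}</h1>

  app.get('/', (c) => c.html(<Greeting name="HTMX" />))

  // Bun serves the default export's fetch handler automatically.
  export default app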


Astro is great. If you are familiar with React, you can pick it up pretty much instantly. The design is simple enough to extend when needed as well for custom things.

Totally agree. The way I see it, it's related to taking the time to sharpen your axe.

Having a defined flow that gives you quick feedback and doesn't get in the way.

If you are writing, you'd want to be using an app in which you can quickly do what you want, e.g. shortcuts for bold, vim/emacs motions; that "things-not-getting-in-the-way" state is what leads to flow state, in my opinion.

Muscle memory is action for free; then you can focus on thinking deeper.

The same happens with coding, although it's more complex and can take time to land on a workflow with tools that let you move quickly. I'm talking about logs, a debugger (if needed), hot reloading of the website, unit tests that run fast, knowing who to ask or where to look for references, good documentation, a good database client, having shortcuts prepared for everything... and so on.

I think it would be cool if people shared their flow tools for different tech stacks; it could benefit a lot of us who have some % of this done but are not 100% there yet.


The energy threshold for adding a new unit test to the suite or a new row to the docs is vital to whether it gets done.

If I need to install pandoc to test-compile a doc change before I submit it for code review with 3 other maintainers, I'd rather keep my note or useful screenshot to myself.

If I need to create a C binding of my function so that pytest can run it, through 50 lines of cryptic CMake, I'd rather do happy-path testing locally and submit it as a "trust me bro".

Good and fast internal tooling matters massively for good software, and it all comes back to speed and the iteration loop.

On top of that, slow meticulous work can then be done: 100% test coverage, detailed UML diagrams describing the system, and functional-safety risk-analysis matrix documents.

So speed and slowness complement each other at different levels of analysis.


Awesome! I like it when imagination fills the gaps of technology; maybe because I played on old computers like the Spectrum, where we had few pixels and had to imagine the rest.

The ordering could’ve been “solved” with a WhatsApp message (or shouting? :D), but that would have been so boring!

This is much better life UX!

This app is a reminder that being playful and imaginative in life can bring joy. Congrats!


Is it true that it's a tradeoff? The "more precocial", the less flexibility to learn new things? Conversely, knowing less equals fewer assumptions, which requires more flexibility in exchange.

Would it be true that what is precocial in us is the ability to mimic and to abstract specific patterns into general rules?


It must be a tradeoff. I don't have any proof, but my thinking is that we pay an extraordinary price in terms of resources required to keep human babies safe for years before they can keep themselves safe. That is a strong selection pressure on everyone involved. The fact that it still happens means it must somehow be worth it.


Humans are born quite prematurely so that the head fits through the birth canal.


Naturally, I'm a dev. Could it be something to do with limited genetic storage being dedicated to software instead of coding for hardware capabilities? To my limited knowledge, increasing DNA size comes at a maintenance cost (transcription, replication, etc.), so there's a soft upper bound.


Does it matter? If it's well defined, each of those would be a node in the graph. Or can you elaborate? Dozens doesn't seem like that much for a graph where a higher-level node would be Slack, and the agent only loads further if it needs anything related to Slack. Or maybe I'm not understanding.


The difference between manipulation and influence is that in the first you are the only one taking advantage of the situation, while in the second you genuinely believe the other person will end up in a better place, and if you are wrong, no harm is done.

I guess it's also about whether you care about the other person or you are just pretending. Unfortunately, in my opinion, there is no way to know, because some people are really good at pretending to care, even supporting you while keeping a hidden score-tracking board; basically, they are investing.

And then there are people who really care about you, and because they know they can't do anything or don't know what to say, they won't reach out to you.

I guess we are left only with our instinct, and that is something you learn to calibrate with time.


I've been thinking a lot about this, and I want to build the following experiment, in case anyone is interested:

The experiment is about having an LLM play plman [0], with and without Prolog help.

plman is a pacman-like game for learning Prolog; it was written by professor Francisco J. Gallego from the University of Alicante to teach the logic course in computer science.

Basically, you write a solution in Prolog for a map, and plman executes it step by step so you can visually see the pacman (plman) moving around the maze, eating and avoiding ghosts and other traps.

There is an interesting dynamic around finding keys for doors and timing-based traps.

There are different levels of complexity, and you can also easily write your own maps, since they are just ASCII characters in a text file.

I thought this was the perfect project to visually explain to my coworkers the limits of LLM "reasoning" and what symbolic reasoning is.

So far I've hooked up the ChatGPT API to try to solve scenarios, and it fails even with a substantial number of retries. That's what I was expecting.

The next thing would be to write an MCP tool so that the LLM can navigate the problem by using the tool, but this is where I need guidance.

I'm not sure about the best dynamic to prove the usefulness of Prolog in a way that goes beyond what context retrieval or a DB query could do.

I'm not sure if the LLM should write the Prolog solution. I want to avoid building something trivial, like the LLM asking for the already-solved steps, so my intuition tells me I need some sort of virtual-joystick MCP to hide Prolog from the LLM. The LLM would have access to the current state of the screen and to questions like: what would my position be if I move up? What is the ghost's position on the next move? Where is the door relative to my current position?
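
To make that concrete, here is a rough sketch of what the joystick's tool surface could look like with the TypeScript MCP SDK (tool names and the renderMaze/simulate helpers are hypothetical placeholders for whatever ends up wrapping plman's state):

  import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
  import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
  import { z } from "zod";

  // Stub hooks into the game; a real version would drive plman itself.
  const renderMaze = (): string => "(current board as ASCII here)";
  const simulate = (dir: string): { x: number; y: number } => ({ x: 0, y: 0 });

  const server = new McpServer({ name: "plman-joystick", version: "0.1.0" });

  // The LLM can observe the board but never sees the Prolog behind it.
  server.tool("look", "Current maze as ASCII", {}, async () => ({
    content: [{ type: "text", text: renderMaze() }],
  }));

  // Answers one-step hypotheticals: "where would I be if I moved up?"
  server.tool(
    "peek_move",
    "Resulting position after one move",
    { direction: z.enum(["up", "down", "left", "right"]) },
    async ({ direction }) => ({
      content: [{ type: "text", text: JSON.stringify(simulate(direction)) }],
    })
  );

  await server.connect(new StdioServerTransport());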

I don't have the academic background to design this experiment properly. It would be great if anyone is interested in working together on this, or could give me some advice.

Prior work pending on my reading list:

- LoRP: LLM-based Logical Reasoning via Prolog [1]

- A Pipeline of Neural-Symbolic Integration to Enhance Spatial Reasoning in Large Language Models [2]

- [0] https://github.com/Matematicas1UA/plman/blob/master/README.m...

- [1] https://www.sciencedirect.com/science/article/abs/pii/S09507...

- [2] https://arxiv.org/html/2411.18564v1


Would it be possible today to play it from a computer using VLC or similar + plugins?


It certainly should be possible, but idk if it's actually implemented; at the very least you should be able to implement it as a filter plugin for ffmpeg.

Some of the more advanced CRT shaders actually attempt to mathematically model how the video gets distorted by the CRT and even by the component video. If the effects of converting to film are so well understood that Pixar can adapt their film for the process, then it ought to be possible to post-process the video in a way that reproduces those artifacts.
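
As a crude illustration, stock ffmpeg filters already get you part of the way (a sketch, not a calibrated film model; the preset and strengths are arbitrary):

  ffmpeg -i input.mp4 -vf "curves=preset=vintage,noise=alls=6:allf=t,vignette" output.mp4

A serious attempt would swap the preset for measured color-response curves from scanned film.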

I don't think it's possible for it ever to be exactly the same, since the display technology of a monitor is fundamentally different from a film projector (or a CRT), but it should be possible to get it good enough that it's indistinguishable from a photo of the film being displayed on a modern monitor (i.e. the colors aren't completely different, like in the comparisons in the article).

BTW, TFA didn't mention this, but about 15 years ago they re-rendered Toy Story and Toy Story 2 for a new theatrical run when those gimmicky 3D glasses were popular. If that's the version that's being distributed today on Disney Plus and Blu-ray (IDK, but I feel like it probably is), then that could potentially be a more significant factor in ruining the color balance than not having been converted to film.


ShaderGlass probably gets you decently far down the path.

https://mausimus.itch.io/shaderglass

