
ymmv, but I think all of this is too much. you generally don't need to think hard about how to use an AI properly, since screaming at it usually works just as well as very finely tuned instructions.

you don't need claude code, gemini-cli or codex. I've been doing it raw as a (recent) lazyvim user with a proprietary agent with 3 tools: git, ask and ripgrep, and currently gemini 3 is by far the best for me, even without all these tricks.

gemini 3 has a very high token density and a significantly larger context than any other model that is actually usable. every 'agent' I start shoves 5 things into the context (rough sketch further below):

- the most basic instructions, such as: only generate git-format diffs when editing files and use the git tool to merge them (simplified, it's more structured and deeper than this)

- output of a tree command that respects .gitignore

- $(ask "summarize $(git diff)")

- $(ask "compact the readme $(cat README.MD"))

- (ripgrep tools, mcp details, etc)

when the context is too bloated I just tell it to write important new details to README.MD and then start a new agent

https://github.com/kagisearch/ask
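
roughly, the prelude assembly looks something like the sketch below. this is just an illustration of the idea, not the actual agent: it assumes ask takes a prompt string and prints the model's answer (like in the snippets above), that your tree is new enough to support --gitignore, and the instruction wording is paraphrased.

    #!/bin/sh
    # hypothetical sketch: build the initial context for a fresh agent session.
    # assumes `ask "<prompt>"` prints a model response, as in the snippets above.

    context="Instructions: only produce git-format diffs when editing files,
    and apply them with the git tool."

    # project layout, respecting .gitignore (newer tree versions support --gitignore)
    context="$context

    Project tree:
    $(tree --gitignore)"

    # summaries keep the prelude small instead of pasting raw content
    context="$context

    Pending changes:
    $(ask "summarize $(git diff)")

    Readme (compacted):
    $(ask "compact the readme: $(cat README.MD)")"

    printf '%s\n' "$context"

when the prelude gets too big, the same idea covers the reset: have the model write the important new details to README.MD, kill the session, and rebuild the prelude from scratch.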


I'm doing something very similar but even simpler and Gemini 3 is absolutely crushing it. I tried to do this with other models in the past, but it never really felt productive.

I don't even generate diffs, just full files (though I try to keep them small), and my success rate is probably close to 80% for one-shotting very complex coding tasks that would take me days.


I love how 3 years/second seems slow at this time scale.

nat gateways not being free is criminal.

I think the bigger question is: why not open-source it? At a bare minimum, provide the debug symbols for it (even chrome provides them!).

try cosmic desktop since it was made to be similar to gnome - it's maintained by system76 and is shaping up to be one of the most polished desktops out there; gnome has been feeling like it's going downhill for a while. I can't comment too much tho since I am too used to KDE at the moment and tiling support is just not there yet compared to KWin.

11 more and it will run on win95 again.

slightly better at react and spatial logic than gemini 3 pro, but slower and way more expensive.

All javascript-based anti-fingerprinting is detectable and is itself a major source of uniqueness!

Sure, but if you are always unique for every website then you can’t be tracked over time.

They meant a signal of uniqueness for your setup that could still assist with tracking, not being unique for every site.

pick your poison:

- you are vulnerable for 7 days because of a now public update

- you are vulnerable for x (hours/days) because of a supply chain attack

I think the answer is rather simple: subscribe to a vulnerability feed, evaluate & update. The number of times automatic updates are actually necessary is near zero. As someone who has run libraries that were at times 5 to 6 years out of date, exposed to the internet, I never had a single compromise, and it's not like these were random services; they were viewed by hundreds of thousands of unique addresses. There were only 3 times in the last 4 years where a vulnerability in a publicly exposed service actually affected me and I had to update.
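
a rough sketch of what I mean by "subscribe to a feed, evaluate & update": periodically check the exact versions you run against a public vulnerability database and only act when something relevant shows up. the osv.dev query API below is real, but the package name, ecosystem and version are placeholders and the jq filter is just a sketch.

    #!/bin/sh
    # sketch: ask osv.dev whether a specific dependency/version has known vulns.
    # package name, ecosystem and version below are placeholders.
    curl -s -X POST https://api.osv.dev/v1/query \
      -d '{"package": {"name": "jinja2", "ecosystem": "PyPI"}, "version": "2.11.2"}' \
      | jq -r '.vulns[]?.id'   # empty output => nothing known for this version

run something like that from cron against what you actually expose, and the evaluate & update step stays a deliberate decision instead of an automatic one.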

Okay, the never-being-compromised part is a lie because of PHP; it's always PHP (the monero miner I'm sure everyone is familiar with). The solution for that was to stop using PHP and associated software.

Another one I had problems with was CveLab (GitLab, if you couldn't tell); there have been so many critical updates pointing to highly exploitable CVEs that I decided to simply migrate off it.

In conclusion, in my experience avoiding bad software is just as important as updating, and it lowers the need for quick and automated actions.


well, if we just apply the bell curve here, on average children will be pretty average (shocker), so those should be left on their own to discover their own niche, while poor performers should get extra attention so they can keep up with the rest, and the gifted (if they actually want to) should be given the opportunity to explore higher-level subjects.

so in the end we give attention to the gifted and the struggling, since there's very little you can do for children who are already decent and capable of keeping up; at most they lack discipline or motivation.


Gifted and average students who aren’t given attention become poor performers quickly.

Gifted students especially.

Students who learn ahead don’t want to be told: “ok you have the material, so I’ll ignore you for a bit.” They want more. They want their questions answered, even if the questions aren’t part of the lesson plan.

Nobody wants to be told “we aren’t studying that today.”

You really can’t starve the rest of the class to cater to poor performers.


teaching independent learning is also very valuable. most average students can do it, and especially the gifted ones, as long as they are given the resources and the general skills taught in the first 8 grades. realistically I caught up by 10th grade while being a very poor performer in the first 8 (mostly due to lack of effort and generally being very stupid as a kid).

also, there is very little excuse now with how advanced AI is getting at explaining subjects; it's all coming down to motivation, environment and the ability to use these tools in the first place. children struggle to use computers because we treat that skill as a given, but to someone who didn't grow up with the incremental advancements it can seem very overwhelming.


AI is very bad at developing curricular materials.

that's mostly a context limitation with hallucinations sprinkled in. they are currently good at understanding existing problems and how to solve them as long as they're not college-level or above, since at that point they tend to fall apart due to the complex interaction between different subjects and numeric precision overload, especially when it comes to matrices.

I suspect it’s not just a context limit.

I suspect, but can’t prove, that model trainers deliberately steer models away from creating tests and worksheets.

The reason being that when a human asks a written question like “who was president of the US in 1962?”, it’s very likely to be part of a worksheet.

Novels don’t contain many questions like that, nor does nonfiction. They’re mostly paragraphs. Most text is mostly paragraphs.

A naked question usually means a worksheet. AIs know this, so if you ask one a question like “who was president in 1962?”, it responds with the most likely next sentence, which is a related question: “How was the Cuban Missile Crisis resolved?”

So there’s a huge discrepancy between the next most likely sentence based on training, and what a user likely wants. If I ask, “Who was president in 1962?” I don’t want another question, nor does anyone else.

But that’s what the training data provides.

So model trainers have to bias it away from worksheets. This isn’t hard to do, and is a normal part of model training.

I’ve personally seen this behavior in poorly parameterized or trained models. They love answering questions by assuming they are in worksheets. It’s a huge pain.

Interestingly it never happens with top-line models like ChatGPT.

Careful hyperparameterization helps, but I think you’ll have to adjust the weights too. But that likely makes it harder to make actual worksheets.

This is just a guess. But I suspect models are weighted to discount pedagogical materials because of how different they are from what the users often expect.

