Hacker News | tvshtr's favorites

> First, I just upgraded to 2.5 Gbps internet and I don’t want to route all my traffic through a VPN and take the speed hit. I have this bandwidth for a reason

You don't have to. You create a container which runs openvpn to connect to your vpn provider, and also hosts an ssh daemon. The ssh daemon receives incoming SOCKS5 connections from a firefox portable browser, which has been configured to use the proxy (your Docker openvpn-container) for browsing and DNS resolution, and pipes it through the VPN tunnel.

So you have that one browser just to surf imgur, if that's your thing. And you could also use Firefox on Android (maybe also iOS) with those proxy settings (a secondary Firefox browser, like the beta version).

So you get fine-grained control over what you use the VPN for; you don't just pipe your entire OS's network traffic through it.


If a legit user accesses the link through an <img> tag, the browser will send some telling headers. Accept: image/..., Sec-Fetch-Dest: image, etc.

You can also ignore requests with cross-origin referrers. Most LLM crawlers set the Referer header to a URL in the same origin. Any other origin should be treated as an attempted CSRF.

These refinements will probably go a long way toward reducing unintended side effects.
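The two checks above can be sketched in plain Python. This is a hypothetical illustration of the heuristic, not code from any real server; header names follow the browser conventions mentioned above, and the function name is made up.

```python
# Sketch of the heuristic: allow a request only if it looks like a browser
# fetching an <img>, and treat cross-origin Referers as suspect.
from urllib.parse import urlsplit

def looks_like_image_fetch(headers: dict, own_origin: str) -> bool:
    """Return True if the request plausibly comes from an <img> tag on our own pages."""
    accept = headers.get("Accept", "")
    fetch_dest = headers.get("Sec-Fetch-Dest", "")
    referer = headers.get("Referer", "")

    # A real <img> fetch advertises image types in Accept and, in modern
    # browsers, sends Sec-Fetch-Dest: image.
    if "image/" not in accept and fetch_dest != "image":
        return False

    # Reject cross-origin referrers; a same-origin or absent Referer is fine.
    if referer:
        parts = urlsplit(referer)
        origin = f"{parts.scheme}://{parts.netloc}"
        if origin != own_origin:
            return False
    return True
```

A crawler hotlinking the URL with `Accept: */*` and a foreign Referer fails both tests, while a normal in-page image load passes.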


Check out OpenObserve https://github.com/openobserve/openobserve. It was built precisely to solve the challenges around Grafana and Elastic. This is not a stack you need to weave together; a single binary/container suffices for most users' needs: logs, metrics, traces, dashboards, alerts.

Disclosure: I am a maintainer of OpenObserve


Thank you for that. I absolutely love that this uses tantivy.

I was previously leaning towards VictoriaMetrics and VictoriaTraces (I will need both), but I think OpenObserve is even simpler. Later I found Gigapipe/qryn https://github.com/metrico/gigapipe

Does OpenObserve ship something to view traces and metrics? (It appears that Gigapipe does.) Or am I supposed to just use Grafana? I want to cut down on moving pieces.


NOLF is actually source-available [0][1][2], and it has been since not that long after its original release.

There's also a community-driven project [3] keeping it playable on modern hardware - however, it hasn't seen any activity in several years.

If you haven't played or heard of NOLF before, I highly encourage checking it out. It's a fantastic title, even after all these years.

0: https://web.archive.org/web/20020217233624/http://pc.ign.com...

1: https://web.archive.org/web/20010720053220/http://noonelives...

2: https://github.com/osgcc/no-one-lives-forever

3: https://github.com/haekb/nolf1-modernizer


    5 million Rust LOC 
    One potential memory safety vulnerability found 
    Rust is 0.2 vuln per 1 MLOC.

    Compared to 
    C and C++ : 1,000 memory safety vulnerabilities per MLOC. 

Key takeaway.
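The rates quoted above work out as follows (numbers taken directly from the comment; the C/C++ figure is as quoted, not independently verified):

```python
# Sanity-checking the quoted vulnerability rates.
vulns_found = 1        # one potential memory safety vulnerability
rust_mloc = 5          # in 5 million lines of Rust

rust_rate = vulns_found / rust_mloc   # 0.2 vulnerabilities per MLOC
c_cpp_rate = 1_000                    # per MLOC, as quoted above

ratio = c_cpp_rate / rust_rate        # a 5000x difference
```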

This looks to be one of the most complete Rust UI crates (in terms of available widgets/components), but unfortunately it has almost no usage (yet). I do see their docs are coming along now. Another very complete one is fyrox-ui, used by the fyrox game engine: https://crates.io/crates/fyrox-ui. Again, not really used/known outside of fyrox.

The Rust UI scene is maturing, but the most popular options (iced, egui, dioxus, slint, etc.) aren't even the most complete component-wise atm as far as I can tell.

UPDATE: This honestly looks incredible and makes huge strides in the Rust UI landscape. You can run an impressive widget gallery app here showing all their components:

https://github.com/longbridge/gpui-component/tree/main/crate...

Just "cargo run --release"


AFAIK Bemotrizinol is the only(?) chemical sunscreen active ingredient shown not to be an endocrine disruptor (this chemical: https://pubmed.ncbi.nlm.nih.gov/36738872/)

It's hard/impossible to find in US formulations but in AU and EU some higher end brands use it. I like the La Roche Posay Anthelios series of sunscreen - I believe they all use Bemotrizinol as the active but I am 100% sure this one does: https://www.laroche-posay.com.au/sun-protection/face-sunscre... - Note that the formulation for the specific product is different in different regions, this is the Australian version.


There are a number of sites that aggregate some things, here is one

https://moderncss.dev/topics/

Recent survey of what people use to learn CSS:

https://2025.stateofcss.com/en-US/resources/

CSS Tricks article on this:

https://css-tricks.com/how-to-keep-up-with-new-css-features/


Oh wow, I had no idea they made a miniseries about ART+COM. I had heard of that company many times as a connection point of people associated with the Chaos Computer Club, but never looked into it.

Doesn't Rust do this? `let` is always on the stack. If you want to allocate on the heap then you need a Box. So `let foo = Box::new(MyFoo::default())` creates a Box on the stack that points to a MyFoo on the heap. So MyFoo is a stack type and Box<MyFoo> is a heap type. Or do you think there is value in defining MyFooStack and MyFooHeap separately to support both use cases?

Dunno why you've been voted down; you're totally right. The method you mention is called token/cursor/keyset-based pagination.
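Keyset pagination is easy to sketch with stdlib sqlite3. The table and column names here are made up for illustration; the point is that each page is fetched relative to the last seen id instead of an OFFSET, so deep pages stay cheap and stable under concurrent inserts.

```python
# Minimal keyset ("cursor") pagination sketch using stdlib sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [(f"item-{i}",) for i in range(1, 11)])

def fetch_page(conn, after_id=0, limit=3):
    """Return up to `limit` rows with id > after_id, plus the next cursor."""
    rows = conn.execute(
        "SELECT id, name FROM items WHERE id > ? ORDER BY id LIMIT ?",
        (after_id, limit),
    ).fetchall()
    next_cursor = rows[-1][0] if rows else None
    return rows, next_cursor

page1, cur = fetch_page(conn)                # ids 1..3
page2, cur = fetch_page(conn, after_id=cur)  # ids 4..6
```

The client only ever holds an opaque cursor (here, the last id), which is what makes the result set stable even while rows are being added.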

Before getting a CAC scan, I'd probably do these tests first:

* ApoB - about 20% of people with normal cholesterol results will have abnormal ApoB, and be at risk of heart disease.

* Lp(a) - the strongest hereditary risk factor for heart disease.

* hs-CRP - inflammation roughly doubles your risk of heart disease

* HbA1c - insulin resistance is a risk factor for just about everything.

* eGFR - estimates the volume of liquid your kidneys can filter, and is an input to the latest heart disease risk models (PREVENT).

Easy to order online: https://www.empirical.health/product/comprehensive-health-pa...

CAC is great for detecting calcified plaque in your coronary arteries. But before you have calcified plaque, the above risk factors tell you about the buildup of soft plaque. And 4 out of 5 of them are modifiable through lifestyle, exercise, and medication.


Everything reinforces my choice of Aider+Openrouter+code. I use models 20x cheaper than Sonnet, completely pay as you go, nobody is playing games with models, their LLM bills and my context to turn a profit. I generate Python scripts with Mongo and Cloudwatch queries instead of getting angry at the mystical sparkly button UX on their apps.

Almost MacOS-tier font rendering, for free:

    FREETYPE_PROPERTIES="cff:no-stem-darkening=0 autofitter:no-stem-darkening=0"
Probably only good on high-DPI monitors, though.

It's a brutally simple combination of `@kevingimbel/eleventy-plugin-mermaid` with `svg-pan-zoom` :)

If you found this interesting, I highly recommend reading "The Righteous Mind" by Jonathan Haidt. It's deeply impacted how I think of morality and politics from a societal and psychological point of view.

Some ideas in the book:

- Humans are tribal, validation-seeking animals. We make emotional snap judgments first and gather reasons to support those snap judgments second.

- The reason the political right is so cohesive (vs the left) is because they have a very consistent and shared understanding and definitions of what Haidt calls the 5 "moral taste receptors" - care, fairness, loyalty, authority, sanctity. Whereas the left trades off that cohesive understanding with diversity.


At my work, here is a typical breakdown of time spent by work areas for a software engineer. Which of these areas can be sped up by using agentic coding?

05%: Making code changes

10%: Running build pipelines

20%: Learning about changed process and people via zoom calls, teams chat and emails

15%: Raising incident tickets for issues outside of my control

20%: Submitting forms, attending reviews and chasing approvals

20%: Reaching out to people for dependencies, following up

10%: Finding and reading up some obscure and conflicting internal wiki page, which is likely to be outdated


This is similar to an old HCI design technique called Wizard of Oz by the way, where a human operator pretends to be the app that doesn’t exist yet. It’s great for discovering new features.

https://en.m.wikipedia.org/wiki/Wizard_of_Oz_experiment


This (ragebait-y? AI-written?) post kind of mixes things up. Kubernetes itself is fine, I think, but almost everything around it and the whole consulting business is the problem. You can build a single binary, use a single-layer OCI container, and run it with a single ConfigMap and a memory quota on 30 machines just fine.

Take a look at the early Borg papers to see what problems it solves. Helm is just insane, but you can use jsonnet, which is modelled after Google's internal system.

Only use the minimal subset, and have an application that is actually built to work fine in that subset.


Just set this on my mini PC running Debian, which runs Jellyfin.

    sudo nano /etc/default/grub
Look for GRUB_CMDLINE_LINUX_DEFAULT and add: i915.mitigations=off

    GRUB_CMDLINE_LINUX_DEFAULT="quiet i915.mitigations=off"
Then:

    sudo update-grub
    sudo reboot
To verify:

    cat /proc/cmdline

> It's i915.mitigations

Since you're doing the research, you tell us. Is NEO_DISABLE_MITIGATIONS (the flag mentioned in TFA) related to i915.mitigations, and if so, how?

TFA mentions that Intel ships prebuilt driver packages with this NEO_... flag set, and that Canonical and Intel programmers talked at some length about the flag.


I had to ask Gemini CLI to remind myself ;) but you can add this into settings.json:

{ "excludeTools": ["run_shell_command", "write_file"] }

but if you ask Gemini CLI to do this it'll guide you!


Have tried everything under the sun at the moment— broadly just two winners (and both have become my daily drivers for different use cases):

1. Claude Code with Opus 4

2. Cursor with Opus 4 or Gemini 2.5 Pro (Windsurf used to be an option but Anthropic has now cut them out)

3. (Coming up; still playing around) Claude Code’s GitHub Action


Yes, that's the LGTM (Loki, Grafana, Tempo, and Mimir) stack.

First, the main issue with this stack is maintenance: managing multiple storage clusters increases complexity and resource consumption. Consolidating resources can improve utilization.

Second, differences in APIs (such as query languages) and data models across these systems increase adoption costs for monitoring applications. While Grafana manages these differences, custom applications do not.


In no particular order:

* experiment with multiple models, preferably free high-quality models like Gemini 2.5. Make sure you're using the right model: usually NOT one of the "mini" varieties, even if it's marketed for coding.

* experiment with different ways of delivering necessary context. I use repomix to compile a codebase to a text file and upload that file. I've found more integrated tooling like Cursor, Aider, or Copilot to be less effective than dumping a text file into the prompt

* use multi-step workflows like the one described in [1] to allow the LLM to ask you questions to better understand the task

* similarly, use a back-and-forth, one-question-at-a-time conversation to have the LLM draft the prompt for you

* for this prompt, I would focus less on specifying 10 results and more on uploading all necessary modules (like with repomix) and then verifying all 10 were completed. Sometimes the act of over-specifying results can corrupt the answer.

[1]: https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/

I'm a pretty vocal AI-hater, partly because I use it day to day and am more familiar with its shortfalls - and I hate the naive zealotry so many pro-AI people bring to AI discussions. BUTTT we can also be a bit more scientific in our assessments before discarding LLMs - or else we become just like those naive pro-AI-everything zealots.


It's a real UI - the code for that is here: https://www.val.town/x/geoffreylitt/stevensDemo/code/dashboa...

A single cloudflare durable object (sqlite db + serverless compute + cron triggers) would be enough to run this project. DOs have been added to CFs free tier recently - you could probably run a couple hundred (maybe thousands) instances of Stevens without paying a cent, aside from Claude costs ofc

> Won’t he eventually run out of context window?

The "memories" table has a date column which is used to record the data when the information is relevant. The prompt can then be fed just information for today and the next few days - which will always be tiny.

It's possible to save "memories" that are always included in the prompt, but even those will add up to not a lot of tokens over time.
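The date-window idea is simple enough to sketch in a few lines. This is a guess at the shape of the design described above, not Stevens' actual schema; the field names and window size are illustrative.

```python
# Build the prompt context from always-on memories plus those dated
# within the next few days. A date of None means "always include".
from datetime import date, timedelta

memories = [
    {"date": None, "text": "User's partner is named Alex"},
    {"date": date(2025, 4, 10), "text": "Dentist appointment 9am"},
    {"date": date(2025, 6, 1), "text": "Car registration renewal"},
]

def prompt_context(memories, today, window_days=3):
    horizon = today + timedelta(days=window_days)
    keep = [m for m in memories
            if m["date"] is None or today <= m["date"] <= horizon]
    return "\n".join(m["text"] for m in keep)

context = prompt_context(memories, date(2025, 4, 9))
```

However many memories accumulate over time, only the handful inside the window (plus the always-on ones) ever reach the prompt.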

> Won’t this be expensive when using hosted solutions?

You may be underestimating how absurdly cheap hosted LLMs are these days. Most prompts against most models cost a fraction of a single cent, even for tens of thousands of tokens. Play around with my LLM pricing calculator for an illustration of that: https://tools.simonwillison.net/llm-prices
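A quick back-of-the-envelope check of that claim. The per-million-token prices here are illustrative placeholders in the ballpark of cheap hosted models, not the current rates for any specific model:

```python
# Cost of a single prompt at assumed per-million-token rates (USD).
def prompt_cost(input_tokens, output_tokens,
                usd_per_m_input=0.10, usd_per_m_output=0.40):
    return (input_tokens * usd_per_m_input +
            output_tokens * usd_per_m_output) / 1_000_000

# Even a 20k-token prompt with a 1k-token reply:
cost = prompt_cost(20_000, 1_000)  # about a quarter of a cent
```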

> If one were to build this locally, can Vector DB similarity search or a hybrid combined with fulltext search be used to achieve this?

Geoffrey's design is so simple it doesn't even need search - all it does is dump in context that's been stamped with a date, and there are so few tokens there's no need for FTS or vector search. If you wanted to build something more sophisticated you could absolutely use those. SQLite has surprisingly capable FTS built in and there are extensions like https://github.com/asg017/sqlite-vec for doing things with vectors.
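If you do outgrow "dump everything in the prompt", SQLite's built-in FTS is the next-simplest step, and it's usable straight from Python's stdlib (assuming your build ships the FTS5 extension, which most do). The notes here are made up:

```python
# Full-text search over notes with SQLite FTS5, no external dependencies.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE notes USING fts5(body)")
conn.executemany("INSERT INTO notes (body) VALUES (?)", [
    ("Pick up the dry cleaning on Thursday",),
    ("Flight to Berlin departs 07:40",),
    ("Renew passport before the Berlin trip",),
])

# MATCH is case-insensitive with the default tokenizer; rank orders by relevance.
hits = conn.execute(
    "SELECT body FROM notes WHERE notes MATCH ? ORDER BY rank",
    ("berlin",),
).fetchall()
```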


I believe you can still use Gemini 2.5 Pro for free via https://aistudio.google.com and their gemini-2.5-pro-exp-03-25 model ID through their API.

The free tier is "used to improve our products", the paid tier is not.

