
I wasn't planning to renew so soon. Then they dropped this bombshell and I impulse-bought another 12-month subscription. So far I'm digging it and looking forward to composing some spicy chord progressions.


I'm interested in computers. What's the point of meadows without computers?


How would you go about making it more secure but still get to have your cake too? Off the top of my head, could you: a) only ingest text that can be OCR'd, or somehow determine whether it is human-readable, or b) make it so text from the web session is isolated from the model with respect to triggering an action? Then it's simply a tradeoff at that point.
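For option (a), a minimal sketch of what I mean, assuming Playwright and pytesseract are available and assuming an OCR pass over a rendered screenshot is a decent proxy for "human readable":

    from playwright.sync_api import sync_playwright
    from PIL import Image
    import pytesseract

    def visible_text_only(url: str) -> str:
        # Render the page the way a human would see it, then OCR the pixels.
        # Hidden prompt-injection text (white-on-white, zero-size fonts,
        # off-screen elements) never makes it into the OCR output.
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page()
            page.goto(url)
            page.screenshot(path="page.png", full_page=True)
            browser.close()
        return pytesseract.image_to_string(Image.open("page.png"))

Obviously it costs a full render and loses page structure, so it only pays off where the model is allowed to act on what it reads.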


I don't believe it's possible to give an LLM full access to your browser in a safe way at this point in time. There will need to be new and novel innovations to make that combination safe.


People directly give their agent root, so I guess it is ok.


Yeah, I drive drunk all the time. Haven't crashed yet.


Is it possible to give your parents access to your browser in a safe way?


Why do people keep going down this sophistry? Claude is a tool, a piece of technology that you use. Your parents are not. LLMs are not people.


If you think it's sophistry you're missing the point. Let's break it down:

1. Browsers are open ended tools

2. A knowledgeable user can accomplish all sorts of things with a browser

3. Most people can do very impactful things on browsers, like transferring money, buying expensive products, etc.

4. The problem of older people falling for scams and being tricked into taking self-harming actions in browsers is ancient; anyone who was family tech support in the 2000s remembers removing 15+ "helpful toolbars" and likely some scams/fraud that older relatives fell for

5. Claude is a tool that can use a browser

6. Claude is very likely susceptible to both old and new forms of scams / abuse, either the same ones that some people fall for or novel ones based on the tech

7. Anyone who is set up to take impactful actions in their browser (transferring money, buying expensive things) should already be vigilant about who they allow to use their browser with all of their personal context

8. It is reasonable to draw a parallel between tools like Claude and parents, in the sense that neither should be trusted with high-stakes browsing

9. It is also reasonable to take the same precautions -- allow them to use private browsing modes, make sure they don't have admin rights on your desktop, etc.

The fact that one "agent" is code and the other is human is totally immaterial. Allowing any agent to use your personal browsing context is dangerous and precautions should be taken. This shouldn't be surprising. It's certainly not new.


> If you think it's sophistry you're missing the point. Let's break it down:

I'd be happy to respond to something that isn't ChatGPT, thanks.


> Is it possible to give your parents access to your browser in a safe way?

No.

Give them access to a browser running as a different user with different homedir? Sure, but that is not my browser.

Access to my browser in a private tab? Maybe, but that still isn't my browser. Still a danger though.

Anything that counts as "my browser" is not safe for me to give to someone else (whether parent or spouse or trusted advisor is irrelevant, they're all the same levels of insecurity).


That’s easy. Giving my parents a safe browser to utilize without me is the challenge.


Because there never were safe web browsers in the first place. The internet is fundamentally flawed, and programmers are continuously having to invent coping mechanisms for the underlying issue. This will never change.


You seem like the guy who would call car airbags a coping mechanism.


He's off in another thread calling people "weak" and laughing at them for taking pain relievers to help with headaches.


Just because you can never have absolute safety and security doesn't mean that you should deliberately introduce more vulnerabilities into a system. It doesn't matter if we're talking about operating systems or the browser itself.

We shouldn't be accepting every trade-off indiscriminately out of fear of being left behind in the "AI world".


To make it clear, I am fully against these types of AI tools, at least for as long as we haven't solved the security issues that come with them. We are really good at shipping bullshit nobody asked for without acknowledging security concerns. Most people out there cannot operate a computer. A lot of people still click on obvious scam links they've received by email. Humanity is far from being ready for more complexity and more security-related issues.


I think Simon has proposed breaking the lethal trifecta by having two LLMs, where the first has access to untrusted data but cannot take any actions, and the second has privileges but sees only abstract variables from the first LLM, not the content. See https://simonwillison.net/2023/Apr/25/dual-llm-pattern/

It is rather similar to your option (b).
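Roughly, a non-LLM controller keeps the untrusted content in a variable store, and the privileged LLM only ever handles an opaque token. A minimal sketch of that flow (the quarantined_llm and privileged_llm calls here are hypothetical stand-ins, not a real API):

    untrusted_store: dict[str, str] = {}

    def quarantine(untrusted_text: str) -> str:
        # Quarantined LLM: sees the untrusted text, has no tools; its output
        # is treated as tainted and never shown to the privileged model.
        summary = quarantined_llm("Summarize this:\n" + untrusted_text)
        token = f"$VAR{len(untrusted_store) + 1}"
        untrusted_store[token] = summary
        return token  # the privileged side only ever sees "$VAR1"

    def act(user_request: str, token: str) -> str:
        # Privileged LLM: can plan tool calls, but only refers to the token.
        plan = privileged_llm(user_request + "\nUse " + token + " as the body.")
        # The controller substitutes untrusted_store[token] into the actual
        # tool call at the last moment, outside the privileged model's context.
        return plan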


Can't the attacker then jailbreak the first LLM into generating a jailbreak with actions for the second one?


If you read the fine article, you'll see that the approach includes a non-LLM controller managing structured communication between the Privileged LLM (allowed to perform actions) and the Quarantined LLM (only allowed to produce structured data, which is assumed to be tainted).

See also CaMeL https://simonwillison.net/2025/Apr/11/camel/ which incorporates a type system to track tainted data from the Quarantined LLM, ensuring that the Privileged LLM can't even see tainted _data_ until it's been reviewed by a human user. (But this can induce user fatigue as the user is forced to manually approve all the data that the Privileged LLM can access.)
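The taint tracking can be pictured as a wrapper type the privileged flow cannot read until the user signs off. A loose sketch of that idea (not CaMeL's actual implementation):

    from dataclasses import dataclass

    @dataclass
    class Tainted:
        value: str
        approved: bool = False

    def approve(t: Tainted) -> None:
        # The human reviews the quarantined output before it can flow onward.
        print("Quarantined output:\n" + t.value)
        if input("Allow the privileged model to see this? [y/N] ").lower() == "y":
            t.approved = True

    def unwrap(t: Tainted) -> str:
        if not t.approved:
            raise PermissionError("tainted value has not been reviewed by the user")
        return t.value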


"Structured data" is kind of the wrong description for what Simon proposes. JSON is structured but can smuggle a string with the attack inside it. Simon's proposal is smarter than that.


One would have to be relatively invisible.

Non-deterministic security feels like a relatively new area.


Yes they can


Hmm so we need 3 LLMs


Doesn't help.

https://gandalf.lakera.ai/baseline

This thing models exactly these scenarios and asks you to break it; it's still pretty easy. LLMs are not safe.


That's just an information bottleneck. It doesn't fundamentally change anything.


In the future, any action with consequence will require crypto-withdrawal levels of security. Maybe even a face scan before you can complete it.


Ahh technology. The cause of, and _solution to_, all of life’s problems.


This has been my experience as well, but there are plenty of assertions here that are not always true, e.g. "AI coding tools are sophisticated enough (they are not) to fix issues in my projects" … but how do you know this if you are not regularly checking whether the tooling has improved? I think AI can tackle a certain level of issue and improve things, but only a subset of the available models and of the multitude of workflows will work well, and unfortunately we are drowning in ones that are mediocre at best, so many people like me give up before finding the winning combination.


You omitted “with little or no supervision”, which I think is crucial to that quote. It’s pretty undisputed that having an AI fix issues in your code requires some amount of supervision that isn’t negligible. I.e. you have to review the fixes, and possibly make some adjustments.


I really like the idea of seeking a no (e.g. "let me know if I shouldn't go ahead"), but as soon as I add something like "I will do this on this date, unless I hear otherwise", it feels a little aggressive. It might be easy enough to simply mention the time the work will take place but leave it unspoken that they could decide it's best not to proceed: "I should get it done around this time". Then again, it's been a goal of mine forever to be assertive. Being cowed only takes you so far.


It's just a matter of phrasing. "Hi, I wanted to give you a heads up that XYZ needs doing, and I'll be doing it on Wednesday. Let me know if that doesn't work."


If those 4 aspects are used to judge whether it's "fair use", I'd say that's the nail in the coffin, because of course it isn't fair use, and that's totally fair. Here I was thinking "transformative" was somehow a sticking point in all this.


Yeah, absolutely, taking artificial sweetener can't cause a glucose spike itself (there's no glucose, or only a minimal amount, to be derived), but maybe it could contribute to spikier glucose in general (due to sweetness contributing to hormonal dysregulation, lack of satiety, and overeating).


When I researched it in the past, I thought multiple studies corroborated that while blood sugar doesn't increase from drinking artificially sweetened drinks, people who drink them still tend to gain weight. I'm not sure how those studies adjusted for things like people who already have metabolic syndrome and simply choose artificial sweeteners for health reasons, though.

It seems the most I would be comfortable concluding from recent reviews of studies is that there are some worrying findings, enough to warrant caution. If you can simply reduce your consumption of sugary foods and beverages, I suspect it will reduce your cravings better than a replacement stimulus would. You can review some studies here:

https://pubmed.ncbi.nlm.nih.gov/?term=obesity+artificial+swe...

It's a fallacy to draw conclusions from the number of studies/reviews supporting a given hypothesis, but the majority conclude that artificial sweeteners are associated with negative health effects and are not a helpful tool for adiposity-related diseases.


There's just no way I buy that I could safely make a change in a 100-LOC function and know there won't be an impact 30 lines down, whereas with a few additional functions you can define the shape of the interactions and know that, as long as that shape/interface/type is maintained, there won't be unexpected interactions. It's a balance though, as indirection can also readily hide and obscure interactions, or add unnecessary glue code that takes up mental bandwidth and requires additional testing to confirm.
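As a toy example of what I mean by defining the shape (the names are made up): once parsing sits behind a small typed boundary, a change inside parse_row can't silently break the totalling logic further down, as long as the Row shape is preserved.

    from dataclasses import dataclass

    @dataclass
    class Row:
        name: str
        amount: float

    def parse_row(line: str) -> Row:
        # Any change here is contained, as long as it still returns a Row.
        name, amount = line.split(",")
        return Row(name=name.strip(), amount=float(amount))

    def total(rows: list[Row]) -> float:
        return sum(r.amount for r in rows)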


What's the alternative in front-end? I had assumed those things were needed to essentially reverse-engineer the web into being more reactive and stateful. Genuinely want it to be simpler.

