eximius's comments | Hacker News

So, point addition on a particular elliptic curve is isomorphic to multiplication in a group of integers modulo n...

But for some reason the author keeps referring to the underlying group as "Diffie-Hellman", when Diffie-Hellman is really the process operating on top of some group.
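
Roughly, as a toy sketch (the Group type is made up for illustration, and the numbers are laughably small): the exchange is just repeated application of whatever the group operation is, and the same procedure works whether Op is multiplication mod p or elliptic-curve point addition.

    package main

    import "fmt"

    // Group is a made-up abstraction: Diffie-Hellman only needs an associative
    // operation with an identity, plus repeated application of that operation.
    type Group[E any] struct {
        Op       func(a, b E) E // e.g. multiplication mod p, or EC point addition
        Identity E
    }

    // Exp applies the group operation n times (naive, but fine for a toy).
    func (g Group[E]) Exp(base E, n int) E {
        acc := g.Identity
        for i := 0; i < n; i++ {
            acc = g.Op(acc, base)
        }
        return acc
    }

    func main() {
        const p = 23 // toy prime, far too small for real use
        mul := Group[int]{Op: func(a, b int) int { return a * b % p }, Identity: 1}

        g, a, b := 5, 6, 15 // public generator, plus each side's secret exponent
        A, B := mul.Exp(g, a), mul.Exp(g, b)
        fmt.Println(mul.Exp(B, a), mul.Exp(A, b)) // both sides derive the same shared secret
    }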

The result is interesting; the journey to get there was very confusing.


For you, perhaps. For me, the former is denser, but it crosses into "too dense" territory. The JSON has indentation, which is easy on my poor brain. Also, it's nice to differentiate between lists and objects.

But, I mean, they're basically isomorphic with like 2 things exchanged ({} and [] instead of (); implicit vs explicit keys/types).


Yeah. I don’t even blame S-expressions. I think I’ve just been exposed to so much json at this point that my visual system has its own crappy json parser for pretty-printed json.

S expressions may well be better. But I don’t think S expressions are better enough to be able to overcome json’s inertia.


even as a fan of s-expressions (see my other comment), i have to agree. but the problem here is the formatting. for starters, i would write the s-expression example as:

    (urn:ietf:params:acme:error:malformed
     (detail      "Some of the identifiers requested were rejected")
     (subproblems ((urn:ietf:params:acme:error:malformed
                    (detail     "Invalid underscore in DNS name \"_example.org\"")
                    (identifier (dns _example.org)))
                   (urn:ietf:params:acme:error:rejectedIdentifier
                    (detail     "This CA will not issue for \"example.net\"")
                    (identifier (dns example.net))))))
the alignment of the values makes them easier to pick out and gives a visual structure

but, i would also argue that the two examples are not equivalent. what is explicitly specified as "type" and "value" in the json data is implied in the s-expression data. either format is fine, but it would be better to compare like for like:

an s-expression equivalent for the json example would look like this:

    ((type   urn:ietf:params:acme:error:malformed)
     (detail "Some of the identifiers requested were rejected)
     (subproblems 
       ((type   urn:ietf:params:acme:error:malformed)
        (detail "Invalid underscore in DNS name \"_example.org\"")
        (identifier
          (type  dns)
          (value _example.org)))
       ((type   urn:ietf:params:acme:error:rejectedIdentifier)
        (detail "This CA will not issue for \"example.net\"")
        (identifier
          (type  dns)
          (value example.net)))))
or the reverse, a json equivalent for the s-expression example:

    {
      "urn:ietf:params:acme:error:malformed":
      {
        "detail":     "Some of the identifiers requested were rejected",
        "subproblems":
        [
          {
            "urn:ietf:params:acme:error:malformed":
            {
              "detail":     "Invalid underscore in DNS name \"_example.org\"",
              "identifier":
              {
                "dns": "_example.org"
              }
            }
          },
          {
            "urn:ietf:params:acme:error:rejectedIdentifier":
            {
              "detail":     "This CA will not issue for \"example.net\"",
              "identifier":
              {
                "dns": "example.net"
              }
            }
          }
        ]
      }
    }
a lot of the readability depends on the formatting. we could format the json example more densely:

    {"urn:ietf:params:acme:error:malformed": {
      "detail":     "Some of the identifiers requested were rejected",
      "subproblems": [
        "urn:ietf:params:acme:error:malformed": {
          "detail":     "Invalid underscore in DNS name \"_example.org\"",
          "identifier": {
            "dns": "_example.org" }},
        "urn:ietf:params:acme:error:rejectedIdentifier": {
          "detail":     "This CA will not issue for \"example.net\"",
          "identifier": {
            "dns": "example.net" }}]}}
doing that shows that the main problem that makes json harder to read is the quotes around strings.

because if we spread out the s-expression example:

    (urn:ietf:params:acme:error:malformed
      (detail      "Some of the identifiers requested were rejected")
      (subproblems
        ((urn:ietf:params:acme:error:malformed
           (detail     "Invalid underscore in DNS name \"_example.org\"")
           (identifier 
             (dns _example.org)
           )
         )
         (urn:ietf:params:acme:error:rejectedIdentifier
           (detail     "This CA will not issue for \"example.net\"")
           (identifier
             (dns example.net)
           )
         )
        )
      )
    )
that doesn't add much to the readability, since, again, the primary win in readability comes from removing the quotes.


Yes and no. It seems obvious it was the advertisement, but I know people who voted for Trump who are otherwise fairly liberal. They were either grossly uninformed or misinformed, or they simply _didn't believe_ the reporting about various issues.

The last is the most frustrating to me because there is a hint of the truth there - the stuff reported about Trump _is_ insane. They're doing things so openly and brazenly that there are kneejerk reactions to either ask "is it really so bad if they're doing it in the open" or "surely the reporting must be a lie because no one would be that shameless".


I'm not buying it. The guy was president for 4 years, tried to steal an election, and before all of that, challenged Obama's eligibility based entirely on his name and the color of his skin.

Being "grossly uninformed" is no excuse anymore.


I don't disagree. I'm furious with these people. They're close to me.

They aren't stupid... just not paying attention and skeptical due to a combination of propaganda (fake news!) and rightful incredulity at the state of things.

But I can't excuse them.


Shouldn’t voters at least try in good faith to inform themselves? How else can we expect democracy to work?

For example: the day after Brexit, so many people regretted voting to leave. They could’ve thought about it 24 hours earlier, no? “I was misinformed, uninformed” sounds lazy and shallow, doesn’t it? How hard can it be to spend an hour less on Netflix and an hour more learning about what’s on the ballot?


Dude's last major act was to turn a mob loose on Congress in order to get SCOTUS to repeat 2000. It wasn't obscure news.

Anyone pikachufacing here is a liar.


As I read the paper, you would be able to detect it in a couple of ways:

1. possibly high loss where the models don't have compatible embedding concepts

2. given a sufficient "sample" of vectors from each space, projecting them to the same backbone would show clusters where they have mismatched concepts

It's not obvious to me how you'd use either of those to tweak the vector space of one to not represent some concept, though.

But if you just wanted to make an embedding that is unable to represent some concept, presumably you could already do that by training the disjoint "unrepresentable concepts" to a single point.


This really feels like trying to use Go for a purpose that it is inherently not designed for: absolute performance.

Go is a fantastic case study in "good enough" practical engineering. And sometimes that means you can't wring out the absolute max performance and that's okay.

It's frustrating, but it is fulfilling its goals. Its goals just aren't yours.


After using Go for large and medium projects, with hundreds and < 10 people respectively, I’m left with a desire for a better language.

I enjoy the simplicity of Go, but the tradeoff of Go’s imposed limitations (concurrency model, no subtype polymorphism in place of parametric polymorphism, no methods on external types, etc) feel like strict limitations without much benefit. (Minus labels and gotos, that’s great design)

The speed I don’t care about

The model of loose global functions for fundamental operations on built-in data types is poor design: append(), delete(), close(), etc. How about making these methods on their respective types?
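
For concreteness, something like this (the List wrapper is a hypothetical illustration of method-style APIs, not a real proposal or anything from the standard library):

    package main

    import "fmt"

    // List is a made-up wrapper type showing what method-style operations look like.
    type List[T any] struct{ items []T }

    func (l *List[T]) Append(v T) { l.items = append(l.items, v) }
    func (l *List[T]) Len() int   { return len(l.items) }

    func main() {
        // Today: loose global built-ins operating on the built-in types.
        xs := []int{1, 2}
        xs = append(xs, 3)
        m := map[string]int{"a": 1}
        delete(m, "a")
        ch := make(chan int, 1)
        close(ch)
        fmt.Println(xs, len(m))

        // A wrapper type can expose the same operations as methods instead.
        var l List[int]
        l.Append(3)
        fmt.Println(l.Len())
    }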

Overall, coding in Go feels like coding in a pre-alpha language. Syntax-wise, that is; the tooling and packaging are top notch.


I drink chai lattes instead of coffee. Never could get on board with the bitter, brown bean water.


Chai from a coffee shop or mix is a sugar drink, which is why it is delicious.


Completely true, but comparable with lots of other coffee drinkers' habits. I make mine half chai concentrate from a local restaurant and half 2% milk, in small portion sizes of about 120 calories, so I don't feel _too_ bad about it.

I might have slightly more sugar than the recommended DV, but not by much.


Hopefully he means milk tea. Which is indeed great, but so is a good coffee.


I have no problem with 'OpenAI' so much as with the individual running it and, more generally, with rich financiers making the world worse in every capitalizable way, and even in some ways they can't capitalize on.


Again?


I'm very curious what this math pretest looked like: whether it was "proper" high-level math or, like, computing some trig problems. I'd expect aptitude with algebra or number theory or topology to be correlated, but not rote computational math.


It was arithmetic. The study is n=42 p-hacking nothingness, and then a highly misleading misinterpretation by the junk popsci website.


If you can't stop an LLM from _saying_ something, are you really going to trust that you can stop it from _executing a harmful action_? This is a lower stakes proxy for "can we get it to do what we expect without negative outcomes we are a priori aware of".

Bikeshed the naming all you want, but it is relevant.


> are you really going to trust that you can stop it from _executing a harmful action_?

Of course, because an LLM can’t take any action: a human being does, when he sets up a system comprising an LLM and other components which act based on the LLM’s output. That can certainly be unsafe, much as hooking up a CD tray to the trigger of a gun would be — and the fault for doing so would lie with the human who did so, not with the software which ejected the CD.


Given that the entire industry is in a frenzy to enable "agentic" AI - i.e. hooking up tools that have actual effects in the world - that is at best a rather naive take.

Yes, LLMs can and do take actions in the world, because things like MCP allow them to translate speech into action, without a human in the loop.


Exactly this. 70% of CEOs say that they hope to be able to lay people off and replace them with an LLM soon. It doesn’t matter that LLMs are incapable of reasoning at even the same level as an elementary school child. They’ll do it because it’s cheap and trendy.

Many companies are already pushing LLMs into roles where they make decisions. It’s only going to get worse. The surface area for attacks against LLM agents is absolutely colossal, and I’m not confident that the problems can be fixed.


> 70% of CEOs say that they hope to be able to lay people off and replace them with an LLM soon

Is the layoff-based business model really the best use case for AI systems?

> The surface area for attacks against LLM agents is absolutely colossal, and I’m not confident that the problems can be fixed.

The flaws are baked into the training data.

"Trust but verify" applies, as do Murphy's law and the law of unintended consequences.


> Is the layoff-based business model really the best use case for AI systems?

No, but it plays well in quarterly earnings. So expect a good bit of it, I'd say.

> The flaws are baked into the training data.

Not in terms of attack surface.

In terms of hallucinations... ish. It's more baked into the structure than the training data. There's increasing amounts of work to ensure well-curated training data. Still, the whole idea of probabilistic selection of tokens more or less guarantees it'll go wrong from time to time. (See e.g. https://arxiv.org/abs/2401.11817)
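
A toy sketch of that point (the logits are made up and have nothing to do with any real model): as long as sampling keeps nonzero probability mass on the less likely token, it will get picked eventually.

    package main

    import (
        "fmt"
        "math"
        "math/rand"
    )

    // sample draws an index from a softmax over the logits at the given temperature.
    func sample(logits []float64, temp float64, r *rand.Rand) int {
        weights := make([]float64, len(logits))
        var sum float64
        for i, l := range logits {
            weights[i] = math.Exp(l / temp)
            sum += weights[i]
        }
        x := r.Float64() * sum
        for i, w := range weights {
            x -= w
            if x <= 0 {
                return i
            }
        }
        return len(logits) - 1
    }

    func main() {
        r := rand.New(rand.NewSource(1))
        logits := []float64{4.0, 0.5} // index 0 = plausible token, index 1 = implausible one
        wrong := 0
        for i := 0; i < 100000; i++ {
            if sample(logits, 1.0, r) == 1 {
                wrong++
            }
        }
        fmt.Printf("implausible token sampled %d times out of 100000\n", wrong)
    }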

In terms of safety, there's some progress. CaMeL is rather interesting: https://arxiv.org/abs/2503.18813

> "Trust but verify" applies.

FWIW, that applies to humans too. Nobody is error-free. And so the question becomes mostly "does the value overshadow the cost of the error", as always.


That would still be on whoever set up the agent and allowed it to take action, though.


To professional engineers who have a duty towards public safety, it's not enough to build an unsafe footbridge and hang up a sign saying "cross at your own risk".

It's certainly not enough to build a cheap, un-flight-worthy airplane and then say "but if this crashes, that's on the airline dumb enough to fly it".

And it's very certainly not enough to put cars on the road with no working brakes, while saying "the duty of safety is on whoever chose to turn the key and push the gas pedal".

For most of us, we do actually have to do better than that.

But apparently not AI engineers?


Maybe my comment wasn’t clear, but it is on the AI engineers. Anyone that deploys something that uses AI should be responsible for “its” actions.

Maybe even the makers of the model, but that’s not quite clear. If you produced a bolt that wasn’t to spec and failed, that would probably be on you.


As far as responsibility goes, sure. But when companies push LLMs into decision-making roles, you could end up being hurt by this even if you’re not the responsible party.

If you thought bureaucracy was dumb before, wait until the humans are replaced with LLMs that can be tricked into telling you how to make meth by asking them to role play as Dr House.


I see far more offerings pushing these flows onto the market than actual adoption of those flows in practice. It's a solution in search of a problem, and I doubt most are fully eating their own dogfood as anything but contained experiments.


> that is at best a rather naive take.

No more so than correctly pointing out that writing code for ffmpeg doesn't mean that you're enabling streaming services to try to redefine the meaning of the phrase "ad-free" because you're allowing them to continue existing.

The problem is not the existence of the library that enables streaming services (AI "safety"), it's that you're not ensuring that the companies misusing technology are prevented from doing so.

"A company is trying to misuse technology so we should cripple the tech instead of fixing the underlying social problem of the company's behavior" is, quite frankly, an absolutely insane mindset, and is the reason for a lot of the evil we see in the world today.

You cannot and should not try to fix social or governmental problems with technology.


I really struggle to grok this perspective.

The semantics of whether it’s the LLM or the human setting up the system that “takes an action” are irrelevant.

It’s perfectly clear to anyone that cares to look that we are in the process of constructing these systems. The safety of these systems will depend a lot on the configuration of the black box labeled “LLM”.

If people were in the process of wiring up CD trays to guns on every street corner, you’d, I hope, be interested in CDGun safety and the algorithms being used.

“Don’t build it if it’s unsafe” is also obviously not viable, the theoretical economic value of agentic AI is so big that everyone is chasing it. (Again, it’s irrelevant whether you think they are wrong; they are doing it, and so AI safety, steerability, hackability, corrigibility, etc are very important.)


But isn't the problem that one shouldn't ever trust an LLM to only do what it is explicitly instructed to do, with correct resolutions of any instruction conflicts?

LLMs are "unreliable", in the sense that when using LLMs one should always account for the fact that, no matter what they try, any LLM will do something that could be considered undesirable (both foreseeable and unforeseeable).


> If you can't stop an LLM from _saying_ something, are you really going to trust that you can stop it from _executing a harmful action_?

You hit the nail on the head right there. That's exactly why LLMs fundamentally aren't suited for any greater unmediated access to "harmful actions" than other vulnerable tools.

LLM input and output always needs to be seen as tainted at their point of integration. There's not going to be any escaping that as long as they fundamentally have a singular, mixed-content input/output channel.

Internal vendor blocks reduce capabilities but don't actually solve the problem, and the first wave of them are mostly just cultural assertions of Silicon Valley norms rather than objective safety checks anyway.

Real AI safety looks more like "Users shouldn't integrate this directly into their control systems" and not like "This text generator shouldn't generate text we don't like" -- but the former is bad for the AI business and the latter is a way to traffic in political favor and stroke moral egos.


The way to stop it from executing an action is probably to have controls on the action and not on the LLM: whitelist what API commands it can send so nothing harmful can happen, and so on.
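
Roughly something like this, as a sketch (ToolCall and Gate are made-up names, not from any particular framework); the check lives outside the model, so it holds no matter what text the LLM emits:

    package main

    import (
        "errors"
        "fmt"
    )

    // ToolCall is a hypothetical shape for an action an LLM proposes to take.
    type ToolCall struct {
        Name string
        Args map[string]string
    }

    // Gate only forwards calls whose names are on the allowlist.
    type Gate struct{ allowed map[string]bool }

    func NewGate(names ...string) *Gate {
        g := &Gate{allowed: map[string]bool{}}
        for _, n := range names {
            g.allowed[n] = true
        }
        return g
    }

    func (g *Gate) Execute(c ToolCall) error {
        if !g.allowed[c.Name] {
            return errors.New("blocked: " + c.Name + " is not allowlisted")
        }
        // Dispatch to the real, side-effecting implementation here.
        fmt.Println("executing", c.Name, c.Args)
        return nil
    }

    func main() {
        gate := NewGate("search_docs", "read_ticket")
        fmt.Println(gate.Execute(ToolCall{Name: "read_ticket", Args: map[string]string{"id": "42"}}))
        fmt.Println(gate.Execute(ToolCall{Name: "drop_database"})) // rejected regardless of the prompt
    }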


This is similar to the halting problem. You can only write an effective policy if you can predict all the side effects and their ramifications.

Of course you could do like deno and other such systems and just deny internet or filesystem access outright, but then you limit the usefulness of the AI system significantly. Tricky problem to be honest.


It won't be long before people start using LLMs to write such whitelists too. And the APIs.


I wouldn't mind seeing a law that required domestic robots to be weak and soft.

That is, made of pliant material and with motors with limited force and speed. Then no matter if the AI inside is compromised, the harm would be limited.


Humans are weak and soft, but can use their intelligence to project forces much greater than what's available in their physical bodies.


I don't see how it is different from all of the other sources of information out there, such as websites, books, and people.

