Or just (flip .), which also allows ((flip .) .) etc. for further flips.
In Smullyan's "To Mock a Mockingbird", these combinators are described as "cardinal combinator once/twice/etc. removed", where the cardinal combinator itself defines flip.
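A quick ghci sketch of the pattern (example mine, not from Smullyan): flip swaps arguments 1 and 2, (flip .) swaps arguments 2 and 3, and ((flip .) .) swaps arguments 3 and 4.

    -- flip          swaps args 1 and 2 of a binary function
    -- (flip .)      swaps args 2 and 3 of a ternary function
    -- ((flip .) .)  swaps args 3 and 4 of a 4-ary function
    cat3 :: String -> String -> String -> String
    cat3 a b c = a ++ b ++ c

    cat4 :: String -> String -> String -> String -> String
    cat4 a b c d = a ++ b ++ c ++ d

    -- ghci> flip (++) "a" "b"                  -- "ba"
    -- ghci> (flip .) cat3 "a" "b" "c"          -- "acb"
    -- ghci> ((flip .) .) cat4 "a" "b" "c" "d"  -- "abdc"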
As a reference on the volume aspect: I have a tiny server where I host some of my git repos. After my server's fans spun faster and louder every week, I decided to log the requests [1]. In a single week, ClaudeBot made 2.25M (!) requests (7.55GiB), whereas GoogleBot made only 24 requests (8.37MiB). After installing Anubis, the traffic went back down to pre-AI-hype levels.
create x = 10;
time point;
print x; //prints 10 in first timeline, and 20 in the next
create traveler = 20;
traveler warps point {
    x = traveler;
    traveler kills traveler;
};
My unjustified and unscientific opinion is that AI makes you stupid.
That's based solely on my own personal vibes after regularly using LLMs for a while. I became less willing to think critically and carefully, and less capable of it.
It also scares me how good they are at ingratiating themselves and at social engineering. They have made me feel good about poor judgment and bad decisions at least twice (which I noticed later, still in time). Give them a new, strict system prompt and they give the opposite opinion and recommend against their previous suggestion. They are so good at arguing that they can justify almost anything and make you believe that this is what you should do, unless you are among the 1% of experts in the topic.
> They are so good at arguing that they can justify almost anything
This honestly just sounds like distilled intelligence. Because a huge pitfall for very intelligent people is that they're really good at convincing themselves of really bad ideas.
That, but commoditized en masse to all of humanity, will undoubtedly produce tragic results. What an exciting future...
> They are so good at arguing that they can justify almost anything
To sharpen the point a bit, I don't think it's genius "arguing" or logical jujitsu, but some simpler factors:
1. The experience has reached a threshold where we start to anthropomorphize the other end as a person interacting with us.
2. If there were a person, they'd be totally invested in serving you, with nearly unlimited amounts of personal time, attention, and focus given to your questions and requests.
3. The (illusory) entity is intrinsically shameless and appears ever-confident.
Taken together, we start judging the fictional character like a human, and what kind of human would burn hours of their life tirelessly responding to and consoling me for no personal gain, never tiring, breaking character, or expressing any cognitive dissonance? *gasp* They're my friend now and I should trust them. Keeping my guard up is so tiring anyway, so I'm sure anything wrong is either an honest mistake or some kind of misunderstanding on my part, right?
TLDR: It's not mentat-intelligence or even eloquence, but rather stuff that overlaps with culty indoctrination tricks and con[fidence]-man tactics.
Using AI to completely offload thinking is a total misuse of the technology.
But at the same time, it feels like the fact that this technology can be misused and cause real psychological harm is kind of a new thing. Right? There are reports of AI psychosis; I don't know how real it is, but if it's real, I don't know of any other tool that has produced that kind of side effect.
We can talk a lot about how a tool should be used and how best to use it correctly - and those discussions can be valuable. But we also need to step back and consider how the tool is actually being used, and the real effects we observe.
At a certain point you might need to ask what the toolmakers can do differently, rather than only blaming the users.
I mean, if your whole business is producing an endless stream of incorrect output and calling it good enough, why would you care about accuracy here? The whole ethos of the LLM evangelist, essentially, is "bad stuff is good, actually".
I pasted the image of the chart into ChatGPT-5 and prompted it with
>there seems to be a mistake in this chart ... can you find what it is?
Here is what it told me:
> Yes — the likely mistake is in the first set of bars (“Coding deception”). The pink bar for GPT-5 (with thinking) is labeled 50.0%, while the white bar for OpenAI o3 is labeled 47.4% — but visually, the white bar is drawn shorter than the pink bar, even though its percentage is slightly lower.
So they definitely should have had ChatGPT review their own slides.
Funny, isn't it - makes me feel like it's kind of over-fitted to try and be logical now, so when it's trying to express a contradiction it actually can't.
That would still be a basic fail. You don't label a chart by hand; you enter data, and the pre-AGI computer program does the rest - draws the bars and shows labels that match the data.
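A toy sketch of that point (numbers from the thread; code and names mine): if the bar length and the label are both derived from the same value, they can't disagree.

    -- Render a text bar chart where bar and label come from one datum.
    import Text.Printf (printf)

    bar :: (String, Double) -> String
    bar (name, v) = printf "%-24s %5.1f %s" name v (replicate (round v) '#')

    main :: IO ()
    main = mapM_ (putStrLn . bar)
      [ ("GPT-5 (with thinking)", 50.0)
      , ("OpenAI o3",             47.4)
      ]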
This half makes sense to me - 'deception' is an undesirable quality in an llm, so less of it is 'better/more' from their audiences perspective.
However, I can't think of a sensible way to actually translate that to a bar chart where you're comparing it against other things that don't have the same 'less is more' quality (the general fuckery with graphs not starting at 0 aside - how do you even pick a '0' when quality goes up as the number approaches it), and what they've done seems like total nonsense.
Clearly the error is in the number: most likely the actual value is 5.0 instead of 50.0, which matches both the bar height and the other single-digit GPT-5 results for metrics on the same chart.
I didn't know about this! That's brilliant, thank you for the pointer!
Since the death of LtU I don't really know where to learn about interesting new PL work. I try to occasionally read the POPL submissions but there's nothing like HN for PL.
Reddit's r/programminglanguages is still quite active. Otherwise most of the community switched to Discord, it seems. (I found Par by hopping the Discord servers of "Programming Language Development"->HOC->Vine->Par)
I've been recommended /r/pl a lot, but it's not quite the same vibe as LtU. LtU was carefully curated professional research with commentary, while /r/pl has a lot more amateurs asking questions about their hobby languages. Which is great - I'm happy that people are experimenting with programming languages - but it makes it hard to use as a way to keep up with the latest big results if you don't have the time to follow everything going on on the subreddit.
Maybe the Discord is the way to go. The user interface confuses the heck out of me, though. Appreciate the recommendation!
It's interesting how some of these diagrams are almost equivalent in the context of encoding computation in interaction nets using symmetric interaction combinators [1].
From the perspective of the lambda calculus, for example, the duplication of the addition node in "When Adding met Copying" [2] mirrors exactly the iterative duplication of lambda terms, i.e. something like (λx.x x) M!
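A minimal sketch of that duplication (code and names mine, not from [1] or [2]): beta-reducing (λx.x x) M copies the argument M, which is exactly the step a duplicator node in an interaction net makes explicit.

    -- Untyped lambda terms and a single root beta step.
    data Term = Var String | Lam String Term | App Term Term
      deriving Show

    -- Naive substitution; fine here since M is just a free variable.
    subst :: String -> Term -> Term -> Term
    subst v t (Var x)   | x == v    = t
                        | otherwise = Var x
    subst v t (Lam x b) | x == v    = Lam x b
                        | otherwise = Lam x (subst v t b)
    subst v t (App f a) = App (subst v t f) (subst v t a)

    beta :: Term -> Term
    beta (App (Lam x b) a) = subst x a b
    beta t                 = t

    -- ghci> beta (App (Lam "x" (App (Var "x") (Var "x"))) (Var "M"))
    -- App (Var "M") (Var "M")   -- M has been duplicated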
I will try to incorporate Hannah's notes style into my own stuff; I am using a similar lecturing device with the tablet, but her presentation is soooo far beyond mine. So beautiful!