exe34's comments

The Cynic on the other hand knows how to enjoy life with just enough. He is free, a spy for the gods.

But he keeps writing and talking about people who have more than enough, and how they are wrong.

He points out their foolishness - he has what they will never have. Enough.

It takes no imagination or insight to see reasons why something wouldn’t work. It’s the default mental pathway for every risk-averse beast. Skepticism is not born out of contentment and abundance but out of self-preservation. It’s not correlated with feeling enough, but with feeling bitterness and envy of those who took risks and gained an advantage instead of suffering consequences.

People who are content feel less need to take risks by accepting dubious statements without proof. They have what they need so why risk it for more?

Sceptical people will be grounded by what we know to be true. They will explore new ideas but will not be swept up by them. We need people like that or we'll waste our time on flights of fancy. But we need the irrational optimists to explore new ideas too. It's a classic exploration vs exploitation trade-off.


Many people who risked their money on Bitcoin likely had enough, and they risked the extra money they had lying around. Why not place a bet on something you think is probable? Is there something morally wrong in making an extra buck? Is it morally superior to just leave your money sitting in a bank account, or what?

To be honest I don't think the skeptical people thought bitcoin's success was probable and that's why they didn't bet on it. It's not really anything to do with them being content with what they have.

But it could be this too in some cases.

Some people do things unless they find a reason not to, whereas a skeptical person will only do things if they find a reason to.

People who really feel they have enough might not see any reason to spend their time or effort placing bets, even on things they think are probable. But I don't think many people think that way.


To have enough by your definition and to feel like one has enough are two very different ideas of enough.

The Cynic has enough if he has his cloak and found some food in the garbage can. He feels like he has enough. You might feel like that's not enough.

Conversely I might think the richest man in the world (by net worth) has enough. He feels like he needs more.


I'm pretty sure these peeps who hang out at /r/buttcoin go to work like regular people to get some fiat currency into their beloved government-blessed bank accounts. So I guess they don't feel like they have enough.

I have no idea what a buttcoin is, sorry.

> It takes no imagination or insight to see reasons why something wouldn’t work. It’s the default mental pathway for every risk-averse beast.

Quite the opposite: it takes a lot of willpower and risk to speak out against a hype. A kind of risk-affinity that unfortunately rarely makes you rich. :-(


that's an original definition, you should publish it in a journal of psychology!

The old "oh I just came up with that exact set of necessary and sufficient specs" in agile meetings.

if they want private information, they should buy it on the open market like every other company!

I have autism and I like using that kind of comparison when writing.

I suspect mental health issues are a big glowing neon sign that says "bully me".

Yep. Bullies are generally looking for a response. Someone who can deal with bullying in a level-headed and appropriate manner isn't an interesting or easy target. Someone who "freaks out" is fun and interesting to torment, and their response is more likely to bias authority figures against them, insulate the bully from consequences, and even recast the bully as the victim.

even worse, the local services have also been gutted and lobotomized, so once one of these outbreaks gets to CONUS, they'll be praying hard and dropping like flies.

Living life alone makes your life shorter, but it sucks enough that it'll feel a lot longer anyway, so it balances out!

I'm no longer living my life alone, but I greatly cherish the time that I did.

I find being alone wonderful.


"Life is very long when you're lonely."

-Steven Patrick Morrissey


Being alone and being lonely are not the same thing.

BINGO!

(this always gets brought up every single time, like a mantra that people have to keep saying to convince themselves. So it's on my bingo card.)


https://vimeo.com/384844632

"If you're going to talk shit, don't end your sentence with a conceptual preposition. You're making us all look bad."

#AFVP


and for those of us who are not fluent in gibberish?

my nixos with just xmonad works really well, I haven't noticed any degradation in the last 10 years of updates.

NixOS looks super cool, but it also looks too much like actual work. As a FreeBSD main for two decades, I've played that game already and have the (sadly, now long dead) tinderbox and poudriere installations to prove it.

Unfortunately, whenever you try to exclude LLMs from the holy church of intelligence based on what they can't do, you end up excluding a whole lot of humans too.

That's underestimating human intelligence.

Even low-IQ humans can in principle learn how to use Lean to represent a multivariate system. It might take a while, but in principle their brain is capable of that feat. In contrast, no matter how long I sit down with ChatGPT or Gemini or whatnot, they won't be able to. Because they are not intelligent.
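
For what it's worth, here's a minimal sketch of what "representing a multivariate system in Lean" might look like; the two-variable system and its solution are a made-up illustration of mine, assuming Lean 4 with Mathlib:

    import Mathlib

    -- Hypothetical toy example: the two-variable system
    --   2x + 3y = 12  and  x - y = 1
    -- stated as a proposition over the rationals and discharged
    -- with the witness x = 3, y = 2.
    example : ∃ x y : ℚ, 2 * x + 3 * y = 12 ∧ x - y = 1 :=
      ⟨3, 2, by norm_num, by norm_num⟩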

It's a great achievement of the AI hype that the burden of proof has been reversed. Here I am, having to defend my claim that they are not intelligent. But the burden of proof should be on those claiming intelligence. The claim that the earth is a sphere was extraordinary and needed convincing evidence. So was the claim that species arose through evolution. But the claim that LLMs are intelligent is so self-evident that rejecting the idea needs evidence? That's upside-down!


> Here I am, having to defend my claim that they are not intelligent.

It's because all you've done is make a claim without any evidence. Someone pointed out the challenge that most claims about them not being intelligent can't offer any criterion that an LLM can't also meet.

But instead of submitting any evidence to support your claim, you descended into hyperbole about how hard done by you are for being expected to support your claims.

In science, it's okay to say we don't know. The amount of disagreement - even amongst smart people - about whether LLMs are intelligent or not suggests to me that we just don't have universally accepted research and definitions that are tight enough to decisively say.

But you're talking like you _know_ the answer for sure, so much so that you don't need to support it with evidence or credentials, because those who disagree are obviously just poor victims of the AI hype machine.

Please make sure you pass your knowledge of your LLM discoveries onto the scientific community, you could change the world!


Did you read my post? The claim is "LLMs are intelligent". And instead of requiring evidence for that, apparently most folks (including you) are fine with just accepting that claim and require evidence if somebody questions this. That's what I'm doing.

It's like a religion. "God exists" is the claim. Nobody needs to provide evidence that this is not the case. "LLMs are intelligent" is the claim. Nobody needs to provide evidence against that. In either case, the burden of proof is with the one making the claim.

> In science, it's okay to say we don't know

But that's not what's happening. LLMs are called "AI". You know what the I stands for, right? It's not "artificial we-don't-know-if-intelligent".


No, you're not questioning it, you're making statements against it without evidence, which is just as useless as the original statement without evidence.

Like I suggested, like the person you responded to suggested: when science tries to prove or disprove LLM intelligence it generally descends into disagreements about definition or evidence (neither of which you provided).

The reason no evidence was provided for the original "LLMs are intelligent" claim is that - if you read through this thread - you were the first to make that claim, as well as the counter-claim.

> But that's not what's happening. LLMs are called "AI". You know what the I stands for, right? It's not "artificial we-don't-know-if-intelligent".

I don't know what you want me to say here - if you want to continue acting like there's some widely accepted and agreed on definition of intelligence that everyone who isn't you is an idiot for not knowing, then carry on.

I don't have a reliable definition for intelligence. Like I said, if you do, please share your findings with science, you could settle some fairly big debates and change the world in a meaningful way.


My claims:

1. LLMs are widely called "intelligent". Evidence for my claim: the term "artificial intelligence" is used everywhere. It even has its own TLD.

2. There is no evidence that this terminology is applicable. Questioning it faces some variant of "well do you have evidence to the contrary?". Evidence for my claim: This thread.

You are welcome to disprove my claims, as in the scientific spirit that you say you uphold.


> The term "artificial intelligence" that is used everywhere. It has its own TLD.

That's the country code TLD for Anguilla: https://en.wikipedia.org/wiki/.ai


Sure. The top 100 domains with that TLD still have little to do with Anguilla.

the way it goes is that a device/program of some sort displays a broad range of behaviours that were previously believed to be future signs of artificial intelligence (e.g. somewhat coherent text, passing the Turing test, giving the impression of following instructions and arriving at the correct results, etc).

some people claim this is artificial intelligence of a lower quality than humans, and these people expect that such mechanisms will eventually match and then potentially surpass humans.

then there's another crowd coming along and claiming no, this isn't intelligence at all, for example it can't tie its shoelaces.

my point was that every time you try to say "no, this can't be what intelligence means, it needs to do X", I can find a human who can't do X, no matter how many years you might try to coach them. (For example, I will never be a musician/composer. I simply lack the gene.)

The retort is always "oh but in principle a human could do this". Well, maybe next year's LLM will do it in practice, not just in principle, for all I know.

As they say, the person who says it can't be done should not stop the person doing it.

Heavier than air flight was once thought to be impossible. As long as you don't have a solid mathematical theorem that says only carbon replicators born from sexual intercourse can be intelligent, I expect some day silicon devices will do everything carbon creatures can do and more.


> my point was that every time you try to say that no this can't be what intelligence means, it needs to do X, I can find a human who can't do X,

Indeed, the point you are making is reasonable. But I'm trying to say that the premise is wrong. Nobody should be expected to come up with a reason why it is not intelligence. We should expect to be presented with evidence that it is intelligence. Absent that, the null hypothesis is that it isn't, just as, uncontroversially, no computer program before it was.

I'm sure you already got my point, apologies for repeating it, but some clarification to clearly carve out our points may not hurt.


>Nobody should be expected to come up with a reason why it is not intelligence.

I'm not asking anybody to come up with a reason why it's not intelligence. I'm telling people they're wrong when they do try to justify calling it not intelligent. If you want to gatekeep a word, you should at least try to define it and then stick to the definition.

It is intelligence in the sense that if somebody had described it in 2010, we would have said yes, that's intelligence and it's hundreds of years away. It isn't intelligence in the sense that it's now here and we've found holes in the story.

Intelligence is so poorly defined that it's an ever-receding finish line that somehow we're supposed to cross before we can call the device intelligent.

As Dennett said, it's like magic. Magic that is possible is just tricks. Real magic is that which is impossible.


IDK, they look intelligent, like the world looks flat.

As opposed to most humans? Have you tried reasoning with somebody "just trying to do my job sir"?

> Even low-IQ humans can in principle learn how to use Lean to represent a multivariate system.

That's an article of faith. In principle, elephants can fly at least once.


Only if you're pedantic about it. I find I can arrive at all sorts of absurd conclusions like that by being extremely pedantic.

It's almost as if thinking carefully about words leads to the realisation that words are approximations that are meant to describe, not prescribe.

If "intelligence" describes LLMs then it isn't doing a very good job.

try the bottom half of the population!
