
Not all, but many, kids are destined to hate what you’re good at.

I don’t know about Smalltalk, but a single-core system is orthogonal to the highly concurrent systems that actors represent.


Doesn’t change the fact that Smalltalk was neither actor-based nor highly concurrent, which is what we are talking about here.

https://news.ycombinator.com/item?id=18333234


It's not a problem of interpretation or visualization or charts. People are talking about it as if it's a matter of deception or misinterpretation, but the problem is deeper than that.

It's a fundamental problem of reality.

The nature of reality itself prevents us from determining causality from observation alone; this includes looking at a chart.

If you observe two variables, whether those random variables correlate or not, there is NO way to determine if one variable is causative of the other through observation alone. Any causation in a conclusion drawn from observation alone is in actuality only assumed. Note the key phrase here: "through observation alone."

In order to determine if one thing "causes" another thing, you have to insert yourself into the experiment. It needs to go beyond observation.

The experimenter needs to turn off the cause and turn on the cause in a random pattern and see whether that changes the correlation. Only through this can one determine causation. If you don't agree with this, think about it a bit.

Also note that this is how they approve and validate medicine... they have to prove that the medicine/procedure "causes" a better outcome, and the only way to do this is to make giving and withholding the medicine part of the trial.
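Here is a toy sketch of the point (my own simulation with a made-up hidden confounder Z, not anything from the article): two variables can correlate strongly under pure observation, and only a randomized intervention reveals that neither causes the other.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Observation alone: X and Y share a hidden cause Z.
    z = rng.normal(size=n)
    x = z + rng.normal(scale=0.5, size=n)
    y = z + rng.normal(scale=0.5, size=n)
    print(np.corrcoef(x, y)[0, 1])   # ~0.8, strong correlation

    # Intervention: the experimenter sets X at random, severing
    # any link between X and Z. Y still depends only on Z.
    x_forced = rng.normal(size=n)
    y_still = z + rng.normal(scale=0.5, size=n)
    print(np.corrcoef(x_forced, y_still)[0, 1])   # ~0, correlation gone

If X truly caused Y, forcing X at random would preserve the correlation; here it vanishes, which is exactly what inserting yourself into the experiment buys you.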


I find the definition of causality that places it squarely in the realm of philosophy to be a dead end, or perhaps a circle with no end, objective, or goal.

"What does it mean that something is caused by something else?" At the end of it all, what matters is how it's used in the real world. Personally I find the philosophical discussion to be tiresome.

In law, "to cause" is pretty strict: "but for" A, B would not exist or have happened. Therefore A caused B. That's one version. Other people and regimes have theirs.

This is why it's something I try to avoid.

In any case, descriptions of distributions are more comprehensive and avoid conclusions.


I'm not talking about philosophy. Clinical trials for medicine use this technique to determine causality. I'm talking about something very practical and well known.

It is literally the basis for medicine. We have to have a "hand in the experiment": clinical trials withhold medicine and give medicine in order to establish that the medicine "causes" a "cure". Clinical trials are by design not about just observation.

Likely, you just don't understand what I was saying.


I believe I understood what you were saying.

The criterion or definition for "A causes B" that you alluded to is a useful one in the context of medicine:

> The experimenter needs to turn off the cause and turn on the cause in a random pattern and see whether that changes the correlation. Only through this can one determine causation. If you don't agree with this, think about it a bit.

It's useful because it establishes a threshold we can use and act on in the real world.

I think there is more nuance and context here, though. In clinical trials, minimum cohort sizes are required, possibly tied to a power analysis (turning the cause on and off for one person doesn't give us much confidence, but doing it for 1000 people gives much more).

So the definition of "causes" for clinical trials and medicine hinges on more than just turning on and off; it relies on effect size and sample size in the experiment.
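To make that concrete (a sketch of my own with illustrative numbers; statsmodels can solve for the cohort size needed to detect a given effect):

    # Sample size per arm to detect a small effect (Cohen's d = 0.2)
    # at 80% power with a two-sided test at alpha = 0.05.
    from statsmodels.stats.power import NormalIndPower

    n_per_arm = NormalIndPower().solve_power(
        effect_size=0.2, alpha=0.05, power=0.8,
        alternative='two-sided')
    print(round(n_per_arm))  # ~392 subjects per arm

The required cohort grows with the inverse square of the effect size, which is why turning the cause on and off for a handful of people isn't enough.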

Going back to TFA, this is the problem when we bring "cause" into the discussion: the definition of it varies depending on the context.


> So the definition of "causes" for clinical trials and medicine hinges on more than just turning on and off; it relies on effect size and sample size in the experiment.

Of course. The clinical trial is statistical, so the basis of the trial is trying to come to a conclusion about a population of people via a sample. That fact applies to both correlation and causation. Statistics is like an extension from person to people… rather than coming to a conclusion about one person, you can do it for a sample of the population.

Causality is independent of that extension. You can measure causality against a sample of a population or even a single thing. The requirement of inserting yourself into the experiment still exists in both cases. This is basic, and a simple thought experiment can demonstrate it.

You have two switches, two lights, and two people. Each person turns their respective switch on and off, and the light turns on and off in the expected pattern, exactly matching the state of the switch.

You know one of the switches is hooked up to its light and “causes” the light to turn on and off. The other switch is BS: its light is turning on and off on some predetermined pattern, and the person flipping that switch has memorized the pattern and is making it look like the switch is causative.

How do you determine which switch is causative to the light turning on and off and which one isn’t?

Can you do it through observation alone? Can you just watch the people flip the switch? Or do you have to insert yourself into the experiment and flip both switches randomly yourself to see which one is causal to the light turning on or off?

The answer is obvious. I’m anticipating a pedantic response where you just “observe” the wiring of the switch and the light; to that I would say I’m obviously not talking about that. You can assume all the wiring is identical and the actual mechanism is a perfect black box. We never actually try to determine causality or correlation unless we are dealing with a black or semi-black box, so please do not go down that pedantic road.
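In code the asymmetry is stark (a toy version I am sketching here, nothing rigorous):

    import random

    n = 1000
    pattern = [random.choice([0, 1]) for _ in range(n)]

    # Passive observation: both switches appear to control their lights.
    real_switch = [random.choice([0, 1]) for _ in range(n)]
    real_light = list(real_switch)  # wired: the light tracks the switch
    fake_switch = list(pattern)     # operator mimics the memorized pattern
    fake_light = list(pattern)      # ...so switch and light still match
    print(real_switch == real_light, fake_switch == fake_light)  # True True

    # Intervention: we flip both switches at random ourselves.
    my_flips = [random.choice([0, 1]) for _ in range(n)]
    real_light = list(my_flips)     # the wired light still follows us
    fake_light = list(pattern)      # the fake light ignores our flips
    print(my_flips == real_light)                                 # True
    print(sum(a == b for a, b in zip(my_flips, fake_light)) / n)  # ~0.5

The observer sees two perfect correlations; only the intervener sees one of them fall apart.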

You should walk away from this conversation with new intuition on how reality works.

You shouldn’t be getting too involved in mathematical definitions, the details of what a clinical trial involves, or pedantic formal definitions.

Gain a deep understanding of why causality must be determined this way, and that helps you see why the specific, detailed protocols of clinical trials were designed that way in the first place.


I'd say this is generally true, but in practice there are a decent number of cases where some reasoning can give you pretty good confidence one way or the other, mainly by considering what other correlations exist and what causal relationships are plausible (because not all of them are).

(I say this coming from an engineering context, where e.g. you can pretty confidently say that your sensor isn't affecting the weather but vice-versa is plausible)


This is true fundamentally, not just generally. It is a fundamental facet of reality.

In practice it’s hard to determine causality, so people make assumptions. Most conclusions are like that. I said in the original post that causal conclusions from observation alone must involve assumptions, which is fine given available resources. If you find that people who smoke weed have lower IQs, you can conclude that weed lowers IQ, assuming that all weed smokers had average IQs before smoking, and this is fine.

I’m sure you’ve seen many causal conclusions retracted because of incorrect assumptions, so it is in general a very unreliable method.

And that’s why in medicine they strictly have to do causation-based testing: they can’t afford to have a conclusion based on an incorrect assumption.


Sorry, I meant in general in the broader sense that you were intending (i.e., I agree). And if you really get down to brass tacks, it's not obvious you can actually do truly causative testing. (See e.g. superdeterminism, which posits that you fundamentally can't, as a way to explain quantum weirdness in physics.)

I find Red Dead Redemption 2 more impressive. I don’t know why. It sounds stupid, but S3 on the surface has the simplest API, and it’s just not impressive to me when compared to something like that.

I’m curious which one is actually more impressive in general.


Simple to use from the external interface, yes, but the backend is wildly impressive.

Some previous discussion: https://news.ycombinator.com/item?id=36900147


AWS has said that the largest S3 buckets are spread over 1 million hard drives. That is quite impressive.


Red Dead Redemption 2 is likely on over 74 million hard drives.


I think you misunderstood. They're not saying S3 uses a million hard drives; they're saying that there exist some large single buckets that use a million hard drives just for that one bucket/customer!


Actually, data from more than one customer would be stored on those million drives. But data from one customer is spread over 1 million drives to get the needed IOPS from spinning hard drives.
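A back-of-the-envelope illustration of why (my numbers; ~150 random IOPS per disk is an assumed figure for a 7200 rpm drive):

    # Rough aggregate random-I/O capacity of a bucket
    # striped across 1 million spinning drives.
    drives = 1_000_000
    iops_per_drive = 150  # assumed per-disk figure
    print(f"{drives * iops_per_drive:,} aggregate IOPS")  # 150,000,000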

There are likely over a trillion active SQLite databases in use right now.


> S3 on the surface has the simplest API, and it’s just not impressive [...]

Reminded of the following comment from not too long ago.

https://news.ycombinator.com/item?id=43363055


That's the strangest comparison I have seen. What axis are you really comparing here? Better graphics? Sound?


The complexity and sheer intelligence and capability required to build either.


And what is the basis for your claim? You are not impressed by the complexity, intelligence, and capability it takes AWS to build and manage 1-2 zettabytes of storage near-flawlessly?


I’m more impressed by Red Dead Redemption 2 or Baldur’s Gate 3.

There is no “basis” other than my gut feeling. Unless you can get quantified metrics to compare, that’s all we’ve got. For example, if you had lines of code for both, or average IQ, either would lead toward the “basis”, which neither you nor I have.


They said that about self-driving cars for over 10 years.

10 years later, we now have self-driving cars. It’s the same shit with LLMs.

People will be bitching and complaining about how all the industry people are wrong and making over-optimistic estimates, and the people will be right. But give it 10 years and see what happens.


From what I remember, full self-driving cars were a couple of years off in 2010.

It took 10-15 years to get self-driving cars in a specific country under specific weather conditions, a country that is perhaps the most car-friendly on the planet. Also, there are always people monitoring the vehicles, who take control sometimes.

How many more years for Waymo Quito or Waymo Kolkata? What about just $europeanCapital?

Same with LLMs: I'm sure in 10 years they'll be good enough to take over certain specific tasks, to the detriment of recent graduates, especially those with artistic aspirations. Not sure they'll ever get to the point where someone who actually knows what they're doing doesn't need to supervise and correct.


I am quite confident that a normal 16-year-old can still drive in 6 inches of snow better than the most advanced AI-driven car. I am not sure the snow-driving bit will ever be solved, given how hard it is.


If you’ve never ridden in one, I would try it. AI is a better driver than Uber in general; ask anyone who’s done both. There’s no snow where I live, so it’s not a concern for me; you could be right about that.

But trust me, in the next 6 months AI driving through snow will be 100% ready.


> But trust me, in the next 6 months AI driving through snow will be 100% ready.

I’ll believe it when I see Waymo expand into Buffalo or Syracuse.

Driving on unplowed roads with several inches of snow is challenging; sometimes you can’t tell where the road stops and the curb/ditch/median starts. Do you follow the tire tracks or somehow stay between the lane markers (which aren’t visible due to the snow)?


Over and over again, this pattern of theorizing:

"I am not sure that AI will ever be able to do XYZ given how hard of a problem it is."

proves to be incorrect in the long run.


We must know very different 16-year olds.


We only have good self-driving cars with lidar and extensive pre-mapping steps. Which is fine, but per some billionaire carmaker’s metrics that’s not even close to good enough. And the billionaire’s cars have a tendency to randomly drive off the road at speed.


They can improve. You can make one adjust its own prompt. But the improvement is limited to the context window.

It’s not far off from human improvement. Our improvement is limited to what we can remember as well.

We go a bit further in the sense that the neural network itself can grow new modules.


It's radically different from human improvement. Imagine if you were handed a notebook with a bunch of writing that abruptly ends. You're asked to read it and then write one more word. Then you have a bout of amnesia and you go back to the beginning with no knowledge of the notebook's contents, and the cycle repeats. That's what LLMs do, just really fast.

You could still accomplish some things this way. You could even "improve" by leaving information in the notebook for your future self to see. But you could never "learn" anything bigger than what fits into the notebook. You could tell your future self about a new technique for finding integrals, but you couldn't learn calculus.
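A rough sketch of that "notebook" loop (call_llm is a hypothetical stand-in for any completion API, not a real library call):

    MAX_CONTEXT = 8_000  # the "notebook" size; characters, for simplicity

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for any LLM completion API.
        return "next step, given: " + prompt[-40:]

    notebook = "Task: solve X. Notes so far: (none)"
    for _ in range(10):
        # The model sees only the notebook, never its past runs.
        result = call_llm(notebook)
        # It can "improve" only by writing notes back to itself...
        notebook += "\n" + result
        # ...and anything past the context limit is forgotten forever.
        notebook = notebook[-MAX_CONTEXT:]

Everything the process "knows" lives in the notebook; drop a note and it is gone, which is the point of the amnesia analogy.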


AI is the future. Face the truth.

Human creativity is cheapened by AI because AI can produce work faster than, equal to, and many times superior to human output.

This isn’t something that can be changed. It’s reality, and it’s not necessarily a good reality, but it’s something you have to accept.

The Procreate title, the whole marketing idea, even the article itself could have been generated by AI and we would be none the wiser.

I can use AI-generated art and bullshit and say I used Procreate, and no one would know.

This ad reads like an ad for typewriters at the dawn of MS Word. It’s just that everything is happening 10x faster, and it’s all much more tragic, as humanity is losing much more here.


The best mech game, which I have yet to see replicated, is the first Virtual On.

Somehow the mechanics were just right: you could pull off these rocket-powered side dashes to do incredible dodges.

Not even Armored Core was able to imitate that effect.


> armored core

Huh, not even Armored Core: For Answer?

https://www.youtube.com/watch?v=5NOVvZunHRs

;p


https://youtu.be/8W1uwaNNR04?si=i0szM3hNmSVmXGJP

I feel like I’m piloting a mech and dodging actual projectiles.

The one you shared feels like a flight simulator.


Fail them. Only let AI-generated text that has been verified and edited to be true pass.

If they want to use AI, make them use it right.


This is universal. I’ve had largely the same experience. There are several reasons for this.

1. Stupider people are better teachers. Smart people are too smart to have any empathy for what it’s like to not get something. They assume the world is smart like them, so they gloss over topics they found trivial but most people find confusing.

2. They don’t need to teach. If the student body is so smart, then the students can learn without being taught.

3. Since students learn so well, there’s no way to differentiate them, so institutions make the material harder. They do this to differentiate students and produce rankings. Inevitably this makes education worse.


It's simpler than that. "Prestigious" universities emphasize research prestige above all else for faculty. Faculty optimize for it, and some even delight in being "hard" (bad) teachers because they see teaching as beneath them.

Less "prestigious" universities apply less of that pressure.


It can also be different within the same university, by department. I graduated from a university with a highly ranked, research-oriented engineering department. I started in computer engineering, which was in the college of engineering, but ended up switching to computer science, which was in the college of arts and sciences. The difference in the teachers and classroom experience was remarkable. It definitely seemed like the professors in the CS department actually wanted to teach and enjoyed teaching, compared to the engineering professors, who treated it like a waste of their time and expected you to learn everything from the book and their half-assed, one-way bullet-point lectures. Unfortunately or fortunately, depending on your view, it also meant having to take more traditional liberal-arts electives in order to graduate.


I did once have a physics lecturer say "When I took Quantum Mechanics back in my undergrad, I got an A but didn't actually understand anything," and then, in the same lecture 20 minutes later, "What part of this do you not understand?" when the entire class was just blankly looking at the whiteboard.


At least at the undergrad level, it's not impossible to get an "A" without actually learning anything, especially in freshman/sophomore-level classes. You just cram for the exams and regurgitate what you memorized. Within a few months' time it's mostly gone.


Seriously, what is so hard to understand in the first 20 minutes of QM?


Probably depends on how it’s explained, no?

I could make arithmetic incomprehensible, let alone QM.


They never implied it was the first 20 minutes of the entire course.


This was about halfway through the semester.
