
The milestone about understanding trade-offs really resonated with me. I had been using (what I call) the "Scylla-Charybdis Heuristic":

- Don't trust any model that implies X is too low unless it's also capable of detecting when X would be too high
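
A minimal sketch of this in code (all names here are hypothetical, just to pin the idea down): a model is only trusted if, over the plausible range of X, it's capable of flagging both failure directions.

    # Sketch of the heuristic, hypothetical names: a model that claims
    # "X is too low" is trusted only if it is also capable of saying
    # "X is too high" somewhere over the plausible range.
    def trustworthy(model, xs):
        claims_too_low = any(model.too_low(x) for x in xs)
        claims_too_high = any(model.too_high(x) for x in xs)
        return claims_too_high or not claims_too_low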

I've been frustrated by how people can fail this for X = the minimum wage, immigration, ease of getting public assistance (just to show that it's across the spectrum), as well as less political issues like "approaching strangers" or "trying to persuade after a rejection".

There is a very common mentality out there that cannot admit to any downsides to a favored policy. I had often attributed this to "well, they have to be wary of tripping others' poor heuristics", but this may itself be the #1 fallacy Alexander mentions! It can just as well be that the person cannot think in terms of tradeoffs.



I don't think this is good advice as stated. If you are showing the symptoms of scurvy then your Vitamin C intake is too low. This model tells you nothing about when your vitamin C intake is too high. But it's still valuable.


It depends on the (possibly implicit) model behind the advice. If the advice is "always take more vitamin C", then it's failing the SC Heuristic because it recognizes no downsides to vitamin C or when you would be taking too much; telling this to a scurvy sufferer is only correct by accident. It would fail to notice that e.g. too much can cause vitamin C poisoning or (if in the form of fruit juice) obesity or vomiting.

If the model says to take between X and Y units ("but I don't know what's causing the harms outside the range"), then it may be a shallow understanding but it's not failing the SCH, and it avoids a common failure mode.
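
To make the contrast concrete, here's a toy sketch (all names hypothetical): the "always take more" model can never flag the high side, so it fails the check, while the bounded-range model passes even without explaining why the bounds sit where they do.

    # Toy sketch, hypothetical names; same check as sketched upthread.
    def trustworthy(model, xs):
        return (any(model.too_high(x) for x in xs)
                or not any(model.too_low(x) for x in xs))

    class AlwaysMoreC:                       # "always take more vitamin C"
        def too_low(self, x): return True    # every dose reads as "too low"
        def too_high(self, x): return False  # can never say "too much"

    class DoseRange:                         # "take between lo and hi units"
        def __init__(self, lo, hi): self.lo, self.hi = lo, hi
        def too_low(self, x): return x < self.lo
        def too_high(self, x): return x > self.hi

    doses = range(0, 200, 10)                    # hypothetical dose sweep
    print(trustworthy(AlwaysMoreC(), doses))     # False: fails the SCH
    print(trustworthy(DoseRange(30, 90), doses)) # True: passes, even if shallow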


> it's failing the SC Heuristic because it recognizes no downsides to vitamin C or when you would be taking too much; telling this to a scurvy sufferer is only correct by accident. It would fail to notice that e.g. too much can cause vitamin C poisoning or (if in the form of fruit juice) obesity or vomiting.

Exactly! And yet for centuries it was nevertheless an extremely valuable model to have (I should have said "lemons" rather than "vitamin C").


Which model and which history? People did historically pass this heuristic because they could articulate a standard for when you're bringing too much lemon.

"Hey -- bring lemons on your ships because it stops scurvy"

'Oh, so why not a ton of lemons? Two tons? Ten tons?'

"Well, to stave off scurvy, you only need x units per sailor per day. Beyond that, it's just expensive deadweight."

In contrast, there are policy advocates who want an X to be higher and yet who haven't met the "tradeoff development threshold". Instead of being able to articulate a model which tells you when X is high enough to be a net negative, they show an inability to understand the core challenge: "Strawman, no one's advocating 2X". "That would just be absurd." "I didn't say 2X, I said 1.4X."

It's true that if you posit a scenario in which it's physically impossible to steer far enough right to hit Charybdis, then it will look successful to have the model "steer as far right of Scylla as you can"; but this isn't the general case, and it wouldn't count as an understanding of tradeoffs.


This comment stuck in my craw a bit, so thanks for provoking some thought. :) Your model isn't modeling Vitamin C consumption, right? It's modeling scurvy. I would guess the model could also tell you about when there are too few scurvy symptoms, which would be "never."

"If I'm going to be late, I should walk faster" isn't modeling velocity — it's modeling punctuality.


>Your model isn't modeling Vitamin C consumption, right? It's modeling scurvy.

I don't see it that way; I see it modeling Vitamin C consumption. To me, your assertion that it's modeling scurvy instead feels contrived in order to fit the top-level comment's principle.


To my mind "your teeth are falling out, you should drink more lemon juice" is the same kind of argument as "our gini coefficient is too high, so we should raise the minimum wage" or "our companies are taking too long to fill positions, so we should allow more immigration".


Your model is more complicated than you think.

If someone says "take vitamin C", you don't point blindly at a log chart and thus consume a kilogram of it (which, based on a rat model, probably would kill you).

This is because the practical algorithm for "take Vitamin C" has the grocer, the government, and a team of scientists sign off on the size of a dosage and how many pills are even in a bottle. So while that may not be part of your model, it is most definitely part of the model. The model tells you how much Vitamin C not to take, and so it passes the heuristic.

And without that cap, Vitamin C is unsafe, etc., so the heuristic holds.


It's just a heuristic, something to look for when evaluating a model of the world, not a strict, objective, and precise rule.

However, in that example, I'd say that modern medicine certainly has a "too much vitamin C" threshold, and thus might be a sensible model of human vitamin C needs.

I had never seen this idea in writing, but I certainly remember thinking along those lines about retirement age and similar policies.


"Modern medicine" contains a model that includes both criteria for too little and too much vitamin C, but that's not the model that was being referenced.

The point was that the cited model, "scurvy symptoms => too little vitamin C", is useful (in some situations, very useful, if you aren't in possession of a better model with which this one would agree) while being in violation of the maxim given upstream.

I think a much stronger rule - which wouldn't deserve the same catchy name - is that your model should at least be able to say "X is not too low".


I quite like that formulation too.


I tend to avoid people who come up with simple solutions based on their ideology. Folks who are honest about not having a simple solution make a much more intelligent impression on me.

Does this comment make me a hypocrite?


It doesn't make you a hypocrite if that's one heuristic you use to judge people among many. There's actually research on expert judgement showing that experts who are able to find lots of "On the other hand" reasons are more likely to produce accurate forecasts.

http://www.chforum.org/library/choice12.shtml


"Folks who are honest about not having a simple solution makes a much more intelligent impression on me."

This works because problems with simple solutions don't stay problems for long.

Problem: I'm dehydrated.

Solution: I drink (potable) water.

Problem solved.

(Don't get distracted by all the other potential bad solutions, we're just talking about how this specific simple solution does work.)

If it's a pervasive problem that has plagued humanity since the dawn of civilization, then simple solutions probably won't work, no. I'll cop that I lean libertarian but ultimately the very popular "let's just free market at it" and "let's just government at it" are equally stupid solutions to any hard problem.


What about people who come up with complex solutions based on their ideology?


What about situations where a simple solution would be the most effective and appropriate?

Would you be averse to it just because it is simple?


The solution may be simple, but the reasoning to justify that it is the most effective is probably not.


Potentially, yes? Especially if it's a long-standing complex problem. History is full of oversold simple solutions. Hearing "just do .." should immediately alert you to the possibility that the speaker hasn't understood your situation.


>History is full of oversold simple solutions.

Of course, history is also full of people insisting, "No, hold on, it's far more complicated than that!" and then being totally wrong.

For instance, people spent a very long time believing that a difficult-to-model combination of many different factors produced stomach ulcers. Then an experiment was done, and voila, the real cause was Helicobacter pylori.

Simplicity (or, in fact, regularization) is helpful far more often than it's harmful.


In the case of stomach ulcers it actually is more complicated. Helicobacter pylori is the most common cause, but not the only cause:

http://www.uptodate.com/contents/association-between-helicob...


Also, the promoters of a policy or solution need to be asked, and need to answer, these questions:

1) What are the trade-offs?

2) What are the potential unintended consequences?

3) What happens as the boundary conditions are approached (e.g., 60 years later; very few people do X; many people do X)?


I would say that it would depend. I'm primarily thinking of big picture political questions, and in that general area I feel that simple answers are mostly flawed answers.

However, in engineering, I apply the *nix philosophy of less is more. But this post isn't very technical.


No. Your comment doesn't use any magical concepts for which I have no model. I think I entirely understood it. Its simplicity is incidental but allowed for that. People who come up with simple solutions based on Their Own Ideology are a different thing. Their solutions aren't simple. They're lies that you want to believe, contingent on lies you don't know you don't want to believe. Generated mostly for control, of self or others, and never for awareness.

I think it's a good social heuristic.


> magical concepts

Depends on how you define magical concepts. Are they magical because they are too complex and require too many assumptions?

For example, by your definition, is it ok to have faith that would be associated with a religion, if that faith is based on a wide variety of experiences and acceptance of the existence of others' worldviews?


The only reason to have an ideology is to use it to produce solutions. You should be wary of solutions that are just too simple for the problem (irrespective of ideology), or of the rejection of solutions based solely on ideology.


Yes, but we are all hypocrites, so don't feel too badly about it.

The problem with not having a simple solution is that humans are wired to take action based on simple solutions. Here's a test for your theory of mind. Can you imagine that someone else thinks, "I know I'm not completely right, but I also know that the best way to lead this group forward is to present a simple solution as if it is right." How would they act?


> Don't trust any model that implies X is too low unless it's also capable of detecting when X would be too high

I think that's a fairly flaky heuristic. There's no point at which a nation's GDP could be considered "too high", although some of the tradeoffs necessary to increase economic growth beyond a certain point may diminish net utility.

I prefer a heuristic along the lines of "Don't trust any argument that doesn't explicitly state the other side of the cost/benefit tradeoff".


To steelman GP: Pareto efficiency[1] is a model which can tell you both when GDP is too low and when it's too high... The thing I like about it most is that, rather than being a simple metric over the aggregate actions of everyone, it considers individual decisions and individual needs. It respects property rights, for example, which "GDP good" doesn't (though a "long term GDP good" position probably does).

[1] https://en.wikipedia.org/wiki/Pareto_efficiency


Perhaps the truth isn't always in the middle, though.



