
True, only that Weiner was surprisingly endearing in that documentary. Despite his being a staunch Democrat at the time, it's hard not to like the guy, and when the fall comes you're almost rooting for him.

But SBF? The last interview I saw him give was, I think, with Matt Levine; it was shocking because SBF was describing whence crypto derives its value. And Levine remarked that SBF was describing a Ponzi scheme.

For me, that interview showed two things. First, that SBF quite obviously has no morals whatsoever. And second, he's not nearly as smart as he's portrayed to be.

His whole cred derives from his MIT degree combined with his experience at Jane Street. Jane Street is a great firm, but what they do is really specific, and not necessarily the "coolest" way to make money. It's a lot of tech and infrastructure, less finance and trading chops. (The real badasses are at RenTech, for example.)

Anyway, I watched a few videos that he made while working at his crypto hedge fund, Alameda. He described the strategies, and they sounded like the typical market maker stuff - arbing exchange discrepancies yadda yadda. Not the most inspired stuff.



I don't think he lacks morals, but he seems to have a Lex Luthor "ends justify the means" way of thinking. If your whole moral framework is effective altruism, then as long as the good you do with the money you accumulate is greater than the harm caused in its accumulation by a ratio that exceeds your alternatives, you're all good.


It doesn't even have to do that in an objective, measurable sense. It just has to do that from your perspective—meaning that any harms you can ignore don't count, and any potential good you can convince yourself you could do in the future because of it does.


Yeah, I'm glad to see someone saying this. Something that bugs me about EA is that it pattern-matches to a lot of the same performative behavior I grew up around with evangelical extremists. In effect, your hypothetical good forgives your very real misbehavior.

There's also just the fundamental stupidity of thinking you can reduce something as complex as human morals and ethics to a karmic checking account balance...


One remark I have on anti-"ends justify the means" arguments is that they always ignore the ends. The "ends" might literally be saving real people's lives, whose lives have value. I'm sure there are good arguments against the idea, but you must engage with the ends that you propose we forsake, or you're just not serious.


Isn't this a fundamental problem with consequentialism? If you can convince yourself of a serious enough consequence, you can do whatever you want. Steal, hurt, kill. As long as your actions lead to a .000001% chance of preventing Skynet from taking over in the year 50000 AD, any means justify those ends.

How is that something that can be seriously engaged with? Other than trying to talk some sense into these people about limiting their EA calculations to a reasonably observable time-space continuum?


I always felt the problem with consequentialism is that outcomes are uncertain, whereas actions are not. Therefore you are not guaranteed that an immoral action (means) will result in a beneficial outcome (end).

For instance, you push someone in front of a trolley to save 5 lives, but the trolley isn’t stopped and you end up with 6 dead.


That's essentially Kant's argument for why you should have universal morals (see the classic example of not lying to an axe murderer).


If curing cancer for the rest of humanity now and forever required knowingly experimenting on and absolutely killing a certain number of people now, how many deaths could you justify under an Effective Altruism ethical framework?


Not meaning to be inflammatory, but the same scenario exists in flipped terms: how many people would you let die of cancer rather than, idk, eat the downside of running riskier experiments?

There’s no privileged neutral really. Consequentialism just owns that. Now, that might be a separate question from whether society should act as if there’s a privileged neutral (ie be nonconsequentialist).


Isn’t the inverse more: there will never be a cure for cancer, so how many people can you justify killing to confirm that?

All I know is that when humanity faced the decision with respect to WWII medical experiments, the decision was made not to utilize the data, regardless of potential benefits.


Depends on your expected remaining lifetime of the human race? (since from that you might compute the expected cancer deaths over that time and then you can do a simple comparison)
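The "simple comparison" this comment gestures at can be sketched as a toy expected-value calculation. Every number below is invented purely for illustration; nothing here is a real estimate:

```python
# Toy utilitarian comparison (all figures are made-up placeholders).
annual_cancer_deaths = 10_000_000       # assumed deaths per year, illustrative
expected_remaining_years = 10_000       # assumed remaining lifetime of humanity
experiment_deaths = 1_000_000           # assumed cost of the "cure now" path

# Expected cancer deaths over humanity's remaining lifetime.
expected_future_cancer_deaths = annual_cancer_deaths * expected_remaining_years

# The naive comparison: is the up-front death toll smaller than what it averts?
cure_is_justified = experiment_deaths < expected_future_cancer_deaths
print(cure_is_justified)
```

With these placeholder numbers the "cure" side dominates, which is exactly why the choice of expected_remaining_years does most of the work in longtermist arguments.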


Longtermists simply assume we will harvest Hawking radiation for hundreds of billions of years after the heat death of the universe.


I wouldn't say they ignore it, but rather they categorically have decided that is an unethical justification. E.g. it doesn't matter if experimenting on patients without their consent saves more lives in the future, it is still wrong. Perhaps you might argue that this is an absolute position they can't really hold to for everything. But I think people prefer to find other ways to justify such cases.


You can take 10 healthy organs from one person without any social connections and use the organs to save 10 terminal patients.

See the problem with utilitarianism?

Unlike other hypothetical examples given here (Skynet, cancer cure), this scenario is possible today.


Sometimes when you see a guy like SBF who fits the MIT hacker / disheveled smart-guy quant stereotype a little too perfectly, you have to ask yourself, "how much of this is an act?" Most real geniuses I know look unremarkable or boring. One place where people are masters of dressing and acting the part to get rich is Wall Street, though.


RenTech is definitely bad [0]; not sure about badass, but if evading taxes for years and ending up paying $7B counts as badass, I guess they are.

[0] https://www.reuters.com/business/finance/renaissance-executi...


That article doesn’t remotely suggest that they are “bad”.

The article says they were advised by Deutsche Bank that they could classify certain trades in a way as to pay long-term instead of short-term capital gains taxes.

The IRS disagreed, RenTech went through an appeals process and eventually decided to settle instead of continuing the process.

There’s nothing evil about trying to legally lower your tax obligations and then being told by the IRS “Nope, you can’t do it like that.”


I've always said their tax attorneys contribute the most alpha!


Evidently he wasn't a "true believer"

he's just one player in the ecosystem, trying to milk it as much as possible

good job building FTX into one of the largest CEXs, but.... :)


which videos please


Sorry, but this reads to me like someone who doesn't really know what they're talking about. First, I think you have totally mischaracterized or misunderstood that interview. Second, he is obviously a smart guy; I'm not claiming he's a unique generational genius, but there are plenty of third-party validations of intelligence (MIT, Jane Street) even if you can't tell from the way he speaks. Finally, the comment "the real badasses are at RenTech" is laughable insofar as they don't hire undergrads. More accurately, many of the real badasses went to PhD programs, but that aside, Jane Street was a highly desired career choice for this graduating cohort.


A PhD is a liability, not an asset, on Wall Street. Source: my brother was convinced to abandon his PhD (ABD) by two friends who did the exact same thing a year or two ahead of him.

The real "badasses" if you insist on calling them that, work about a dozen years for a fund, hit their number, and promptly retire to do whatever the hell they want for the rest of their life. You don't read much about them vs the obsessive people who keep playing the game for the high score contest though, because there's no drama to cover.



