chairhairair's comments | Hacker News

“I spent my formative years running wild in the Caribbean, burning through women as a plow cut through the snow.”


Yeah, there's an introspection threshold there that hasn't yet been reached.


What is the insight you’re looking for? Or what is the category? Honestly curious what the threshold is and how you think it would exhibit.


It seems entirely and obviously self aware to me.


What’s the background here? How can we know they use GPL licensed code? Was there some leak?


Their infotainment uses a customized Debian distro. On a Model S you could easily get a shell into it, because they used freaking SSH with password-based authentication over Ethernet to connect from the instrument cluster to the computer in the central console.

You could sniff the password with a man-in-the-middle attack, if you knew the host key of the instrument cluster. Here's one from my previous Model S: https://gist.github.com/Cyberax/ad9866ab4306d43957dc480db573...
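
For anyone wondering why a leaked host key matters: the client authenticates the cluster only by pinning that key and then logs in with a static password, so anyone on the wire who holds the matching private key can impersonate the server and collect the password. Here is a minimal sketch of that client-side trust model in Go (golang.org/x/crypto/ssh); the address, user, password, and key path are all made up for illustration:

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // The client pins the instrument cluster's host public key...
        keyBytes, err := os.ReadFile("ic_host_key.pub") // hypothetical path
        if err != nil {
            log.Fatal(err)
        }
        hostKey, _, _, _, err := ssh.ParseAuthorizedKey(keyBytes)
        if err != nil {
            log.Fatal(err)
        }

        cfg := &ssh.ClientConfig{
            User:            "tesla",                                   // hypothetical user
            Auth:            []ssh.AuthMethod{ssh.Password("hunter2")}, // static password
            HostKeyCallback: ssh.FixedHostKey(hostKey),                 // the only server check
        }

        // Whoever answers this dial and can prove possession of the matching
        // host *private* key passes the check and is handed the password.
        client, err := ssh.Dial("tcp", "192.168.90.100:22", cfg) // hypothetical address
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
    }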


This is a gist created 1 hour ago. No proof of the attack vector. What's the point of posting a private key?

Also, so what if they used Debian? Linux is used on everything. Debian has multiple licenses; it also has BSD3 and others to choose from: https://www.debian.org/legal/licenses/


In case anybody wants it. I can do a more detailed writeup about hacking into my Tesla, but I'm not particularly interested in that. In short, I bought a Tesla instrument cluster on eBay and dumped the NAND chips from it.

They use plenty of GPL software there, including the Linux kernel itself.


Ok, you seem to be implying that just the use of GPL software necessitates the open sourcing of anything you build on it or with it. If that were the case, then all of AWS would be open sourced and all of the server backends built on Ubuntu clusters would have to be open sourced.

As far as I understand, it's only "derivative" works that must be open sourced, not merely building a software program or hardware device on top of a Debian OS. Tesla's control console is hardly a derivative work.


Eh, if they were being compliant and merely building modules on top of and called by BusyBox, they could get away with Mere Aggregation [0]*, but from a little looking around it looks like they were called out years ago for distributing modified BusyBox binaries without acknowledgement [1] and promised to work with the Software Conservancy to get into compliance. [2]

[0] https://www.gnu.org/licenses/gpl-faq.html#MereAggregation

[1] https://lists.sfconservancy.org/pipermail/ccs-review/2018-Ma...

[2] https://sfconservancy.org/blog/2018/may/18/tesla-incomplete-...

*but I would argue (a judge would be the only one to say with certainty) that Tesla does not provide an infotainment application "alongside" a Linux host to run it on; they deliver a single product to the end user of which Debian/BusyBox/whatever is a significant constituent.

(P.S. to cyberax: if you can demonstrate that Tesla is still shipping modified binaries as in [1] I think it would make a worthwhile update to the saga.)


You'd need to post Linux kernel source, though.


Your post reads as if Debian itself is available under multiple licenses, including BSD3. This is not true.

The page you posted is a list of the licenses that various software in the Debian distribution is released under.

Of course the parent's idea that Tesla using Debian means they have to release the source of anything is incorrect.


I would say it is trustworthy, because if it were found to be gamed then Anthropic's reputation would crater.

But, we found out that OpenAI is/was gaming benchmarks (https://news.ycombinator.com/item?id=42761648) and that seems to be forgotten history now - so I don’t know.


> I would say it is trust worthy because if it were found to be gamed then Anthropic’s reputation would crater.

But on the other hand, how would we find out that they've gamed the numbers, if they were gamed? Unless you work at Anthropic and have abnormally high ethics/morals, or otherwise have private insight into their business, it sounds like we wouldn't be able to find out regardless.


Google’s org chart is packed full of Product Managers and other similar titles that get paid millions to do ~nothing.

The number of L8+ “leaders” and “drivers” is really jaw dropping.


I refuse to believe that anybody is managing Google's products.


I have no clue how these people drive business value while the actual software developers implementing those features don't.


It could be:

foo ?! { bar }

But, now we’re potentially confusing the “negation” and the “exclamation” meanings the bang symbol usually tries to communicate.


I tend to agree that ? looks like "if then" when what we really want is some sort of coalescing, or "if not then".

foo ?? { bar }

foo ?/ { bar }

foo ?: { bar }

foo ?> { bar }

foo ||> { bar }

I'm not sure I like the idea at all, though. It seems like a hack around a pretty explicit design choice. Although I do tend to agree the error handling boilerplate is kind of annoying.
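
Assuming this is the usual explicit-error-return boilerplate (Go-style), here is the comparison made concrete; the ?-spellings above are hypothetical and none of them are valid syntax today:

    package main

    import (
        "fmt"
        "os"
    )

    // Today's explicit form, which every proposed ?-spelling above abbreviates.
    func loadConfig(path string) ([]byte, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, fmt.Errorf("reading %s: %w", path, err)
        }
        return data, nil
    }

    // Hypothetical sugar (not valid Go), where the block runs only on error:
    //
    //     data := os.ReadFile(path) ?? { return nil, fmt.Errorf("reading %s: %w", path, err) }

    func main() {
        if _, err := loadConfig("app.conf"); err != nil {
            fmt.Println(err)
        }
    }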


cron, but completely unreliable. How nice.

LLM heads will say “it’s not completely unreliable, it works very often”. That is completely unreliable. You cannot rely on it to work.

Please product people, stop putting LLMs at the core of products that need reliability.


It's all a matter of degree. Even in deterministic systems, bit flipping happens. Rarely, but it does. You don't throw out computers as a whole because of this phenomenon, do you? You just assess the risk and determine if the scenario you care about sits above or below the threshold.


A bit flip is a rare occurrence in an array that is typically tens of billions of bits large.

The chance that a flip lands on a bit where it produces a new valid state, and one that does something actually damaging, is astronomically small.

Meanwhile, LLM errors are common and directly affect the result.
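
The disagreement really comes down to expected-failure arithmetic: how often an error occurs, times the chance that an occurrence is consequential. A sketch of that comparison in Go; every number in it is a placeholder to show the shape of the calculation, not a measurement:

    package main

    import "fmt"

    // expectedFailuresPerYear: how often an error occurs, times how often an
    // occurrence actually matters.
    func expectedFailuresPerYear(eventsPerYear, probConsequential float64) float64 {
        return eventsPerYear * probConsequential
    }

    func main() {
        // Placeholder figures only -- plug in rates for your own hardware/workload.
        rawFlipsPerYear := 100.0 // assumed raw bit flips across a machine's RAM
        flipMatters := 1e-6      // assumed chance a flip lands on a live, damaging bit

        llmRunsPerYear := 365.0 * 24 // an hourly LLM-driven job
        llmErrorRate := 0.02         // assumed per-run error rate; errors land directly in the output

        fmt.Printf("consequential bit-flip failures/year: %g\n",
            expectedFailuresPerYear(rawFlipsPerYear, flipMatters))
        fmt.Printf("consequential LLM failures/year:      %g\n",
            expectedFailuresPerYear(llmRunsPerYear, llmErrorRate))
    }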


My point is that your confidence level depends on your task. There are many tasks for which I'll require ECC. There are other tasks where an LLM is sufficient. Just like there are some tasks where dropped packets aren't a big deal and others where they are absolutely unacceptable.

If you don't understand the tolerance of your scenario, then all this talk about LLM unreliability is wasted. You need to spend time understanding your requirements first.


When’s the last time you personally had a bit flip on you?


You generally can't know, because we don't measure for it, especially not on personal computers. Maybe ECC RAM reports this information in some way?

In practice I think it happens often enough. I remember a Black Hat conference talk from around a decade ago where the presenter squatted one-bit-off variants of the domain of a popular Facebook game and caught requests from real end users, basing his attack on the random chance of bit flips during DNS lookups.

Related, but not the video I was referring to:

https://news.ycombinator.com/item?id=5446854
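
The technique being described is usually called bitsquatting: register every domain that is one flipped bit away from the real one and wait for corrupted lookups to arrive. A minimal sketch of generating those candidates (the target domain here is just an illustrative example):

    package main

    import "fmt"

    // bitsquats returns every domain that differs from the target by exactly one
    // flipped bit and is still made of valid hostname characters -- the set a
    // squatter would register to catch requests corrupted by memory errors.
    func bitsquats(domain string) []string {
        isValid := func(b byte) bool {
            return b == '-' || (b >= '0' && b <= '9') || (b >= 'a' && b <= 'z')
        }
        var out []string
        for i := 0; i < len(domain); i++ {
            if domain[i] == '.' {
                continue // leave the label separator alone
            }
            for bit := 0; bit < 8; bit++ {
                b := []byte(domain)
                b[i] ^= 1 << bit
                if isValid(b[i]) {
                    out = append(out, string(b))
                }
            }
        }
        return out
    }

    func main() {
        for _, d := range bitsquats("example.com") { // illustrative target
            fmt.Println(d)
        }
    }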


Several large companies could benefit from ignoring GenAI. Unfortunately, "benefit" would only mean "save money and produce better products for customers" instead of "make stock price go up".

Instead, all of these companies are effectively forced to play hype ball.


No, it’s not at all.

This is all getting so tiresome.


All current LLMs openly make simple mistakes that are completely incompatible with true "reasoning" (in the sense any human would have used that term years ago).

I feel like I'm taking crazy pills sometimes.


If you showed the raw output of, say, QwQ-32 to any engineer from 10 years ago, I suspect they would be astonished to hear that this doesn't count as "true reasoning".


Genuine question: what does "reasoning" mean to you?


How do you assess how true one's reasoning is?


Without looking it up (honor system), please describe what you think The Selfish Gene is about.

