pornel's comments

StackOverflow has always been quite open that they're primarily building a dataset for SEO, rather than being a user-centered website. So I don't feel this deal changed much. SO users are still serfs building them a dataset for sale, only the buyer has changed.

LLMs are faster and infinitely more patient than interacting with StackOverflow, so I don't expect SO to survive for long. They're in crisis regardless of whether they sell to OpenAI or not, so they may as well get something out of it before they're decimated.


I think they're in crisis because they sold out their community, not because LLMs are better. As a developer, if you offer me StackOverflow vs ChatGPT, I'd take StackOverflow any day of the week, 100x over.


I'm in the opposite boat. Going through Stackoverflow answers has become quite a chore.

For simple things GPT gives me the correct answer most of the time. And even when it doesn't, it's quicker to discern that it's wrong than to parse a given SO page.

Of course I still use SO for more complex questions.

As a rule, if I can quickly find the answer via SO, then chances are GPT will give me the answer more rapidly.


Respectfully, how would you know if you never use ChatGPT?


I said I don't use it. I didn't say I've never used it. In my experience browsing SO is way easier, more accurate, more precise, more controllable, navigable, and ... gives attribution.


For some reason, a lot of the answers here seem to care more about "but tell em /I/ solved it" re: attribution than about helping the user. Somewhat egoist or some such? (I don't mean it in an aggressive tone, just ESL so I don't know how to say it otherwise.)

If I license something as MIT, I personally don't care who uses it for what purpose, hell I don't even care generally that they attribute me. I put it out for people to use. But maybe that's just me.


You spend more time on SO than me. Without looking, can you name three Stack Overflow contributors? I can't.


I was offered a job a few years ago by someone who saw my Stack Overflow answers, does that count? I don't see something like this happening with ChatGPT.


I can do two, Jon Skeet (C#) and S. Lott (Python) are names I remember for providing great answers.


Yes.


>As a developer, if you offer me StackOverflow vs ChatGPT, I'd take StackOverflow any day of the week 100x over.

Really? Hm, I wouldn't. I can use nuance, clarify my questions, and have a respectful back and forth (GPT-4 doesn't call me names when I mess up or say something dumb) and arrive at an answer.


> GPT-4 doesn't call me names when I mess up or say something dumb

I’ve heard this accusation a lot, but I don’t think I’ve ever seen it happen. People call you names on Stack Overflow? Where?


>Where?

-50

Marking duplicate. "You should attempt searching before asking such obvious questions."

This question has already been answered here: < https://news.ycombinator.com/item?id=20861356 >

Closed 3 seconds ago.

----

or some such ;) You may not come across it personally, but that doesn't mean it doesn't happen. SO is successful as a Q&A platform (or was, anyway) despite this shortcoming, not because it's a feature that never happens. If a lot of people are talking about the same thing, maybe people should at least pay cursory attention to the issue rather than saying "No, it doesn't happen." (Not aimed at you, but there are absolutely comments like this every time this gets brought up.)


You linked to a discussion of about a hundred comments. I skimmed it but didn’t see name calling. Can you be more specific?


Are you sure that's not an X vs Y problem???


I actually have no idea what you mean. can you clarify pls?


It's a common non-answer on stack overflow.

https://meta.stackexchange.com/questions/66377/what-is-the-x...


lol that makes sense, thanks.


And I'd take ChatGPT any day of the week 1000x over. That doesn't mean anything.


> SO users are still serfs building them a dataset for sale

That is a very negative spin.

Users get access to other people's answers for free. They get that free service and are required to contribute nothing. Those that do contribute do it to help other users. S.O. isn't doing anything bad. They're providing a free service where everyone wins. Users get answers. Answerers get to help other humans at scale. S.O. makes a little money.

As for the dataset, it's been available under CC-BY-SA for years. The entire database is backed up and made available here for free every month.

https://archive.org/details/stackexchange

There are even free tools to query it here

https://data.stackexchange.com/


Why does a company get to make money on someone's free work? This is obviously not okay. There are even more egregious examples, but this is certainly one of them.


The company is paying the people working by providing a free service.

It's like YouTube. YouTube provides free hosting for your videos. In exchange, they monetize them. You're free to host them on your own servers; that will likely cost you way more than putting them on YouTube. So you're getting something from them. You're also getting their advertising service to monetize your videos. You could do it yourself: hire a bunch of people and try to get companies to put ads on your self-hosted videos. Again, unless you're wildly successful, it's unlikely you'll be able to do that and make a profit. So YouTube is effectively paying you.

Same with Stack Overflow. They're providing the servers, the bandwidth, etc. It costs them $. They're providing that service to you.


> StackOverflow has always been quite open that they're primarily building a dataset for SEO

Do you have a source / more details about this? What good is SO's content for SEO?


I find such laments annoying, because they're full of obvious platitudes. It's easy to sound smart quoting Einstein and Dijkstra. It's cheap to make generalizations, and point fingers at complex solutions when having both the benefit of hindsight, and ignorance about their true requirements.

"as simple as possible, but not simpler" is always right. Messy solution? You should have made it simpler. Primitive solution causing problems? You weren't supposed to make it too simple. Why didn't you think about making it just perfect?

In reality, it's very hard to even get people to agree what is simple, when solutions have various trade-offs. Maybe it is easier to focus on maintaining one complex database, than to have 3 "simple" ones, and triple admin work, and eventually end up having to sync them or implement distributed transactions.

Something simple to implement now may cause complex problems later. A simple off-the-shelf solution that doesn't fully solve the problem will breed complex workarounds for the unsupported cases, and eventually add complexity of migrating to something adequate. If you didn't correctly predict how a solution will fit all your requirements, you should have simply listened to Einstein.

All the advice to "just" do something "simple" is blissfully unaware that these solutions are not a panacea, and it's rarely a plain choice between complex vs simple. Projects have constraints - they may need to work with existing messy systems, inconsistent legal requirements, or changing business requirements. They may prioritize time to market, or skills they can hire for. And there's brutal economics: maybe the annual report export is a Rube-Goldberg machine, but it's run once a year, and a rewrite wouldn't pay for itself in 50 years.

The discussion about complexity rarely acknowledges that projects and their requirements grow, so something perfectly simple now may become complex later, in a perfectly rational way, not due to incompetence or malice. Storing data in a plain text file may be beautifully simple in the beginning, and become a bad NIH database later. But starting with a database for 3 rows of data would be overcomplicating things too. And there's cost to refactoring, so always using the ideal solution is not that simple either.


You and I would think the platitudes are obvious but I've been at tables with people where stating those made people blink with that unnamed dread that hits you when you realize you haven't understood something super simple for a very long time and are just now getting it for the first time. That happened... a number of times in my life and career.

Truth is, far fewer things are obvious than you and I think.

> "as simple as possible, but not simpler" is always right. Messy solution? You should have made it simpler. Primitive solution causing problems? You weren't supposed to make it too simple. Why didn't you think about making it just perfect?

Nah, that's obviously (heh) a non-sequitur; iteration always beats planning. We know it by practice.

> In reality, it's very hard to even get people to agree what is simple, when solutions have various trade-offs.

That's why you don't ask for permission, you ask for forgiveness. :) Another law of our profession, if not in many others too.


> That's why you don't ask for permission

I didn't mean agreement as permission, but as having the same judgement. One person may say bash is the simplest, another that Makefile is even better, and third person will say they'll both become a mess, and it's simplest to use Python from the start, and so on.

Reasonable people may disagree about where the line of "but not simpler" lies. Something that is "simple" to one person is "primitive" to another.

If someone says they have a simple and elegant solution, but it requires their skills, is it really simpler than a "dumb" solution that more people can understand? (e.g. DB vs Excel? C vs JS?).

Everyone may be in agreement that things should be super simple, but there may be a choice between simplifying implementation vs simplifying operations. Or people may disagree about future requirements and argue that a solution that is the simplest now will hit a complex scaling problem later, and the total-lifetime-complexity of the product will be minimized by another solution instead.


Some complexity is inherent to the problem, but most seems to be incidentally introduced by the realities of deployment (non-functional), configuration (functional) and chaos monkeys (users). There is a particular 'breed' of incidental complexity I see with space cadets and front end developers for sure. Complexity is complex lol.


> ignorance about their true requirements.

But why are the true requirements hidden?


Outsiders don't see what happens inside companies.

A solution may need to integrate with in-house frameworks and billing (and you'll die trying to make billing simple).

A feature may get added for a big customer, and be warped by their requirements.

A feature may need to be implemented in a hurry (taking on tech debt) to win a bid/contract.

Conway's law shapes implementations - the right team to implement a thing simply may be busy with something more important, so another team will need to work around them.

In such situations the obvious simplest solution may not be available, and you either do what you can given the constraints, or fail to meet business' requirements.


Oh, I absolutely agree that there are all kinds of constraints when dealing with real world problems, including non technical ones like time constraints and expertise level. But those constraints are never documented and communicated when the resulting complex artifact is released, which causes a lot of unpleasant surprises for users. That's the reason why many engineers avoid such artifacts to mitigate the risks.


I thought your original post was a joke? "Never" is an exaggeration, and overselling Rust like that makes people think it's just hype.


The current state of the art is somewhere in between, with 18 minutes of charging per ~3 hours of driving. It even helps to split charging into shorter sessions (2x 9 minutes), because batteries charge fastest when they're about 25% full.

Keep in mind that EVs charge unattended, so you only spend a minute plugging in, and can leave to get a coffee, etc.


How 'guaranteed' is that rate? I don't keep up with it like I probably should, but seem to often read that some chargers are outdated, and sometimes you have to 'share' if somebody else is charging nearby?


In the EU, there are Ionity and Fastned networks that can guarantee their chargers will be fast enough for this (>=250kW), and together they have a pretty decent coverage along major highways.

There are setups that have their max rated power per dispenser (“pump”), and halve it if two cars are plugged in to the same dispenser at the same time. Good chargers can do 300kW. If that splits to 150kW it’s not too bad - maybe 5 min slower, rather than double. That’s because the max speed the car can take is a curve, and that only flattens the peak.

However, for the 18-min charging the biggest gotcha is the temperature. In Hyundai/Kia it requires the battery to be at 20-25°C. That’s easy in the summer. In the winter the charging speed can drop as low as 80kW.
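The power-split point above can be sketched with a toy model (entirely made-up curve and numbers, not real vehicle data): because the car's accepted power is a curve over state of charge, capping it at half the charger's rated power only flattens the peak, so the session doesn't take twice as long.

```rust
/// Power (kW) a hypothetical car accepts at a given state of charge (0.0-1.0).
/// Toy curve: ramps to a 300 kW peak at 25% SoC, then tapers toward full.
fn accepted_kw(soc: f64) -> f64 {
    if soc < 0.25 {
        120.0 + 720.0 * soc
    } else {
        300.0 * (1.0 - soc) / 0.75
    }
}

/// Minutes to charge from 10% to 80% on a charger capped at `charger_kw`.
fn minutes_10_to_80(charger_kw: f64, pack_kwh: f64) -> f64 {
    let (mut soc, mut hours) = (0.10, 0.0);
    let dt = 1.0 / 600.0; // 6-second simulation steps, in hours
    while soc < 0.80 {
        let kw = accepted_kw(soc).min(charger_kw);
        soc += kw * dt / pack_kwh;
        hours += dt;
    }
    hours * 60.0
}
```

With these toy numbers and a 77 kWh pack, halving the cap from 300 kW to 150 kW lengthens the 10-80% session noticeably, but by much less than 2x.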


Last time we had to fast charge on the way to Yosemite, we had lunch at a strategically parked taco truck in the same lot (they even had a picnic table).

We would have walked to a nearby restaurant if needed.


They easily do. Discharge rates are typically higher than charge rates. For stationary batteries it's all easier due to being able to have larger, more parallel batteries, and better cooling when weight is not a concern.

Battery-backed charging stations are already common, because it allows use of cheaper grid interconnection, and use of cheaper off-peak or renewable energy.


Rust is already at a done/stable stage, and won’t make major changes even if it theoretically could.

Change proposals that cause churn are regularly shot down.


The article has a warped timeline of WebP, with incorrect speculation, and missing key points.

WebP was created in a rush of wild optimism after Google freed the VP8 video format (WebM). VP8 was strategically important for them (and YouTube), because before then Web video was at the mercy of Flash and Silverlight, and threatened by H.264 patent royalties.

The WebP format was rushed. It didn't go through a usual standards process. The VP8 codec turned out to be not so great, and especially poor for still images. VP8 lost to H.264, and meanwhile Cisco found a loophole to sponsor H.264 royalties for all browsers.

Other vendors rejected WebP, partly because it was Google's own non-standard thing (an uncool move at a time when the WHATWG was at its peak), but mainly because it just wasn't good enough compared to optimized JPEG (Mozilla created MozJPEG to prove the point). Their bar for adoption is very high, since they're worried about maintaining things forever, bug-compatibility issues from a single implementation, increased code size, and attack surface.

Mozilla and Safari resisted adopting WebP for about 10 years. They relented not because WebP got a "stable release" (total nonsense in the article), but because their bug trackers kept getting reports of Chrome-only websites and buggy HTTP content negotiation serving them "broken" images, to the point that it started hurting their compatibility and costing them users.

----

With AVIF we had a repeat of the video rush. The Web was once again threatened by the commercial H.265, with even messier and more expensive licensing, and no Cisco loophole this time. Browser vendors banded together and adopted the AV1 format ASAP to prevent dominance of H.265 on the Web.

And just like WebP rode the adoption of VP8, AVIF rode the adoption of AV1. And once again, a video codec turned out to be suboptimal for still images. Browsers didn't really care about adopting any image format. They cared about adoption of AV1 video, and AVIF got a pass only because it came almost for "free" (it's basically a 1-frame video file).

(BTW, AV1 is based on VP10 + contributions from other vendors. This kinda makes it a descendant of WebP.)

So browser vendors never really wanted a new image format, got one anyway, and AVIF is good enough not to need an urgent replacement - there's simply no appetite left for adopting yet another format. Browser vendors still don't like adding more code, and are still afraid of compatibility issues and attack surface.

The conspiracies about money and power are hilarious.


There's magic in Box: the ability to partially move content out of it, which any other type with a Drop impl couldn't handle. You can also implement traits for Box<Foo> even when Othertype<Foo> wouldn't be allowed.

But noalias is not very special for Rust. &mut and & have a bunch of limitations too.

But there’s no need to give up on them, because Rust has the UnsafeCell wrapper type for doing crimes with pointers. It selectively disables noalias, thread safety, etc. Instead of weakening guarantees of Box in general for all types, just insert UnsafeCell where you need to be clever with pointers.
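A minimal sketch of both points (the `Pair`/`Flag` types are hypothetical illustrations, not anything from a real codebase):

```rust
use std::cell::UnsafeCell;

struct Pair {
    a: String,
    b: String,
}

// Box is special: you can move a single field out of it, consuming the box.
// A user-defined smart pointer with a Drop impl couldn't allow this.
fn take_a(p: Box<Pair>) -> String {
    p.a // `p.b` and the box's allocation are dropped here
}

// UnsafeCell is the sanctioned opt-out from aliasing guarantees: the compiler
// won't assume noalias/immutability for its contents, so raw-pointer tricks
// through shared references stay sound.
struct Flag {
    inner: UnsafeCell<bool>,
}

fn set_through_shared_ref(f: &Flag) {
    // Writing through a shared reference is allowed only because of UnsafeCell.
    unsafe { *f.inner.get() = true }
}
```

Note that `UnsafeCell` is applied selectively, exactly as the comment says: only the wrapped field loses the guarantees, not the whole containing type.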


It requires supporting higher-kinded types, and Rust has been reluctant to add them (although it's slowly getting there with higher-kinded lifetimes and generic associated types).


Tangential note: nobody is required to accept the terms of the GPL, so inclusion of some GPL code doesn't automatically force projects to release their source code.

Without accepting the GPL, use of GPL code is generally a copyright violation, but it can be litigated as software piracy, which may have various outcomes based on how serious that is.

