
If you need an 8gb docker image as part of your local web development stack, that’s a toolchain problem.

One of our vendors publishes a 70GB docker image as their SDK. It's awful.

That is horrendous. I'm assuming it contains some kind of giant dataset in its entirety?

No datasets. Most of the size is just apt packages and tools bundled into the layers. Around 5GB are "useful" things, and another 15GB are a couple of arguably justified tarballs (only one of which is needed).

That's even more infuriating. Just sheer incompetence wasting your valuable space and bandwidth.

I agree with the sentiment of this post in general (annoyance with Wayland being shoved down my throat, despite missing core features of Xorg) but I am rather concerned about Xlibre's future as a project. The README being stuffed with reactionary political dogwhistles is downright weird and doesn't inspire confidence in longevity.


Does anyone else get an error from GitHub when going to those links, or is it just me?

GitHub is having an incident at the moment: https://www.githubstatus.com shows

Update - We are investigating reports of issues with many services impacting segments of customers. We will continue to keep users updated on progress towards mitigation. Jun 17, 2025 - 19:53 UTC


That code of conduct is a huge red flag that this is going nowhere

I have been around long enough (enduring the big swinging dicks) to understand why they are required.

The statements of inclusion in the README when the principal author campaigns against it indicates a dire lack of social skills. What hope is there for this?

I mostly agree with the criticisms of Wayland; I too have had to uninstall it to get what I need. But this all seems worse.

Good luck to them, but I have no confidence


The main author is the person Linus yells at here: https://www.theregister.com/2021/06/11/linus_torvalds_vaccin...

Honestly, the beliefs of the developers don't matter at the end of the day. The quality of the code is the only thing that matters.

And I'm not implying the code quality is good or bad; I have no idea.


Projects of this size cannot be a one man show. So the ability of collaborators to cooperate matters greatly.

Having a leader with a controversial reputation is not always a bad thing; look at Theo de Raadt or Linus Torvalds. It seems to have already attracted contributors, but let's see how all of this goes in the coming months.

There are different kinds of "controversial" and degrees of things. While you're correct that both Torvalds and de Raadt have a well-earned reputation for not always being the easiest person to get along with, most of their controversial behaviour centres around flaming people over technical matters, or sometimes organisational matters directly related to the project. Basically: it's not what they're saying, it's how they're saying it.

I've never seen either Torvalds or de Raadt go off on these mega-weird political rants completely unprompted, or inject that sort of thing in the project's README. Never mind going off on rants that smell an awful lot like Nazi apologetics. This is not just a matter of "how they're saying it", there are some real issues with what they're saying as well.

But you know, it's open source. He doesn't need mine or anyone else's permission. More power to him.

But it's really not the same as Torvalds or de Raadt. And it's also not so strange people want to steer clear of it.


From the author's commits to X11 and the discussions about them, it looks like the code was not great.

source comment: https://news.ycombinator.com/item?id=44200000


It's more a criticism of breaking certain things, which is inevitable when you try to refactor something. What matters in the end is whether the behavior remains the same when you finish.

But it's just a supposition at this stage; let's see in the coming months whether it's a complete mess or another XFree86-to-Xorg transition.


founder's beliefs are garbage

Well it's the code that matters. Not what their beliefs are.

code is also garbage

Well it's the thought that's important.


And then I recall that experiment in which an LLM trained to spew out garbage insecure code started to behave like a garbage insecure edgelord personality too.

Guess "you can't have one without the other".


This is exactly the problem with Xorg right now and why the woke crowd wants to kill it. You people new to the community have no idea how diverse it actually is. Odds are you haven't been involved since the 90s. Let people think what they want, or go use Microsoft, Google, or Apple products and be told how to think and feel. Like the poster said below, it's about the code, not what people believe. Funny how ReiserFS wasn't an issue, but in this day and age it would be.

I disagree with the sentiment and welcome the future. But I do agree the way the author skips over the reason this exists and the problematic nature of their readme is a red flag. Either they agree with the dogwhistles or are being intentionally obtuse to proclaim that it's "only about the code".

Definitely don't see this project having legs or at the very least not advancing very far.


> The README being stuffed with political dogwhistles is downright weird

For reasons I've never understood, politics started invading open source about 10 years ago. What's weird is that these political ideas all seem to be highly aligned, and this is the first major project that breaks that alignment.

Personally I'd prefer to see the politics, on both sides, disappear forever. It only pollutes the engineering and it fails to be convincing or meaningful in any other context.

> doesn't inspire confidence in longevity.

There are forces trying to kill X11 for their own internal reasons. I think as long as there is a project that is trying to maintain it, it will be successful, political warts and all.


I would guess that you started to be aware of politics about 10 years ago. You're off by at least 30 years or so...

For example, the removal of Jerry Pournelle's free account at MIT because he kept mentioning ARPANET in his column in Byte magazine. Then he accused MIT's sysadmins of being communists who wanted to destroy America's military...

That was 1985. The X project started the year before that.


I think politics will always be a big part of open source, as the nature of open source development is inherently at odds with corporatism and control.

It's particularly the reactionary stuff that concerns me when it comes to projects like this. When someone's motivation is tied to short-lived movements of social energy like that, I don't trust them to have a long term vision or investment in a project.


> short-lived movements of social energy

It's hard to imagine an energy that's apparently existed for 12 years and still going being described as "short-lived." Perhaps it's really just unfamiliar and that's why it seems so concerning?


I don’t recall anyone ranting about “DEI” 12 years ago. I do recall the same rants about 3 or 4 other terms that ultimately resolve to “people that aren’t me are allowed to do things”, though.

> I don’t recall anyone ranting about “DEI” 12 years ago.

Do you typically read the types of publications where that was likely to occur?

> I do recall the same rants about 3 or 4 other terms that ultimately resolve to “people that aren’t me are allowed to do things”, though.

Or perhaps you've just read second hand accounts of the phenomenon without actually confronting it directly? Which is what I presume given that you've come to such a self serving conclusion about it.


> the nature of open source development is inherently at odds with corporatism

No, you're thinking of “Free Software” — “Open Source” was explicitly pro-corporate from the moment the term was coined. OSI themselves will tell you that “open source” as we know it was a product of AOL's desire to get people to work for them for free: https://opensource.org/history

“The [February 3rd, 1998] conferees believed the pragmatic, business-case grounds that had motivated Netscape to release their code illustrated a valuable way to engage with potential software users and developers, and convince them to create and improve source code by participating in an engaged community. The conferees also believed that it would be useful to have a single label that identified this approach and distinguished it from the philosophically- and politically-focused label ‘free software.’”


Open source is inherently political. I think it's anti-capitalist and I remember how pro-capitalist people called it communist back in the 90s/00s. Saying open source isn't political is almost like saying Star Trek isn't political.

No it isn't. I have contributed to, and used, many open source projects without it ever being political. All I did was share work I created because it might help someone out, or use a tool I found useful.

Some people choose to make open source political and that's their right, but it isn't inherently political. That is a choice one makes.


> All I did was share work

You may not have political opinions but you took political action....


Yeah, it baffles me how people don't realize that.

I understand

Politics is "dirty", understandably. But people are mostly decent.

It is the evil powerful hideous people that make politics dirty and life bad

So acting as a decent considerate, loving even, person becomes a political act

Not good, things could be better, but it is what it is


"everything is political" -> Communism

> very narrow perspective, seemingly not realizing problems are much wider and not limited to one use case

Ironically, this is the complaint many of us have about the development of Wayland.


I'd say it's the result of the use cases being so broad, that some get more focus than others. There were obviously pain points that only gradually got better. But it doesn't mean Wayland isn't suitable for addressing those scopes. Someone has to do the work though rather than complain.

In the past there were some problems with protocols not being accepted fast enough, those issues were more organizational than technical. But that seems to have been finally resolved not so long ago and a bunch of really useful protocols were accepted recently.


This reminded me of a feature request I dealt with at an employer, while working on backoffice software for a support team. The software loaded a list of all current customers on the main index page - this was fine in the early days, but as the company grew, it ended up taking nearly a whole minute before the page was responsive. This sucked.

So I was tasked with fixing the issue. Instead of loading the whole list, I established a paginated endpoint and a search endpoint. The page now loaded in less than a second, and searches of customer data loaded in a couple seconds. The users hated it.

Their previous way of handling the work was to keep the index of all customers open in a browser tab all day, Ctrl+F the page for an instant result, and open the link to the customer details in a new tab as needed. My upgrades made the index page load faster, but effectively made the users wait seconds every single time for a response that used to be instant, at the cost of just one long wait per day.

There's a few different lessons to take from this about intent and design, user feedback, etc. but the one that really applies here is that sometimes it's just more friendly to let the user have all the data they need and allow them to interact with it "offline".
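A minimal sketch of that "load once, search instantly" idea (field names and helpers are hypothetical; Python stands in for whatever the page actually used):

```python
# Load the full customer list once (the one expensive call per day),
# then answer every lookup instantly from memory, Ctrl+F style.
# The record shape here is an assumption for illustration.

def search(customers, term):
    """Instant substring match over the cached customer list."""
    term = term.lower()
    return [c for c in customers
            if term in c["name"].lower() or term in c["email"].lower()]

customers = [
    {"name": "Ada Lovelace", "email": "ada@example.com"},
    {"name": "Bob Smith", "email": "bob@test.org"},
]
```

The point is that every `search` call after the initial load is a pure in-memory scan, so it stays instant regardless of server latency.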


There's no reason you can't have your cake and eat it too. If Google can index the entire web and serve search results and AI results in an instant, then you can give users instant customer search at a mid-sized corp. A search bar that was actually fast would have served the same purpose as their Ctrl+F workflow.

Of course if the system is a total mess then it might have been a lot of work, but what you describe is really more of a skill issue than a technical limitation.


I would not consider Google search, nor its AI overview an example of a good system.

It is an effective product, in so far as it generates revenue, but it is not an example I would use to describe a system that provides a good and useful experience to its end users - which was and generally is my goal when designing software.


I didn't say it was good, I said it was fast. Those two things are often correlated though, the app described above certainly doesn't fit my definition of good.

Computers can do unfathomable amounts of computation in the blink of an eye. If your app takes seconds or longer to do stuff it's probably because it's ass. Nearly all common operations should be measured in nano or milliseconds. If they're slow it's probably the dev's fault.


As your career goes on, you will likely eventually work on large production systems that are not idealized greenfield deployments - at this time, you will have to come to terms with the constraints of Network and I/O bounds, and your dreams of nano and milliseconds will be shattered.

In particular, I welcome you to experience the lovely world of corporate VPNs, whose maintainers and developers seem to have latency expectations that haven't changed in 30 years.


I'm currently working on one of those wonderful greenfield projects where the monkeys who put it together made these mistakes I describe and everything is slow. Website isn't even in production yet and everything takes 5+ seconds to load for no reason. Because they just store the data the way they got it which is shit, instead of shaping it into something sensible that allows efficient queries.

And yes, there's a corporate VPN and firewalls and vents and subnets and whatever else, but when I create a feature it doesn't take 5 seconds to load. It takes milliseconds. Because there's nothing in our environment that justifies this bullshit; the guys who built this app just suck. And I'm going to fix it like I always do.

I have also worked on large production systems where I've fixed lots of performance issues. Often I can see them just by reading the code, I'll find some code that looks ass then I'll run it and sure enough it's slow. So I fix it. It doesn't take a profiler or micro optimization, it just takes some basic understanding of what we're doing.

Sometimes slow code is justified; sometimes there's just a lot of processing to be done or a lot of network requests to send. But most of the time it's just devs who don't understand the fundamentals.


I’m sure your super awesome programming skills that fix infrastructure and eliminate network I/O are very cool!

I'm not claiming to eliminate network transfer times. Sending a web request takes some time. But it doesn't take seconds, it takes anywhere from a few to a few hundred milliseconds, given that the request size is relatively small. If you're sending megabytes or gigabytes then obviously it takes longer.

So if you have a website and a backend, the most basic request will be fast. Make an endpoint that just returns hello world, a request to this endpoint will generally take a number of milliseconds. Maybe 10, maybe a few hundred, something like that assuming a decent connection. That's the round trip time, including overhead from protocols and auth etc.

Now you know the best case scenario - when you create a real endpoint that actually does something useful it will be slower. How much slower depends on what you are doing and how. That is what I am talking about. If you send 50 requests from your backend to various other services then obviously it's going to take a lot more time, so we want to avoid sending a lot of requests - especially in series, where the first request has to complete before the next one etc.
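To make the serial-vs-parallel point concrete, here's a sketch using a Python thread pool (`fetch` is a stand-in for any network call, not something from the original comment):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all_serial(urls, fetch):
    # Total time is roughly the SUM of the individual latencies.
    return [fetch(u) for u in urls]

def fetch_all_parallel(urls, fetch, max_workers=10):
    # Total time is roughly the SLOWEST single request.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch, urls))  # preserves input order
```

With 50 requests at ~100 ms each, the serial version takes ~5 seconds while the parallel one stays near ~100 ms, which is exactly the gap being described.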

We also want to avoid doing a lot of heavy processing. You can do a large amount of processing with no discernible performance impact, but at some point and especially with bad algorithms this processing time can explode and make your system slow. For example I once looked at some code to generate a report that took 25 minutes to run. It was getting two lists of objects and combining them by iterating through the first one and linear searching the other for a match by id. The time complexity of this is O(n^2). I turned the other list into a dictionary which allows O(1) lookup, eliminating the O(n) linear search, making the overall time complexity O(n) and the report generated in less than 5 minutes. Still painfully slow by my standards, and I'm sure I could have optimized it further if I didn't have more important tasks to work on, but it's a pretty good improvement from a few minutes of work.
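The dictionary fix described above, sketched with hypothetical record shapes (the original report's data isn't shown, so the fields here are made up):

```python
def join_slow(orders, customers):
    # O(n^2): linear search through customers for every order.
    out = []
    for o in orders:
        for c in customers:
            if c["id"] == o["customer_id"]:
                out.append({**o, "customer_name": c["name"]})
                break
    return out

def join_fast(orders, customers):
    # O(n): one pass to build an index, then O(1) lookups per order.
    by_id = {c["id"]: c for c in customers}
    return [{**o, "customer_name": by_id[o["customer_id"]]["name"]}
            for o in orders]
```

Both produce the same result; only the lookup strategy changes, which is why the fix takes minutes of work for a large win.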

Another common culprit is simply sending too much data. I've seen websites where a request returns huge JSON documents of multiple megabytes, then uses a tiny fraction of the data. By changing the system so that the website only fetches the data it needs, you can reduce the request time from seconds to milliseconds.
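A tiny sketch of the "only fetch what you need" idea on the server side (a hypothetical helper; many real APIs expose this as a `fields` query parameter):

```python
def project(record, fields):
    """Return only the requested fields of a record, ignoring
    any field names that don't exist on it."""
    return {k: record[k] for k in fields if k in record}
```

Applied per record before serialization, this keeps a multi-megabyte document down to the few fields the page actually renders.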

I hope this gives you a better idea of what I'm saying.


I don't really know how else to lead you to understand that when you query a data source you do not have control over, through a network you do not have control over, you cannot make the interface work faster than the response from that data source. No matter how good your programming skills are, your endpoint cannot return data faster than it retrieves it.

I completely understand that. You just haven't mentioned it until now.

You could also look for inefficiencies in the search. Maybe the query is inefficient, maybe you can make use of database functionality such as full-text search and/or indexes etc. If you don't have access to make those changes to the db, you can cache the data in your backend memory, your own DB, Redis or whatever you prefer, so your app can be nice and snappy regardless of how ass your dependency is.
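The caching idea at the end can be sketched as a minimal in-memory cache with time-based expiry (an illustrative stand-in; in practice you might reach for Redis or `functools.lru_cache` as the comment suggests):

```python
import time

class TTLCache:
    """Minimal in-memory cache: serve fresh entries from memory,
    hit the slow data source only on a miss or after expiry."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key, fetch):
        hit = self._store.get(key)
        now = time.monotonic()
        if hit is not None and now - hit[1] < self.ttl:
            return hit[0]          # fresh: skip the slow upstream call
        value = fetch()            # miss or stale: query the data source
        self._store[key] = (value, now)
        return value
```

This way the app stays snappy even when the upstream dependency is slow, at the cost of serving data up to `ttl_seconds` old.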


Not just unreadable - actively malicious. Every now and then I run into a website with a bunch of custom javascript that doesn't function in Firefox, and I'll have to open up Edge or Chromium (depending on what computer I'm using). Every time this happens, I'm immediately accosted with "features" or advertisements attempting to hijack my experience to sell me something or steal from me.

As long as Firefox supports the tools that protect me from the hostile behavior of websites, it will remain my browser of choice.


Probably interesting to note that this is almost always true of weighted randomness.

If you have something that you consider to be over 50% toward your desired result, reducing the result space has a higher chance of removing the negative factor than the positive one.

In contrast, in any case where the algorithm is less than 100% capable of producing the positive factor, adding to the result could always increase the negative factor more than the positive, given a finite time constraint (i.e., any reasonable non-theoretical application).


Your question is answered by the study abstract.

> Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.


But it's not that they "underperformed" at life in general - they underperformed when assessed on various aspects of the task that they weren't practicing. To me it's as if they ran a trial where one group played basketball while another acted as referees - of course, when tested on ball control, those who were dribbling and throwing would do better, but it tells us nothing about how those acting as referees performed at their thing.

I see what you’re getting at now. I agree I’d like to see a more general trial that measures general changes in problem solving ability after a test group is set at using LLMs for a specific problem solving task vs a control group not using them.

Are there such tests? Sounds like IQ tests to me, which is a quite indirect measurement.

I am not an expert in the subject but I believe that motor neurons retain memory, even those not located inside the brain. They may be subject to different constraints than other neurons.

You’re missing some deeply important context there, which is that those measurements are for outdoor atmospheric CO2 only.

Average indoor air quality ranges from 400-1000 ppm CO2, with adverse mental effects starting to appear close to 2000 ppm.

In that context, you can see why a 50 ppm difference is marginal. This is why asking an LLM is not generally a great idea for understanding something - you need to follow it up with more research.


> adverse mental effects starting to appear close to 2000 ppm.

cognition is harmed starting at 1000ppm (https://pmc.ncbi.nlm.nih.gov/articles/PMC3548274/)


I'm having a hard time parsing the data in this paper - is it showing that task focus increases at 1000ppm compared to 600ppm CO2 exposure?

One could say, for instance… A pattern matching algorithm detects when patterns match.

That's not what's going on here? The algorithms aren't being given any pattern of "being evaluated" / "not being evaluated", as far as I can tell. They're doing it zero-shot.

Put it another way: Why is this distinction important? We use the word "knowing" with humans. But one could also argue that humans are pattern-matchers! Why, specifically, wouldn't "knowing" apply to LLMs? What are the minimal changes one could make to existing LLM systems such that you'd be happy if the word "knowing" was applied to them?


Not to be snarky but “as far as I can tell” is the rub isn’t it?

LLMs are better at matching patterns than we are in some cases. That’s why we made them!

> But one could also argue that humans are pattern-matchers!

No, one could not unless they were being disingenuous.


What about animals knowing? E.g. dog knows how to X or its name. Are these things fine to say?

>Not to be snarky but “as far as I can tell” is the rub isn’t it?

From skimming the paper, I don't believe they're doing in-context learning, which would be the obvious interpretation of "pattern matching". That's what I meant to communicate.

>No, one could not unless they were being disingenuous.

I think it is just about as disingenuous as labeling LLMs as pattern-matchers. I don't see why you would consider the one claim to be disingenuous, but not the other.

