arka2147483647's comments | Hacker News

The limiting factor is not compute power, but the time and understanding of a random dev somewhere.

Time also is not well understood by most programmers. Most just seem to convert it to epoch and pretend that it is continuous.
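
A quick sketch of the kind of thing that bites people (Python 3.9+ with zoneinfo; the date and timezone are just examples): adding 86400 seconds to an epoch timestamp is not the same as "the same wall-clock time tomorrow" once a DST transition is in the way.

    from datetime import datetime, timedelta
    from zoneinfo import ZoneInfo

    tz = ZoneInfo("Europe/Berlin")
    t0 = datetime(2024, 3, 30, 12, 0, tzinfo=tz)  # the day before clocks spring forward

    # "epoch is continuous" arithmetic: lands on 13:00 the next day
    print(datetime.fromtimestamp(t0.timestamp() + 86400, tz))  # 2024-03-31 13:00:00+02:00

    # wall-clock arithmetic: lands on 12:00, only 23 real hours later
    print(t0 + timedelta(days=1))  # 2024-03-31 12:00:00+02:00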


Windows 7 was released in 2009. That's 15 years.

I would say that is a long stretch of support by any measure.


But why? Why is it acceptable to consider OSes 15 years old ancient, when that logic never applied to almost any tool in the history of mankind, and still doesn't apply to any other category except software?

In my home all the machines are 20+ years old and work perfectly fine (oven, washing machines, vacuum cleaners, irons...) not to mention things that don't need electricity, which are 30 or sometimes more than 80 years old (furniture but also cookware, etc.)

A bike that's 20 years old feels ok and modern; a bike starts to really look ancient at maybe 50 years old? We can still drive cars made in the 80s and 90s (and they're often better than newer ones).

Windows 7 works fine, so why should it go to waste, along with the machine it runs on? And why should people be forced to upgrade and change or abandon all the little customisations they have set up that make their lives easier? Don't we have better things to do with our time than relearn a new way to do the exact same thing?

I understand the motivation of vendors to try and sell new things; what I don't get is why we put up with it.


No one is forcing you to stop using Windows 7 if you don't want to. It's just no longer suitable for this specific use case (running the latest Steam client).

If you apply the same test to other things you mentioned then often they are the same. They will continue doing the same job if you want them to, but they won't meet new requirements placed upon them, and the only realistic option is complete replacement.

For instance, cars over 20 years old are no longer allowed in central London because they lack emissions filters for particulates, which are hazardous to health in high concentrations. These cars are considered ancient and it's acceptable to ban them from one of the most important cities in the world. They're not better, and in this specific case you literally can't still drive one (if you live in London).

My 20-year-old DVD player cannot play Blu-ray discs. It's considered ancient, and it was acceptable to create a new, incompatible standard. If I want to play the latest discs I'll have to buy a new one.

My dwelling has no gas supply. I have an induction cooker, which heats far faster than either gas or thermal electric whilst also avoiding the dangers of flame and uncovered heating elements. 80 year old pans would not work with it. They're ancient.


I wasn't dealing with Bluetooth devices 80 years ago.

For a ton of people the demands of what they're expecting from their software change pretty radically in 15 years.

These days I need more modern hardware support than Windows 7 offers. I'm managing Bluetooth devices and swapping audio devices much more often. I'm far more connected than 15 years ago. The security threat landscape is pretty different today than it was 15 years ago. I'm using much larger screens (and vastly different aspect ratios) than 15 years ago and prefer the built-in snap layouts over just the basic left/right/center in 7. Dozens more things just end up making the experience of going back to 7 pretty painful in the end.


The environment in which an OS runs becomes increasingly adversarial with time.


Initial release date is less interesting than when its last version/patch was released, which https://en.wikipedia.org/wiki/Windows_7 puts at

> Service Pack 1 with January 2023 monthly update rollup (6.1.7601.26321) / February 8, 2023; 21 months ago

which isn't super recent but is pretty close.


I understand, but 15-year-old PCs are still perfectly serviceable (even if not up to current specs), so why destroy them because they're unsupported? We will need to be more conservative with resources, and these PCs can still be used (think running StarCraft 2, which I enjoy, or other nice older PC games).


Why do people write these kinds of "answers" that the model gives? It's not like the model knows why it's doing anything.


Not understanding how LLMs work.


Ah, such condescension! I was careful not to provide an opinion, but to replicate the answer as is. The intent was not to treat that answer as fact; rather, I thought the response was pretty revealing and in fact supported the parent comment about LLMs having been trained on copyrighted materials. The response ChatGPT provided was that OpenAI models are designed to create original content "without directly referencing copyrighted characters". If it were creating original content, it wouldn't have needed to mention the constraint about avoiding direct references to copyrighted characters.


> Everything you do in Python involves chasing down pointers and method tables, looking up strings in dictionaries, and that sort of thing.

One of the things a C++ compiler can do is devirtualize function calls; that is, resolve those function pointer tables at compile time. Why couldn't a good compiler for Python do the same?


It’s a lot easier in C++ or Java. In C++, you make the virtual calls to some base class, and all you need to demonstrate is that only one concrete class is being used at that place. For example, you might see a List<T> in Java, and then the JIT compiler figures out that the field is always assigned = new ArrayList<T>(), which allows it to devirtualize.

In Python, the types are way more general. Basically, every method call is being made to “object”. Every field has a value of type “object”. This makes it much more difficult for the compiler to devirtualize anything. It might be much harder to track which code assigns values to a particular field, because objects can easily escape to obscure parts of your codebase and be modified without knowing their type at all.

This happens even if you write very boring, Java-like code in Python.
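
To make that concrete, here is a tiny made-up sketch (all the names are hypothetical): the field looks monomorphic at the obvious call site, but any code holding a reference can rebind it to a completely different type, so the compiler can't safely freeze the method lookup.

    class ListStore:
        def add(self, item):
            print("list add", item)

    class Service:
        def __init__(self):
            self.store = ListStore()  # looks like it is "always a ListStore"

    def process(svc, item):
        svc.store.add(item)  # the call a compiler would love to devirtualize

    def plugin_hook(svc):
        class DictStore:  # defined at runtime, far away from Service
            def add(self, item):
                print("dict add", item)
        svc.store = DictStore()  # silently changes the type behind the field

    svc = Service()
    process(svc, 1)   # list add 1
    plugin_hook(svc)
    process(svc, 2)   # dict add 2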


C++ has an explicit compilation phase after which things like classes remain stable.

Python only has a byte-compilation phase that turns the parse tree into executable format. Everything past that, including the creation of classes, imports, etc., is runtime. You can pick a class and patch it. You can create classes and functions at runtime; in fact, this happens all the time, and not only for lambdas: this is how decorators work.
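
For instance (a trivial sketch, nothing framework-specific), an instance created before the patch still picks up the method that was patched onto its class afterwards:

    class Greeter:
        def greet(self):
            return "hello"

    g = Greeter()
    print(g.greet())  # hello

    Greeter.greet = lambda self: "bonjour"  # patch the class at runtime
    print(g.greet())  # bonjour - any cached "Greeter.greet is X" assumption is now stale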

A JIT compiler could detect actual usage patterns and replace the code with more efficient versions, until a counter-example is found, and the original "de-optimized" code is run. This is how JavaScript JITs generally work.


> I had to implement my own OAuth 2.0 library, and once I had grokked all the things the author mentions, actually using the OAuth 2.0 endpoints wasn't hard at all.

Usually the point of an API is to abstract away the nitty-gritty details, so you can do something more complex with a few simple calls and a partial understanding of the problem domain. Seems like OAuth fails at that.

For example, nobody expects you to understand filesystem implementations to just open and read a file.


> you can do something more complex with a few simple calls and a partial understanding of the problem domain.

Everything is a trade off.

You can find plenty of APIs which work this way. They tend to be expensive in one way or another and so they are often limited in use by cost or by time. If that expense is justifiable, perhaps something you don't call often, and which can be idempotent enough to not have to worry about error states that much, then you are lucky and will be happy with the result.

> Seems like OAuth fails at that.

It's in the name: _auth_. Now you have a new trade-off. In the triangle of secure, cheap, and easy to use, the most obvious sacrifice was easy to use.

> nobody expects you to understand filesystem implementations to just open and read a file.

How long of a pathname can you have? Are there any OS special characters you can't use? How many directories deep can you go? Is that actually a file and can EAGAIN be generated? Are interrupts enabled? Did you actually open a directory? What _should_ happen when you try to just read a directory as a file?
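
To pick just one of those: even "just open and read a file" leaks the underlying details the moment the path turns out to be a directory. A quick Python illustration (the exact exception differs by OS):

    import os, tempfile

    d = tempfile.mkdtemp()
    try:
        with open(d) as f:  # IsADirectoryError on Linux/macOS, PermissionError on Windows
            f.read()
    except OSError as e:
        print(type(e).__name__, e)
    finally:
        os.rmdir(d)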


The problem for systems/low-level programming is that you want high performance and manual control of resource management. As such, automated resource management can often look like a problem instead of a feature. I think there is a deeper disconnect here between the language designers and the programmers in the trenches.


Yes if I read the article right, every object is being allocated on the heap. That is a no-go for systems programming as far as I'm concerned.


I read it the same way you did. But, I’d be really surprised if there was no stack allocation in the language, given the author’s experience.


In an arena allocator, the stack can be just a special case arena that gets discarded automatically for you on function exit.
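
A language-agnostic toy of that idea (written in Python only for brevity; a real arena would live in the systems language itself): "allocating" is bumping a single counter, and "function exit" is resetting it, which releases everything at once.

    class BumpArena:
        def __init__(self, size):
            self.buf = bytearray(size)
            self.top = 0  # the single counter

        def alloc(self, n):
            off = self.top
            self.top += n  # no free list, no per-block metadata (and no overflow check in this toy)
            return memoryview(self.buf)[off:off + n]

        def reset(self):
            self.top = 0  # discard everything allocated since the last reset

    arena = BumpArena(1024)
    a = arena.alloc(16)
    b = arena.alloc(64)
    arena.reset()  # both "freed" by moving one counter back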


My gut feeling agrees with you, but I would really like more detailed reasons why this is the case. Is memory fragmentation that big of an issue? Are heap allocations more expensive somehow (even if memory is not fragmented yet)? Is there something else? Does re-arranging memory in the heap make performance unpredictable, like in GC languages?


Memory allocation is slow and nondeterministic in performance. Some allocations also require a global lock at the system level. It's also a point of failure if the allocation doesn't succeed, so there's an extra check somewhere. Furthermore, if every object is a pointer you get indirection overhead (small but real). Deallocation incurs an overhead as well. Without a compacting GC you run into memory fragmentation, which further aggravates the issue. All of this overhead can be felt in tight loops.


Due to the quick fit algorithm, fragmentation is no longer an issue for memory allocators. Heap allocations are still a bit slower than stack allocations since you need some way to release memory. Stack allocations are released at virtually zero cost (one assembly instruction). Hence sophisticated compilers perform escape analysis to convert heap allocations into cheaper stack allocations. But escape analysis, like all program analysis, is conservative and won't convert as many allocations as a human programmer could.

However, in the grand scheme of things heap vs stack allocation is minuscule. Many other factors are much more important for performance.


For one thing, allocating every object on the heap leads to a lot of cache misses because the data you're working with is not contiguous in memory. It may also make it harder for the CPU to do speculative fetches from memory because it needs to resolve the value of a pointer before it knows where to fetch data. With the stack, the address is much more obvious since it's all constant offsets relative to the frame pointer.

Also, heap allocation is unpredictable. It is more likely to cause unexpected page faults or contention between threads (multiple threads often share the same heap, so they need to synchronize access to memory book-keeping structures). Especially when it comes to kernel drivers, a page fault can lead to a deadlock, infinite recursion, or timeouts.

I'm not saying heap is always bad, not even that it's bad most of the time. But if a language doesn't at least give you the _option_ of having objects live on the stack, I wouldn't consider it a serious systems programming language.


There is no inherent difference. It's all memory.

That said, as a sibling already pointed out, it's standard to control stack allocation with a single counter. It's kind of standard to control heap allocation with an index and a lot of book-keeping.

But you are allowed to optimize the heap until there's no difference.


Stuff like memory pools, arena and slab allocators have been in widespread use in C/C++ systems programming for decades. It looks like designers of hip languages are reinventing that stuff in compilers that try to protect you from yourself.


To add "verifiable resource correctness/safety" to a language, all the non-trivial coordination algorithms for memory, threads, exceptions, ..., etc., need to be reinvented.


The app switcher in newer iPadOS is an endless source of confusion for my mother. Somehow she opens multiple copies of the web browser, email, etc., and then can't find anything.

How I wish I could disable that one feature!

Hiding the Inbox folder in Email is another classic I need to fix monthly.


It might help to disable "Allow Multiple Apps" and "Picture in Picture" on her iPad. This also disables slide over. Also consider disabling gestures.

iOS 17 offers to disable all of this stuff on initial install if you set up a child account, because kids tend to get confused, especially by slide over. But they should probably offer that sort of thing as an "easy mode" option for all users.


Assume we have a child, and we test him regularly:

- Test 1: First he can just draw squiggles on the math test

- Test 2: Then he can do arithmetic correctly

- Test 3: He fails only on the last details of the algebraic calculation.

Now, even though he fails all the tests, any reasonable parent would see that he is improving nicely and will be able to work in his chosen field in a year or so.

Or alternatively, if we talk about AI, we can set the test as a threshold, and if we see the results continuously trending upwards, we can expect the curve to breach the threshold in the future.

That is, measuring improvement instead of pass/fail allows one to predict when we might be able to use the AI for something.
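
A toy version of that with made-up scores, just to show the mechanics: score each test round on a 0-1 scale, fit a linear trend, and read off roughly when it would cross the pass threshold.

    scores = [0.10, 0.45, 0.80]  # test 1..3: squiggles, arithmetic, almost-algebra
    threshold = 0.95

    n = len(scores)
    xs = range(1, n + 1)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x

    eta = (threshold - intercept) / slope  # where the least-squares line crosses the threshold
    print(f"trend crosses {threshold} around test #{eta:.1f}")  # ~3.4 with these numbers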


With AI you can do millions of tests. Some tests are easy by chance (e.g. "Please multiply this list of numbers by zero"). Some tests are correct by chance alone, easy or hard.

When you actually do these millions of tests, I don't think it really matters what the exact success metric is - an AI which is 'closer to correct, but still wrong' on one test will still get more tests correct overall on the dataset of millions of tests.


I imagine the point is not to implement some species of sync primitives currently not supported by Linux, but to implement exactly what Windows does, so one does not need to emulate them in userspace.


The point for the contributor is that, yes.

The point that would convince the Linux kernel maintainers to actually accept the patchset, though, would likely be the introduction of generally-useful "species of sync primitives." In such a way that those primitives can be used to solve the "hard part" of WINE's virtualization of NT's versions of those primitives; but which wouldn't be constrained to guarantee an efficient zero-impedance call path between those implementations.


From the article:

> Trust is really important. Companies lying about what they do with your privacy is a very serious allegation.

> A society where big companies tell blatant lies about how they are handling our data—and get away with it without consequences—is a very unhealthy society.

> A key role of government is to prevent this from happening. If OpenAI are training on data that they said they wouldn’t train on, or if Facebook are spying on us through our phone’s microphones, they should be hauled in front of regulators and/or sued into the ground.

I find it very difficult to believe that big technology companies could be held accountable for anything by regulators through legal means. Their legal teams are too large, and their legal agreements and EULAs are too well written.

I find it impossible to believe that I, as a person, could challenge them in any way through legal means.


Given the history of US lawmakers failing to pass any bills that seriously threaten the might of Big Tech, compared to how the EU keeps giving Big Tech the middle finger, it's a weak argument.

Corporations run America.


No matter how big the company, they still care about a billion-dollar fine. That's bad for shareholder value! And billion-dollar fines do happen.


Fines are often gamed: lobbying, legal defence.

I would argue big companies care more about losing customers. Competitive pressure is more effective than laws and fines. And laws are not without downsides, including becoming barriers to entry into competition.


A billion ain't much nowadays. The big 5 do $100B+/year in net profits; <1% doesn't matter. It's like being fined $1,000 while earning six figures. It feels like a pinch.

