What makes Mac great is/was the ecosystem of 3rd party tools with great UI and features. Apple used to be good enough at writing basic 1st-party apps that would mostly just disappear into the background and let you do your thing, but they are getting increasingly "louder" which... may become a problem.
I still agree that second hand Thinkpads are ridiculously better in terms of price/quality ratio, and also more environmentally sustainable.
I have to admit, every time I've looked at screenshots of earlier Macs, like the 68K and PPC ones, I've found myself loving the UI. I even bought a PPC laptop (I think it's a maxed-out iBook with 1.5GB of RAM) to tinker with PPC assembly.
But I could be wrong. Maybe the earlier Macs didn't have great software either -- but at least the UI was better.
Having lived through those days... well, it was good for the time, mostly. MacOS was definitely better than Windows 3.11, and a lot more whimsical, both the OS and Mac software in general, which I miss. The featureset, though, was limited. Managing extensions was clunky, and until MacOS 10, applications had a fixed amount of RAM they could use, which could be set by the user, but which was allocated at program start. It was also shared memory, like Windows 3.11 and to some extent Windows 95/98, so one program could, and routinely did, take down the whole OS. With Windows NT (not much adopted by consumers, to be fair), this did not happen. Windows NT and 2000 were definitely better than MacOS, arguably even UI-wise.
I do miss window shading from MacOS 8 or 9, though. I think a whimsical skin for MacOS would be nice, too. The system error bomb icon was a classic, and the sad-Mac boot-failure icon was at least some consolation. Now everything is cold and professional, but at least it stays out of my way and looks decent.
Interesting. I thought the new MacOS was unix-y? But I never owned a Mac back then so not sure. For me Windows 2000 is the pinnacle. It doesn't crash (often), supports most of the games I played then, and I like the UI design.
OS X and later are derived from NeXTSTEP, which makes them derived from BSD, and thus UNIX-y. Macintosh system software before version 10 was original Apple development. The earliest versions were designed for hardware with only 128 or 512 kilobytes of RAM and without hardware support for protected memory.
Unfortunately, backwards-compatibility requirements prevented the addition of process memory isolation before OS X. One result of not having this protection was that an application with a memory bug could overwrite memory location zero (the beginning of a critical OS-only area), or any other memory area, and then all bets were off. Some third-party utilities, such as OptiMem RAM Charger, gave partial protection by using the processor's memory-protection hardware, and also removed the occasional need for users to manually set the amount of memory allocated to a program. However, many programs were not compatible with these utilities.
These are mostly employed positions, where employees have procedures to negotiate their salary with the employer (which might be the government itself).
Most artists, OTOH, are self-employed, and the government decided that the country at large would benefit from giving some of them economic support. You can argue about the specifics, but the reasoning does not seem that opaque to me.
If "a cosmic ray could mess with your program counter, so you must model your program as if every statement may be followed by a random GOTO" sounds like a realistic scenario software verification should address, you will never be able to verify anything ever.
I agree, you definitely won't be able to verify your software under that assumption; you need some hardware to handle it, such as watchdog timers (when just crashing and restarting is acceptable), duplex processors like some Cortex-R chips, or TMR (triple modular redundancy).
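For the last point, here is a minimal sketch of the voting logic behind TMR, written as plain Haskell purely to illustrate the idea (real TMR runs redundant hardware units in lock-step and votes in hardware):

    -- Run three redundant copies of a computation and accept the value that
    -- at least two copies agree on, so a single corrupted result is masked.
    majority :: Eq a => a -> a -> a -> Maybe a
    majority a b c
      | a == b || a == c = Just a   -- 'a' agrees with at least one other copy
      | b == c           = Just b   -- 'a' is the odd one out
      | otherwise        = Nothing  -- no two copies agree: report a fault

    main :: IO ()
    main = do
      print (majority 42 42 41 :: Maybe Int)  -- Just 42: single fault masked
      print (majority 1 2 3    :: Maybe Int)  -- Nothing: fall back to e.g. a reset

The point is that the cosmic-ray scenario gets handled by redundancy outside the program being verified, not by the verification itself.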
Linus's law [1]? When it comes to compilers for mainstream languages, the userbases are so large that they will explore a surprisingly large portion of the compiler's state space.
But definitely, better engineering and QA practices must also help here.
> - Remove it if it hadn’t posted in the last few years. Some people blog extremely irregularly, but the likelihood is that most blogs that are 5+ years old aren’t coming back.
This I don't really understand. Following inactive feeds via RSS comes at effectively no cost to you. How does removing them improve the experience?
It would be cool if you could somehow be notified when the ownership of a domain changes so you could take that action when it makes sense instead of preemptively killing subscriptions.
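One way to approximate that today (a rough Haskell sketch, assuming a `whois` binary on the PATH and a snapshot file saved on an earlier run; the "registrant" matching is a crude heuristic, since WHOIS formats vary a lot between registries):

    import Data.Char (toLower)
    import Data.List (isInfixOf)
    import System.Process (readProcess)

    -- Keep only the lines of a WHOIS record that mention the registrant.
    registrantLines :: String -> [String]
    registrantLines = filter (isInfixOf "registrant" . map toLower) . lines

    -- Compare the current registrant lines against a saved snapshot.
    checkDomain :: FilePath -> String -> IO ()
    checkDomain snapshotFile domain = do
      current  <- readProcess "whois" [domain] ""   -- run the system whois client
      previous <- readFile snapshotFile
      if registrantLines current /= registrantLines previous
        then putStrLn (domain ++ ": registrant lines changed, worth a look")
        else putStrLn (domain ++ ": no apparent ownership change")

    main :: IO ()
    main = checkDomain "example.com.whois" "example.com"

Run something like this from cron against the domains of your subscribed feeds and you get a poor man's ownership-change alert.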
It can be a preventative measure against spam, for when old domains expire or get sold, or old blog-service passwords get hacked.
I think about doing that sort of proactive cleanup sometimes. There's nothing quite as disappointing as seeing an old friend's blog show a new post for the first time in years, only for it to be from a spammer who hacked their old password, or an expired-domain squatter who saw the RSS traffic and decided to sell advertising on it, or a once-major blog host that was sold to a Russian oligarch who purged the user database so more Russians could have good usernames (LiveJournal, lol).
Patreon and Spotify already implement subscription-based podcasts, and I am positive they use RSS/Atom under the hood. So the tech is already out there, you just need to turn it into a self-hosted solution.
Indeed, Patreon has private feeds for patrons for exclusive content. That's a decent solution, but it's platform-specific, which is both a bad thing (not easily used elsewhere) and a good thing (it's still backwards compatible with good old RSS).
Did not know Patreon's tech had lock-in. I subscribe to a podcast on Spotify and they give you a private URL that you can feed to any app. If you are worried about malicious customers sharing the URL, you can likely enable some form of rate limiting (e.g. the server may only serve up to x MiB/month on this URL).
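To make the rate-limiting idea concrete, here is a tiny Haskell sketch (names and numbers are made up): each private URL token carries a monthly byte budget, and the server stops serving once it is spent.

    import qualified Data.Map.Strict as Map

    type Token = String

    -- Bytes already served this month, per private feed token.
    type Usage = Map.Map Token Int

    monthlyQuotaBytes :: Int
    monthlyQuotaBytes = 512 * 1024 * 1024  -- e.g. 512 MiB per subscriber per month

    -- Decide whether a response of the given size may be served for this token,
    -- and return the updated usage table if so.
    serveFeed :: Token -> Int -> Usage -> (Bool, Usage)
    serveFeed token size usage
      | used + size <= monthlyQuotaBytes = (True, Map.insert token (used + size) usage)
      | otherwise                        = (False, usage)
      where used = Map.findWithDefault 0 token usage

A self-hosted feed server could reset the table monthly and revoke tokens that keep hitting the cap.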
I recently started to look at this the other way around. A functional paradigm lets you describe very precisely what a function does through its type. In imperative languages, OTOH, the type signature of a function (which really should be called a procedure) gives you only limited information about what happens when you call it, due to mutable state, side effects, etc.
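A small Haskell illustration of that point (the functions are made up): the type alone already tells you whether a function can touch the outside world.

    import Data.Char (toUpper)

    -- Pure: same input, same output, and the type rules out side effects.
    shout :: String -> String
    shout = map toUpper

    -- Effectful: the IO in the type is the only licence this "procedure" has
    -- to read files, print, talk to the network, and so on.
    shoutFile :: FilePath -> IO String
    shoutFile path = shout <$> readFile path

    main :: IO ()
    main = putStrLn (shout "the type already says a lot")

In an imperative language both of these would typically look like `string f(string)`, and nothing in the signature would tell you that one of them does I/O.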
I agree that rejecting a paper that has been recommended for acceptance by _all_ reviewers (something that routinely happens in, say, NeurIPS) is nonsense. However, in-person conferences have physical limits. In the case of, again, NeurIPS, you may get accepted and _not_ present the paper to an audience. This is also a bit of a travesty.
The community would be better off working with established journals so that they take reviews from A* conferences as an informal first round, giving authors a clear path to publication. Even though top conferences will always have their appeal, the writing is on the wall that this model is unsustainable.
The NeurIPS scoring system is inherently subjective. People will have wildly different interpretations of, say, 3 vs 4, or 4 vs 5. You can get lucky and draw only reviewers who, on average, "overrate" papers in their batch. The opposite can happen too, obviously. 4444 vs 3444 is just noise.
> The length of tasks AI can do is doubling every 7 months
The claim is "At time t0, an AI can solve a task that would take a human 2 minutes. At time t0+dt, it can solve 4-minute tasks. At time t0+2dt, it's 8 minutes" and so on.
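Spelled out as a formula with the same t0 and dt (dt being the claimed ~7-month doubling time), that trend reads roughly:

    L(t) ≈ L(t0) · 2^((t − t0) / dt)

where L(t) is the human-completion time of the tasks the AI can handle at time t (L is just a label here, not anyone's official notation).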
I still find these claims extremely dubious, just wanted to clarify.
Yes, I get that, I did allow for it in my original comment. I remain convinced this is a gibberish metric - there is probably no such thing as "a task that would take a human 2 minutes", and certainly no such thing as "an AI that can do every task that would take a human 2 minutes".
Besides, a lot of these walled chat gardens roll their own XMPP/Jabber thingy behind the scenes.