People talk as if dynamic scoping were objectively a mistake, but the fact that it works well and is genuinely useful in a complex piece of software like Emacs suggests otherwise.
The original opposition to lexical binding in Lisp circles was that it would be slower. That turned out to be false.
Emacs Lisp explicitly kept dynamic binding for everything because it made it simple to override functions deep inside the system, but that choice cost performance and caused various other issues, and ultimately most of the benefit of such shadowing has been subsumed by defadvice and the like.
I can understand why that objection would be raised, because lexical binding is slower in code that is interpreted rather than compiled, compared to (shallow) dynamic binding. Under shallow dynamic binding, there isn't a chained dynamic environment structure. Variables are simply global: every variable is just the value cell of the symbol that names it. The value cell can be integrated directly into the representation of a symbol, so accessing a variable under interpretation is very fast, compared to accessing a lexical variable, which must be looked up in an environment structure.
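To make that contrast concrete, here's a toy interpreter sketch in Python (purely illustrative, not actual Emacs internals; the class names are made up): a dynamic lookup is one read of the symbol's value cell, while a lexical lookup may walk a chain of environment frames.

```python
# Toy model: shallow dynamic binding vs. chained lexical environments.

class Symbol:
    """A symbol whose value cell holds its current dynamic value."""
    def __init__(self, name, value=None):
        self.name = name
        self.value_cell = value  # direct slot: one read to access

class LexicalEnv:
    """A chained lexical environment; lookup may traverse parent frames."""
    def __init__(self, bindings, parent=None):
        self.bindings = bindings
        self.parent = parent

    def lookup(self, name):
        env = self
        while env is not None:          # walk outward until found
            if name in env.bindings:
                return env.bindings[name]
            env = env.parent
        raise NameError(name)

# Dynamic access: a single attribute read on the symbol itself.
x = Symbol("x", 42)
assert x.value_cell == 42

# Lexical access: walk from the innermost scope outward.
global_env = LexicalEnv({"x": 42})
inner = LexicalEnv({"y": 1}, parent=LexicalEnv({"z": 2}, parent=global_env))
assert inner.lookup("x") == 42  # found only after traversing two frames
```

A compiler removes this asymmetry by resolving lexical references to fixed offsets ahead of time, which is why the speed objection only really applies to interpreted code.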
A rather weak argument when you consider the kinds of mechanisms (like a digital clock with a working seven-segment display) people have put together in Conway's Game of Life; to me this does not in any way suggest that GoL could ever be my favored platform for simulating a digital clock (or anything more complex than a glider, for that matter). Likewise, vacuum cleaners and toothbrushes have likely been made into hosts for playing Doom, and people accomplish all kinds of feats like quines and working software in brainf*ck. None of these feats indicate that the respective platform is suitable, or the right tool, for a sizable number of programmers.
As someone who has used it in languages like Clipper and Emacs Lisp, and the ADL rules in C++ templates: cool for pulling off wondrous programming tricks, a pain to debug when something goes wrong several months later.
Few deny the utility of dynamic-style variables for certain kinds of programming. But it can be helpful to segregate that behavior more carefully than in a language where it is the default.
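As one illustration of that segregation, Python's contextvars module offers dynamic-style variables as an explicit, opt-in mechanism rather than the default scoping rule (a sketch; the `indent` variable and helper functions here are invented for the example):

```python
# Opt-in dynamic-style binding via contextvars, loosely analogous to
# rebinding a special variable and restoring it on unwind.
from contextvars import ContextVar

indent = ContextVar("indent", default=0)

def render(msg):
    # Reads whatever indent value is dynamically in effect at call time.
    return " " * indent.get() + msg

def with_deeper_indent(fn):
    token = indent.set(indent.get() + 2)  # dynamically rebind
    try:
        return fn()
    finally:
        indent.reset(token)  # restore on exit, like unwinding a binding

lines = [render("outer"),
         with_deeper_indent(lambda: render("inner")),
         render("outer again")]
assert lines == ["outer", "  inner", "outer again"]
```

The callee picks up the caller's rebinding without any parameter passing, but the dynamic behavior is confined to variables that were explicitly declared for it.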
Napster may have started the file sharing revolution. But the exciting part for me was Gnutella and later BitTorrent, peer-to-peer technology in general, and the realization that we could use technology to liberate ourselves. Needless to say, I was young and naïve. That spirit is long dead, and the only remnants are cryptocurrencies and the community around them, which has tossed aside all lofty ideals in favor of blind greed.
I think you need to look again: The Pirate Bay and plenty of the older tech are still in use for many different niches (I find each country has its own preferred way to distribute content in its native language).
Nowadays the only people left are the ones willing to put in the extra effort on principle; everyone else can easily find most things somewhere (and if not, they give up). But there are still lots of us! (or so it feels like)
napster asked me if I wanted to work on his project, and I told him I saw no future in it because it would inevitably get shut down. Looking back, Napster was the start of the trend of startup companies built around brazenly flouting the law as a middleman, then, when finally called out, bargaining with the incumbents to arbitrage their user base and associated hipness for a payout.
I still don't think the dream of techno-liberation is dead. Rather the naive bits were thinking the sea change would happen so quickly, and thinking that the same old type of vectoralist hucksters wouldn't seek to corrupt our new systems. In actuality our systems need to be designed with the perspective of all possible gatekeepers as attackers. For example take the End to End principle - it's not sufficient to merely tell the network to not meddle with your communications, those communications must be cryptographically protected to avoid any temptation. Otherwise as you commit an increasing amount of value to your use of the network, it merely becomes a question of when the network operators will eventually try to take advantage of you to extract some of that value.
The big issue these days is there is so much capital funding "startups" that are essentially centralized crud apps running that same pump-and-dump arbitrage playbook. They buy lots of advertising and other mindshare (cf "It is difficult to get a man to understand something...") and generally use up most of the air in the room.
Japan's initiative is focused on "green open access" which is different from pay-to-publish. I recommend the section titled "Green OA" in the submitted article. Relevant quote:
"Japan’s move to greater access to its research is focusing on 'green OA' — in which authors make the author-accepted, but unfinalized, versions of papers available in the digital repositories, says Seiichi.
Seiichi says that gold OA — in which the final copyedited and polished version of a paper is made freely available on the journal site — is not feasible on a wide scale. That’s because the cost to make every paper free to read would be too high for universities."
I think it applies to most STEM fields. I'm a reviewer for several journals in a STEM field (not AI/ML specifically, but some manuscripts do try to apply AI/ML to this field) and the vast majority of authors seem to upload their preprints to arxiv etc.
Social sciences may be behind though as you say, I do not know as I'm not in that field.
As someone with graduate degrees in both a STEM field (math) and a social science (psychology), it's true that social science is way behind STEM in terms of posting preprints to digital archives. It's possible there's been momentum here in the last 5 or so years that I'm unaware of, though.
That matters for a few reasons:
(1) the average person encounters psychological, social, and medical issues more frequently than they do math problems. And since the research in those fields tends to be paywalled, people are at the mercy of things like SEO-spam medical and health sites.
(2) wrong ideas in medicine or psychology can damage (and have in the past damaged) entire generations of people. So in that sense their blast radius can be very large. This means that peer review is especially important, and that there's a potential negative externality to posting preprints and drafts before they're finalized. I suspect we'd have to solve the peer-review and quality problem before STEM-style preprint archives become the norm in all fields.
For bioengineering it definitely is, but a lot of medicine is still locked behind high-impact Wiley and Sage publications, and for a lot of that research it's fairly easy to pay the $3-4k to make the article open access.
The truth is that OA is a childish illusion that got “absorbed” by the adults in the room who tapped the kids on their back and said “no worries, we’ll take it from here”. Then they turned traditional publishing, which was already an elaborate expensive ruse, into OA which is an even more expensive (but less elaborate) ruse. Now everyone is happy, except someone trying to do actual research and having to read 1000 meaningless papers a day.
Green OA is just as meaningless as Gold OA, or worse. You pay the publisher a large sum of money for the “right” to self-publish the preprint, and they still paywall it. The vast majority of people will find the paywalled version before yours, and anyway there is no guarantee that your preprint matches the final, published version, so most people will still trust the paywalled version more than your PDF. Especially when performing systematic literature reviews, where you need to document the sources of your references.
The current implementation of OA (any of them) is basically a self-fulfilling prophecy: we convinced ourselves that “publishers are evil” and impossible to get rid of, and now we are paying them so that they don’t have to do their job. We basically retired publishers early with an extra pension, all because everyone “wants to believe” in open access. But guess what? This is not disruptive. OA is just as “capitalist evil” as the usual publishing, or even more so. Do you want to be disruptive? Then disrupt. Get rid of the publishers. Or at least constrain funding to not-for-profits, for example.
> You pay the publisher a large sum of money for the “right” to self-publish the preprint, and they still paywall it
There's no APC with Green OA, so what money are you talking about? Green OA is regular publishing, but with self archiving. There will be a version freely available, and the publishers aren't paid for that privilege.
If you want a route to the death of publishers, green OA is a promising one.
(I think the headline ought to emphasise that this is pushing green OA, which is the interesting bit)
Ah, I somewhat see the confusion. Green OA doesn't require the publisher to publish other versions for free, it just means you are allowed to publish them. Typically you'd publish them via an institutional repository, preprint server (often discipline specific), or in one of a number of free online services.
"there is no guarantee that your preprint is accurate with the final, published version, so most people will still trust the paywalled version more than your PDF"
I think this is backwards. The definitive version that should be cited is the freely-available one, since that is the version that everyone can read. No one should cite the paywalled version.
A single word is insufficient evidence to conclude that an LLM was used. "Delve" may be low frequency in naturalistic text but there are many words in an article and the chance that some of them will be low-frequency is high. I also checked in my bibliography and found that "delve" is actually not super rare in academic papers including those written before LLMs.
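A back-of-the-envelope version of that multiple-comparisons point (the probabilities here are assumed purely for illustration, not measured frequencies):

```python
# Even if any single word is "rare", a long article very likely
# contains at least one rare word, so spotting one low-frequency
# word like "delve" is weak evidence of anything.
p_rare = 0.001   # assumed per-word chance of drawing a low-frequency word
n_words = 3000   # assumed article length in words
p_at_least_one = 1 - (1 - p_rare) ** n_words
assert p_at_least_one > 0.9  # roughly 95% with these assumed figures
```

With these made-up but plausible numbers, "contains at least one rare word" is close to certain, which is exactly why a single word can't carry the inference.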
My wife is still using her 2012 MBP. We maxed out the RAM and gave it an SSD in 2016. She uses it for video editing and music production. The thing looks like new. Completely ridiculous. Only downside: no OS X updates since I don’t know when.
You might find OpenCore Legacy Patcher[1] worth a look. In many cases, it allows later-than-supported macOS versions to be installed on older Macs.
As a data point, I still use a 2013 Mac Pro as my primary desktop, and I've been using Sonoma on it for several months, have been able to install all Sonoma patches over-the-air on release without incident, and have only experienced a single, trivial problem: the right side of the menu bar occasionally appears shaded red, in a way that doesn't affect usability; switching applications immediately resolves the problem (the problem appears to be correlated with video playback).
Video encoder/decoder support and performance improved by an order of magnitude in the M series; I'm surprised that didn't sway you.
Not just that: for high-res footage, modern codecs like AV1 or H.265 are probably not supported at all on a 2012 device that has gone without updates for so long.
Even if support were possible, it would be software encoding, and even a short clip could take hours to render.
I would happily use an older device for a lot of dev work, especially if it's not frontend or UI; usually I can use any laptop as just a terminal. But for UI work or video editing, I wouldn't be able to.
When you consider that Germany closed 4 GW of nuclear generating capacity, projects of a few MW (or did you mean MWh?) are orders of magnitude short of what's necessary.
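As a rough scale check (the 5 MW per-project figure is an assumed example, not from the comment):

```python
# How many "few-MW" projects would it take to replace 4 GW of
# closed nuclear capacity? (Comparing nameplate power, MW vs. MW.)
closed_nuclear_mw = 4_000        # 4 GW, expressed in MW
project_size_mw = 5              # assumed size of a "few MW" project
projects_needed = closed_nuclear_mw / project_size_mw
assert projects_needed == 800    # ~three orders of magnitude per project
```

Each individual project is about a thousandth of the closed capacity, which is what "magnitudes off" amounts to here.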
> it took multiple decades for cars to really go mainstream.
I think you're trying to insinuate that electric cars are decades away from going mainstream. But the first electric car was built decades _ago_, and electric cars are mainstream today. So I suppose you could argue it's a similar timeline; we're just decades after Bertha Benz's famous drive.
> Globally, around 1-in-4 new cars sold were electric in 2023. In Norway, this share was over 90%, and in China, it was almost 40%.