Vibe-wise, it seems like progress is slowing down and recent models aren't substantially better than their predecessors. But it would be interesting to take a well-trusted benchmark and plot max_performance_until_date(foreach month). (Too bad the aider benchmark changed recently and doesn't include many older models; https://aider.chat/docs/leaderboards/by-release-date.html hasn't been updated in a while with newer stuff, and the new benchmark doesn't have the classic models such as 3.5, 3.5 turbo, 4, or claude 3 opus.)
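A sketch of the data prep for the suggested plot (the scores and months below are made up for illustration, not real benchmark numbers): for each release, keep the best score achieved by any model released up to and including that month.

```javascript
// Hypothetical releases; the running max flattens wherever progress stalls.
const releases = [
  { month: '2023-03', score: 42 },
  { month: '2023-11', score: 57 },
  { month: '2024-06', score: 55 }, // a weaker later release doesn't lower the curve
];

let best = -Infinity;
const curve = releases.map(({ month, score }) => {
  best = Math.max(best, score);          // max_performance_until_date
  return { month, best };
});

console.log(curve.at(-1).best); // 57
```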
I think that we can't expect continuous progress either, though. In computer science progress is often discrete and unexpected. Computer chess was basically stagnant until one team broke through, and even the evolution of species often behaves in a punctuated way rather than as a sum of many small adaptations. I'm much more interested in (worried about) what the world will be like in 30 years than in the next 5.
It's hard to say. Historically, new discoveries in AI often generated great excitement and high expectations, followed by some progress, then stalling, disillusionment and an AI winter. Maybe this time it will be different. Either way, what has been achieved so far is already a huge deal.
This is why TC39 needs to work on fundamental language features like protocols. In Rust, you can define a new trait and impl it for existing types. This still has flaws (the orphan rule prevents issues but causes bloat), but in a dynamic language with unique-symbol capabilities it would definitely be easier to come up with something.
But this does leak the "trait conformance" globally; it's unsafe because we don't know if some other code wants its own implementation of dispose injected into this class, whether we're fighting over it, whether some key iteration is going to get confused, etc...
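A concrete sketch of that leak (`Resource` stands in for a host class like `ImageBitmap`; the polyfill line is only for runtimes without `Symbol.dispose`): patching the prototype makes the conformance visible to all code, globally.

```javascript
// Polyfill so the snippet runs on runtimes that predate Symbol.dispose.
Symbol.dispose ??= Symbol('Symbol.dispose');

class Resource {} // stand-in for a host class like ImageBitmap

// Module A injects its implementation onto the shared prototype...
Resource.prototype[Symbol.dispose] = function () { this.closed = true; };

// ...and module B, which never asked for one, now sees it too:
const r = new Resource();
console.log(typeof r[Symbol.dispose]); // "function" everywhere, not just in A
```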
How would a protocol work here? To say something like "oh in this file or scope, `ImageBitmap.prototype[Symbol.dispose]` should be value `x` - but it should be the usual `undefined` outside this scope"?
You could potentially use the module system to bring protocol implementations into scope. This could finally solve the monkey-patching problem. But it's a fairly novel idea, TC39 is risk-averse, browser vendors are feature-averse, and the language has complexities that create issues with most of the more interesting ideas.
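A hedged sketch of what module-scoped conformance could look like today (this registry API is invented for illustration; no such TC39 feature exists): implementations live in a table you import, not on the prototype, so only code holding the table sees them.

```javascript
// Per-scope protocol table: "impl Disposable for Resource" lives here,
// not on Resource.prototype, so nothing leaks globally.
const Disposable = new Map();

class Resource { constructor() { this.closed = false; } }

// Registering the implementation is visible only to importers of Disposable.
Disposable.set(Resource, (self) => { self.closed = true; });

function dispose(obj) {
  const impl = Disposable.get(obj.constructor); // scoped lookup, no prototype patch
  if (!impl) throw new TypeError('no Disposable impl in scope');
  impl(obj);
}

const r = new Resource();
dispose(r);
console.log(r.closed); // true, and Resource.prototype is untouched
```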
While I'd like to have it, it doesn't stop me from writing a great deal of production code without those niceties.
When it came time for me to undo all the async-trait library hack stuff I wrote after the feature landed in stable, I realized I wasn't really held back by not having it.
Async closures landed in stable recently and have been a nice QoL improvement, although I had gotten used to working around their absence well enough that they haven't been revolutionary from an "enabling new architectural patterns" perspective or anything like that.
I very rarely have to care about future pinning; mostly it's just calling the pin macro when working with streams sometimes.
I'm an incredibly happy user of nushell, which brings the best features of other shells (terse pipelining syntax) and the best features of well-designed scripting languages (functions, modules, lexical scope, data structures, completely optional types) together in one awesome package that also comes with editor (LSP) support and excellent documentation.
(The intro page may be a bit misleading. You can freely mix-and-match existing unstructured commands with nushell's built-in structured commands in the same pipeline, as long as you convert to/from string streams - it's not mandatory to use the structured built-ins. For example, if an existing CLI tool has JSON output, you can use `tool | from json` to turn it into structured data. There are also commands like `detect columns` that parse classic columnar output, and so on - the tools for mixing structured and unstructured data are convenient and expressive.)
Addendum: It's not my login shell. I run it ad-hoc as soon as the command pipeline I'm writing starts getting too complicated, or to write scripts (which of course can be run from outside nushell too, so long as they have the correct shebang).
Google doesn't really say it can't find an answer; instead it finds less relevant (irrelevant) search results. LLMs hallucinate, while search engines display irrelevance.
Indeed, cognitive load is not the only thing that matters. Non-cognitive toil is also a problem and often enough it doesn't get sufficient attention even when things get really bad.
We do need better code review tools though. We also need to approach that process as a mechanism of effectively building good shared understanding about the (new) code, not just "code review".
> My react code now doesn't look very different to my react code in 2019
Server components, the Next app directory breaking emotion for styling, stricter rules for hooks, the compiler breaking change-tracking libraries such as MobX while removing the need for useMemo and the like. And this is just off the top of my head in React land.
You're complaining about MPA. Maybe if you had chosen SPA with vanilla React you would've noticed that the last 8 years' flurry of conversation and innovation is specifically not about SPA. The conversation around SPA has been frozen for nearly a decade.
Unless maybe you want to start talking about CRDTs? But that's kind of niche, and CRDTs are actually an old conversation, one which started before React was first released. That should give you a sense of the pace of conversation for SPAs.
Also, because Next broke emotion, toolkits such as Mantine broke backward compatibility in order to move off of emotion, so second-order instability effects still hit SPAs.
If you have a very large project which uses MobX, I don't think the amount of warning matters. You'll most likely have to (eventually) completely rewrite a large amount of code, to the point where it might be easier to migrate to SolidJS.
If you have a large project then it's your victory or fault for whatever comes of using software marked as not ready. The React team has been absolutely clear that it's not ready.
On an aside, SolidJS has been taking a huge momentum hit recently, I think the drop in downloads is a lagging indicator. It's not an easy space, I love SolidJS and recognize that Ryan Carniato likely took a career hit to work on Solid, but my bet is that Solid will fail to catch critical mass. Really sad as I love the Solid experience.
That promise doesn't really make sense to me - although partial compilation will likely ameliorate most migration pain for library dependencies if they come pre-compiled with the compiler.
Code written with the React compiler in mind will inevitably be MUCH slower without the compiler, as it's not going to manually add any memoization. Likely to the point where it's not going to be usable without it.
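A rough illustration of that cliff (`memoOnce` is a stand-in for what `useMemo` or the compiler's cache does, not React's actual mechanism): compiler-oriented code omits the manual cache, so without the compiler every render pays the recompute.

```javascript
// Caches a derived value for as long as the input reference is unchanged,
// roughly what manual useMemo (or the compiler) provides per render.
function memoOnce(fn) {
  let lastArg, lastResult, called = false;
  return (arg) => {
    if (!called || arg !== lastArg) {   // recompute only when the input changes
      lastArg = arg;
      lastResult = fn(arg);
      called = true;
    }
    return lastResult;
  };
}

let computeCount = 0;
const sortItems = memoOnce((items) => {
  computeCount++;                       // counts actual recomputations
  return [...items].sort();
});

const items = [3, 1, 2];
sortItems(items);
sortItems(items);                       // second "render": served from cache
console.log(computeCount); // 1 — without the cache this would be 2
```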
> Implementing a basic "useState" and "runFunctionComponent" is easier than most third-semester CS assignments.
I'm not sure it is. There is a global dispatcher that finds the right component state to use and increments a counter to ensure that the correct state "slot" for that component is used. There are also a bunch of mechanisms to prevent slot mismatches. I guess you could make a loose analogy with writing custom allocators, except instead of the binding knowing the address, the order of calling useState determines the address :) Really not that simple.
> I'm not sure it is. There is a global dispatcher that finds the right component state to use and increments a counter to ensure that the correct state "slot" for that component is used.
Explicitly not part of that proposed task. We're talking about a basic hooks implementation with useState here, not how react deals with its tree of components.
For the sake of the task, runFunctionComponent just needs a minimal signature: runFC would wrap the call to the function component to pass the appropriate props and store the return value, freeing students from thinking about that. Also specify that the function doesn't need to be re-entrant.
In either case it won't need to concern itself with what the component returns (children, whatever), matching children to tree nodes, or scheduling, at all. It would be nonsense to cram that logic in there anyhow.
What you were talking about would make a good later task: easy to get a basic implementation, but hard to get an implementation that isn't subtly wrong in some way. Plus you can slap on extra goals like supporting keys. Lots of room to score students beyond just pass/fail.
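A minimal sketch of the basic exercise being described (`runFC`, the slot array, and `Counter` are all assumptions for illustration, not React internals): `useState` backed by call-order slots, with a non-re-entrant `runFC` per the task description.

```javascript
let slots = [];        // persisted state, one slot per useState call
let cursor = 0;        // next slot index; reset on every render
let rerender = () => {};
let lastRender;        // runFC stores the component's return value here

function useState(initial) {
  const i = cursor++;                    // slot identity = call order
  if (slots.length <= i) slots.push(initial);
  const setState = (value) => { slots[i] = value; rerender(); };
  return [slots[i], setState];
}

function runFC(component, props) {
  rerender = () => runFC(component, props); // non-re-entrant, per the task
  cursor = 0;                               // restart slot numbering
  lastRender = component(props);
  return lastRender;
}

// Usage: a two-slot component, showing slots staying stable across renders.
function Counter() {
  const [n, setN] = useState(0);
  const [label] = useState('count');
  return { text: `${label}: ${n}`, inc: () => setN(n + 1) };
}

runFC(Counter, {}).inc();   // slots[0] goes 0 -> 1 and triggers a rerender
console.log(lastRender.text); // "count: 1"
```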