
Vibe-wise, it seems like progress is slowing down and recent models aren't substantially better than their predecessors. But it would be interesting to take a well-trusted benchmark and plot max_performance_until_date(month) for each month. (Too bad aider's benchmark changed recently: https://aider.chat/docs/leaderboards/by-release-date.html hasn't been updated with newer models in a while, and the new benchmark doesn't include the classic models such as 3.5, 3.5 turbo, 4, or claude 3 opus.)
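
Roughly this, as a sketch (TypeScript; the data shape and function name are made up):

    type Result = { model: string, released: string /* "YYYY-MM" */, score: number }

    // running best: the best score achieved by any model released up to each month
    function maxPerformanceUntilDate(results: Result[]): Map<string, number> {
      const byMonth = new Map<string, number>()
      let best = -Infinity
      for (const r of [...results].sort((a, b) => a.released.localeCompare(b.released))) {
        best = Math.max(best, r.score)
        byMonth.set(r.released, best)
      }
      return byMonth
    }

If progress were really slowing down, that curve should be flattening.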

I don't think we can expect continuous progress either, though. In computer science it's often discrete and unexpected: computer chess was basically stagnant until one team made a breakthrough, and even the evolution of species often behaves in a punctuated way rather than as a sum of many small adaptations. I'm much more interested in (worried about) what the world will be like in 30 years than in the next 5.

It's hard to say. Historically, new discoveries in AI often generated great excitement and high expectations, followed by some progress, then stalling, disillusionment, and an AI winter. Maybe this time will be different. Either way, what has been achieved so far is already a huge deal.

Why dagger and not just... any language? (Nushell for example https://www.nushell.sh/)


Because I'm typically building and running tests in containers in CI, which is what dagger is for.

nu is my default shell. Note that I am not talking about dagger shell. https://dagger.io/blog/a-shell-for-the-container-age-introdu...


This is why TC39 needs to work on fundamental language features like protocols. In Rust, you can define a new trait and impl it for existing types. That approach still has flaws (the orphan rule prevents conflicts but causes bloat), but a dynamic language with unique symbol capabilities should find it easier to come up with something.


Dynamic languages don't need protocols. If you want to make an existing object "conform to Disposable", you can:

    function DisposableImageBitmap(bitmap) {
      // add a dispose implementation only if the object doesn't already have one
      bitmap[Symbol.dispose] ??= () => bitmap.close()
      return bitmap
    }
    
    using bitmap = DisposableImageBitmap(await createImageBitmap(image))
Or if you want to ensure all ImageBitmaps conform to Disposable:

    ImageBitmap.prototype[Symbol.dispose] = function() { this.close() }
But this does leak the "trait conformance" globally; it's unsafe because we don't know whether some other code wants its own dispose implementation injected into this class, whether we'd be fighting over the same slot, whether some key iteration elsewhere is going to get confused, etc...

How would a protocol work here? To say something like "oh in this file or scope, `ImageBitmap.prototype[Symbol.dispose]` should be value `x` - but it should be the usual `undefined` outside this scope"?


You could potentially use the module system to bring protocol implementations into scope. This could finally solve the monkey-patching problem. But it's a fairly novel idea, TC39 is risk-averse, browser vendors are feature-averse, and the language has complexities that create issues with most of the more interesting ideas.
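
To sketch the idea in userland terms (implementDispose and scoped are names I just made up - an approximation of the concept, not anything TC39 has proposed):

    // dispose-registry.js - a module-local registry instead of patching prototypes
    const impls = new WeakMap()

    // register a dispose implementation for a constructor; only code that
    // imports `scoped` from this module ever sees it
    export function implementDispose(ctor, fn) {
      impls.set(ctor, fn)
    }

    // wrap a value so `using` sees the registered implementation,
    // without mutating the value or its prototype
    export function scoped(value) {
      const fn = impls.get(value.constructor)
      if (!fn || Symbol.dispose in value) return value
      return new Proxy(value, {
        get(target, key) {
          if (key === Symbol.dispose) return () => fn(target)
          const v = target[key]
          // bind methods so natives like ImageBitmap#close keep the right `this`
          return typeof v === 'function' ? v.bind(target) : v
        }
      })
    }

    // elsewhere:
    //   implementDispose(ImageBitmap, bitmap => bitmap.close())
    //   using bitmap = scoped(await createImageBitmap(image))

A real protocols feature would presumably let the language do that lookup based on what's imported in the current module, but the point is that the conformance stays local to whoever opts in.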


Isn't disconnecting a resize observer a poor example of this feature?


I couldn't come up with a reasonable one off the top of my head, but it's for illustration - please swap in a better web api in your mind

(edit: changed to ImageBitmap)


How are async closures / closure types coming along, especially WRT future pinning?


While I'd like to have it, it doesn't stop me from writing a great deal of production code without those niceties.

When it came time for me to undo all the async-trait library hack stuff I wrote after the feature landed in stable, I realized I wasn't really held back by not having it.


Async closures landed in stable recently and have been a nice QoL improvement, although I had gotten used to working around their absence well enough previously that they haven’t been revolutionary yet from the like “enabling new architectural patterns” perspective or anything like that.

I very rarely have to care about future pinning, mostly just to call the pin macro when working with streams sometimes.


I'm an incredibly happy user of nushell, which brings all the best features of other shells (terse pipelining syntax) and all the best features of well-designed scripting languages (functions, modules, lexical scope, data structures, completely optional types) together in one awesome package that also comes with editor (LSP) support and excellent documentation.

https://www.nushell.sh/

(The intro page may be a bit misleading. You can freely mix and match existing unstructured commands and nushell's built-in structured commands in a pipeline, as long as you convert to/from string streams - it's not mandatory to use the structured built-ins. For example, if an existing CLI tool has JSON output, you can use `tool | from json` to turn it into structured data. There are also commands like `detect columns` that parse classic column output, and so on - the tools for mixing structured and unstructured data are convenient and expressive.)

Some highlights:

- automatic command-line arguments and help, by defining a main function and adding comments to each argument - e.g. https://github.com/nushell/nushell/discussions/11969

- run commands with controlled parallelism: https://www.nushell.sh/commands/docs/par-each.html

- easy parsing of raw input https://www.nushell.sh/commands/docs/parse.html

- support for a wide variety of data formats https://www.nushell.sh/commands/categories/formats.html

- built-in support for talking to SQLite databases: https://www.nushell.sh/book/loading_data.html#sqlite

- rich functional programming facilities that blend with the pipeline syntax: https://www.nushell.sh/book/nushell_map_functional.html

edit: it looks like Mitchell Hashimoto was recently impressed too https://x.com/mitchellh/status/1907849319052386577

Addendum: It's not my login shell. I run it ad-hoc as soon as the command pipeline I'm writing starts getting too complicated, or to write scripts (which of course can be run from outside nushell too, so long as they have the correct shebang).


Google doesn't really say it can't find an answer; instead it finds less relevant (irrelevant) search results. LLMs hallucinate, while search engines display irrelevance.


Indeed, cognitive load is not the only thing that matters. Non-cognitive toil is also a problem and often enough it doesn't get sufficient attention even when things get really bad.

We do need better code review tools though. We also need to approach that process as a mechanism of effectively building good shared understanding about the (new) code, not just "code review".


> My react code now doesn't look very different to my react code in 2019

Server components; the Next app directory breaking Emotion for styling; stricter rules for hooks; the compiler (which removes the need for useMemo and the like) breaking change-tracking libraries such as MobX. And this is just off the top of my head in React land.


You're complaining about MPAs. If you had chosen an SPA with vanilla React, you would've noticed that the flurry of conversation and innovation over the last 8 years has specifically not been about SPAs. The conversation around SPAs has been frozen for nearly a decade.

Unless maybe you want to start talking about CRDTs? But that's kind of niche, and CRDTs are actually an old conversation, one which started before React was first released. That should give you a sense of the pace of conversation around SPAs.


Also, because Next broke Emotion, toolkits such as Mantine broke backward compatibility in order to move off of Emotion, so second-order instability effects still affect SPAs.


React Compiler breaking MobX is a purely SPA concern; so are the progressively stricter rules of hooks. It's bad.


React Compiler is in true beta™ mode at the moment, and you receive plenty of warning. It's basically not released.


If you have a very large project which uses MobX, I don't think the amount of warning matters. You'll most likely have to (eventually) completely rewrite a large amount of code, to the point where it might be easier to migrate to SolidJS.


If you have a large project then it's your victory or fault for whatever comes of using software marked as not ready. The React team has been absolutely clear that it's not ready.

As an aside, SolidJS has been taking a huge momentum hit recently; I think the drop in downloads is a lagging indicator. It's not an easy space. I love SolidJS and recognize that Ryan Carniato likely took a career hit to work on Solid, but my bet is that Solid will fail to catch critical mass. Really sad, as I love the Solid experience.

https://npmtrends.com/astro-vs-solid-js


> Server components

Nope, not using.

> Next application directory breaking emotion for styling

I'm not even sure what this is.

> stricter rules for hooks

I've always been strict with the code, no changes have been required.

> the compiler

The compiler is still in beta, I've not even tried it, let alone let it near production code. I'm not that excited by it tbh.

So you've pretty much confirmed what I said. None of these things were that important to jump on.


Important or not, libraries you depend on may jump on it and abandon backward compatibility


They may. One of the promises of the compiler, though, is that you don't need to change how you write your code - it will figure out how to run it efficiently.


That promise doesn't really make sense to me - although partial compilation will likely ameliorate most migration pain for library dependencies if they come pre-compiled with the compiler.

Code written with the React Compiler in mind will inevitably be MUCH slower without the compiler, since nobody is going to manually add memoization anymore - likely to the point where it's not usable without it.
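
To illustrate (made-up component and helper, not from any real codebase):

    import { useMemo } from 'react'

    // stands in for any expensive per-render derivation
    function expensiveFilter(products, query) {
      return products.filter(p => p.name.includes(query))
    }

    // "compiler-style": no manual memoization - the React Compiler is expected
    // to cache the expensiveFilter result between renders while inputs are unchanged
    function ProductList({ products, query }) {
      const visible = expensiveFilter(products, query)
      return <ul>{visible.map(p => <li key={p.id}>{p.name}</li>)}</ul>
    }

    // what you'd write by hand today to get the same behavior without the compiler
    function ProductListManual({ products, query }) {
      const visible = useMemo(() => expensiveFilter(products, query), [products, query])
      return <ul>{visible.map(p => <li key={p.id}>{p.name}</li>)}</ul>
    }

Sprinkle enough of the first style across a large tree, and running it without the compiler means re-deriving everything on every render.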


> Implementing a basic "useState" and "runFunctionComponent" is easier than most third-semester CS assignments.

I'm not sure it is. There is a global dispatcher that finds the right component state to use and increments a counter to ensure that the correct state "slot" for that component is used. There are also a bunch of mechanisms to prevent slot mismatches. I guess you could make a loose analogy with writing custom allocators, except instead of the binding knowing the address, the order of calling useState determines the address :) Really not that simple.


> I'm not sure it is. There is a global dispatcher that finds the right component state to use and increments a counter to ensure that the correct state "slot" for that component is used.

Explicitly not part of that proposed task. We're talking about a basic hooks implementation with useState here, not how react deals with its tree of components.

For the sake of the task the signature of runFunctionComponent just needs to be

    runFunctionComponent<P, R>(fc: (props: P) => R, props: P, hooks?: any[]): { hooks: any[], returnValue: R }
or even dumbed down to

    runWithHooks(runFC: () => void, hooks?: any[]): any[]
where runFC would wrap the call to the function component to pass the appropriate props and store the return value, freeing students from thinking about that. Also specify the function doesn't need to be re-entrant.

In either case it won't need to concern itself with what the component returns (children, whatever), matching children to tree nodes, or scheduling, at all. Would be nonsense to cram that logic in there anyhow.

What you were talking about would make a good later task: easy to get a basic implementation, but hard to get an implementation that isn't subtly wrong in some way. Plus you can slap on extra goals like supporting keys. Lots of room to score students beyond just pass/fail.
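
For anyone curious, a bare-bones sketch of that dumbed-down version could look something like this (my own made-up module; not how React is actually structured, and deliberately not re-entrant):

    let currentHooks: any[] = []
    let slot = 0

    export function useState<T>(initial: T): [T, (next: T) => void] {
      const i = slot++                          // this call's "slot"
      if (i >= currentHooks.length) currentHooks.push(initial)
      const hooks = currentHooks                // capture the array for the setter
      return [hooks[i], (next: T) => { hooks[i] = next }]
    }

    export function runWithHooks(runFC: () => void, hooks: any[] = []): any[] {
      currentHooks = hooks                      // point useState at this run's slots
      slot = 0
      runFC()
      return currentHooks
    }

Re-running the component when a setter fires - and everything the parent comment mentions about matching state slots to components in a tree - is exactly the part this leaves out.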


IIRC reading the React source code became quite a feat around the time fibers were introduced.

