This is already true for just UI vs. API. It’s incredible that we weren’t willing to put the effort into building good APIs, documentation, and code for our fellow programmers, but we are willing to do it for AI.
I think this can kinda be explained by the fact that agentic AI more or less has to be given documentation in order to be useful, whereas other humans working with you can just talk to you if they need something. There's a lack of incentive in the human direction (and in a business setting that means priority goes to other stuff, unfortunately).
In theory AI can talk to you too but with current interfaces that's quite painful (and LLMs are notoriously bad at admitting they need help).
> agentic AI more or less has to be given documentation in order to be useful, whereas other humans working with you can just talk to you if they need something. ... In theory AI can talk to you too but with current interfaces that's quite painful (and LLMs are notoriously bad at admitting they need help).
Another framing: documentation is talking to the AI, in a world where AI agents won't "admit they need help" but will read documentation. After all, they process documentation fundamentally the same way they process the user's request.
I also think it makes a difference that an AI agent can read the docs very quickly and doesn't typically care about formatting and other presentation-level things that humans have to care about, whereas a human isn't going to read it all, and may read very little of it. I've been at places where we invested substantial time documenting things, only to have it be glanced at maybe a couple of times before becoming outdated.
The idea of writing docs for AI (but not humans) does feel a little reflexively gross, but as Spock would say, it does seem logical
The feedback loop from potential developer users of your API is excruciatingly slow and typically not a process that an API developer would want to engage in. Recruit a bunch of developers to read the docs and try it out? See how they used it after days/weeks? Ask them what they had trouble with? Organize a hackathon? Yuck. AI, on the other hand, gives you immediate feedback as to the usability of your “UAI”. It makes something, in under a minute, and you can see what mistakes it made. After you make improvements to the docs or API itself, you can effectively wipe its memory by cleaning out the context, and see if what you did helped. It’s the difference between debugging a punchcard-based computing system and one that has a fully featured REPL.
Yeah, this is so true. Well-designed APIs are also already almost good enough for AI. There really was always a ton of value in good API design, even before LLMs. Yet a lot of people still said, for varying reasons, let's just ship slop and focus elsewhere.
After reading the README, the only missing thing seems to be the equivalent of Dataview from Obsidian. Will wait for something like it before considering switching.
Wow this is such a great idea. Also the controls on mobile were top notch. No issues with random zooming, text selection, weird scrolling etc. Felt like a downloaded app.
In Clojure this isn't syntax per se: defn- and defn are both normal identifiers and are defined in the standard library. Still, I think it's a useful precedent for helping us understand how other people have thought about the minus character.
personally, i like that raku goes the other way, with exported bits of the interface explicitly tagged using `is export` (which also allows for the creation of selectably importable subsets of the module through keyed export/import with `is export(:batteries)`/`use TheModule :batteries`, e.g. for a more featureful interface with a cost not every user of the module wants to pay).
it feels more natural to me to explicitly manage what gets exported and how at a different level than the keyword used to define something. i don't dislike rust's solution per se, but if you're someone like me who still instinctually does start-of-line relative searches for definitions, suddenly `fn` and `pub fn` are separate namespaces (possibly without clear indication which has the definition i'm looking for)
Actually, a module can implement any export heuristics by supplying an EXPORT subroutine, which takes positional arguments from the `use` statement, and is expected to return a Map with the items that should be exported. For example:
sub EXPORT() { Map.new: "&frobnicate" => &sum }
would import the core's "sum" routine, but call it "frobnicate" in the imported scope.
Note that the EXPORT sub can also be a multi, if you'd like different behaviour for different arguments.
neat! i've never needed more than i could get away with by just sneaking the base stuff into the mandatory exports and keying the rest off a single arg, but that'll be handy when i do.
For me, domain modeling means capturing as much information about the domain you are modeling as possible in the types and data structures you use. Most of the time that ends up meaning using unions to make illegal states unrepresentable. For example, I have not seen a database-native approach to saving union types; in that case an extra domain layer becomes mandatory.
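A minimal sketch of what I mean, in TypeScript (the Payment shape and field names are invented purely for illustration):

    // A payment is either pending, settled, or failed; the fields that only
    // make sense for one state exist only in that branch of the union.
    type Payment =
      | { status: "pending"; requestedAt: Date }
      | { status: "settled"; requestedAt: Date; settledAt: Date; receiptId: string }
      | { status: "failed"; requestedAt: Date; reason: string };

    // Illegal states like "settled but no receiptId" simply don't type-check.
    // A flat table with nullable settled_at/receipt_id/reason columns can't
    // express that constraint, so the mapping between rows and the union is
    // where the extra domain layer comes in.
    function toRow(p: Payment): Record<string, unknown> {
      return {
        status: p.status,
        requested_at: p.requestedAt,
        settled_at: p.status === "settled" ? p.settledAt : null,
        receipt_id: p.status === "settled" ? p.receiptId : null,
        reason: p.status === "failed" ? p.reason : null,
      };
    }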
I am not sure whether the videos are representative of real-life performance or it is a marketing stunt, but it sure looks impressive. Reminds me of the robot arm in Iron Man 1.
It's an impressive demo, but perhaps you are misremembering Jarvis from Iron Man, which is not only far faster but is effectively a full AGI system even at that point.
Sorry if this feels pedantic, perhaps it is. But it seems like an analogy that invites pedantry from fans of that movie.
The robot arms in the movie are implied to have their own AIs driving them; Tony speaks to the malfunctioning one directly several times throughout the movie.
Jarvis is AGI, yes, but is not what's being referred to here.
Not specifically trained on it, but most likely the vision models have seen it. Vision models like Gemini Flash/Pro are already good at vision tasks on phones[1] - like clicking on UI elements and scrolling to find stuff, etc. The planning of what steps to perform is also quite good with the Pro model (slightly worse than GPT-4o in my opinion).
> Go also offers excellent control of memory layout and allocation (both on an object and field level) without requiring that the entire codebase continually concern itself with memory management.
I guess this sheds a bit of light on the "why not Rust" debate.
I've written parsers and compilers in Rust. I used DAGs in a fairly standard way for stuff that needs it like references. I also work on a complicated VM with a custom memory allocation strategy professionally. Some of my other Rust projects include: a frontend web application that runs in the browser, a client library that has facades for Node, Python, and Ruby, and a Kafka Consumer that handles ~20 GiB/second of data throughput with relatively modest resourcing.
What he's saying here doesn't make any sense. It sounds like they threw someone who doesn't know Rust at all, who didn't bother to ask any questions or reference any existing code, into writing custom memory management strategies and data structures, and then bounced off the surface. That isn't how you do things in any language; it's bizarre and sounds like Rust was set up to fail. I wouldn't expect this scenario to succeed in any language on a non-trivial project like the TypeScript compiler.
What's even more bizarre is TypeScript actually has better support for ADTs than Golang (which is especially impactful when writing things like type checkers and compilers). I don't even _like_ TypeScript and I can see that. I've written a five-figure number of lines of Golang for some work projects like a custom Terraform provider, and it's horrific to model even just a JSON schema in Golang's type system. Some of the problem is Hashicorp's terrible SDK, but not all of it by any means.
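To make the ADT point concrete, here's a rough TypeScript sketch (a toy JSON-schema-like shape, not any real SDK's types):

    // A small slice of a JSON-schema-like definition as a tagged union.
    type Schema =
      | { kind: "string"; maxLength?: number }
      | { kind: "number"; minimum?: number }
      | { kind: "array"; items: Schema }
      | { kind: "object"; properties: Record<string, Schema> };

    function describe(s: Schema): string {
      // The compiler checks that this switch is exhaustive: drop a case and
      // the `never` assignment below stops compiling. In Go you'd model the
      // same thing as one struct full of optional pointers plus runtime checks.
      switch (s.kind) {
        case "string": return "a string";
        case "number": return "a number";
        case "array": return `an array of ${describe(s.items)}`;
        case "object": return `an object with ${Object.keys(s.properties).length} properties`;
        default: {
          const unreachable: never = s;
          return unreachable;
        }
      }
    }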
Usually the problem is someone just not knowing how to write Rust. The modal subjective experience of writing Rust code as an experienced user is "Python with nice types, async, and executes fast." If you need to handle stuff with weird non-deterministic lifetimes you can use stuff like `slotmap`. If you need DAGs, use `petgraph`. I think pitching Rust as a "low-level systems language" might be misleading some people on how ~95% of Rust users are actually working in the language.
I once was in charge of a legacy C codebase that passed around data files, including to customers all around the world. The “file format” was nothing more than a memory dump of the C data structures. Oh, and the customers were running this software on various architectures (x86, big-endian SPARC, Alpha, Itanium) and these data files had to all be compatible.
Occasionally, we had to create new programs that would do various things and sit in the middle of this workflow - read the data files, write out new ones that the customers could use with no change to their installed software. Because the big bosses could never make up their minds, we at various times used C#, Go, Python, and even C.
They all work just fine in a production environment. Seriously, it’s fine to choose any of them. But C# stands out as having the ugliest and sketchiest code for dealing with it. Still, it works just fine!
More telling, though: I used this scenario in interview questions many times. How would you approach it with C#? 99% of the time I would get a blank stare, followed by “I don’t think it’s possible”. If you want developers that can maintain code that can do this, perhaps don’t choose C# :)
Two things that they have brought up in interviews: they don't seem to believe that AOT-compiled C# is mature enough or that it can give the best possible performance on all their supported platforms, and their current codebase consists of more or less pure functions acting on simple data structures, so since they want the port to be as 1:1 as possible, idiomatic Go is closer to that style than idiomatic C#.
Requiring all TypeScript users to install the .NET runtime will probably kill adoption, especially on Linux build servers. It still requires custom Microsoft repos, if they're even available for your distro, and is barely upstreamed.
For Go, you just run the binary without any bullshit. This can easily be wrapped in an npm package to keep the current experience (`npm install` and it works) on all platforms.
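For what it's worth, this is roughly how tools like esbuild already distribute a native binary through npm; the package names below are made up purely for illustration:

    {
      "name": "hypothetical-tsc-go",
      "bin": { "tsc-go": "bin/run.js" },
      "optionalDependencies": {
        "hypothetical-tsc-go-linux-x64": "1.0.0",
        "hypothetical-tsc-go-darwin-arm64": "1.0.0",
        "hypothetical-tsc-go-win32-x64": "1.0.0"
      }
    }

Each platform package carries the prebuilt binary and declares `os`/`cpu` fields so npm only installs the one that matches, and bin/run.js is a tiny shim that spawns it.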
With the M1 Air, Apple had to blow us away. People, including me, had a hard time believing Apple's claims, and many were coping by looking at the Keynote charts and assuming that Apple must have tricked everyone by not giving proper scale metrics, etc.
When people got their hands on the real device, it was slaying almost everything on the market, and soon it was clear that this thing was a revolution.
You don't one-up that easily. Apple claims a 2x performance improvement over the M1 Air, and I'm sure it's mostly true, but the M1 Air was so far ahead that for a lot of people the workloads haven't caught up yet.
At this very moment I have 3 Xcode projects open, Safari has 147 tabs open and is consuming 11GB of my 16GB of RAM, and my SSD lifetime has dropped to 98% due to frequent swap hits, and yet I'm perfectly fine with the performance and I'm not looking for an immediate replacement.
I can't imagine 147 tabs. I have 9 pinned tabs and maybe ... 6 other tabs open if I'm particularly busy. I also turn off my work laptop at the end of the day, because all of my state is restored when this handful of tabs comes back.
Maybe this is just me managing my ADHD, but when I see people with hundreds of tabs open I just can't imagine how they work. Every tab has been mashed down to its favicon and I watch them struggle to find the right one. It seems insane to me.
There are two kinds of people. <10 open tabs and >100 open tabs. Nothing in between.
I think of the >100 ones as people who have completely lost control of their lives. I'm sure they think of me as someone who needs everything to be just so and can't deal with the messiness of real life.
I have several hundred open on my M2 MBA and have no problem. Maybe it's because I use Brave? I don't know but have never had to think too much about it. I also don't have much RAM (either the base amount or up a little).
I do restart my browser once a month or so, if things ever feel less snappy than normal.
In Safari, tabs get small up to a point, then they are scrollable. Sure, there are duplicates but it's usually the homepage of HN or Twitter. I close those when encountered.
Same. Each year I tell myself I'll get the new one. Each year when the new one comes out I notice that for what I use it for my M1 Air is still completely fine.
I did some research and I'm deferring for a semester, but tbh my motivation is pretty low. From what I can tell it seems decent, but depending on circumstances it's def a much better idea to do an on-campus programme.
This is where I am, too. I have an M1 Pro and I have never loved a computer more. This thing is a beast and just about anything I throw at it is fine. I can't imagine how much better the M4 is. Unless this computer gets stolen or doused with water, I'll probably have it for at least another 3-4 years. Absolutely amazing value for my money.
Nor should you have a reason to replace it. The device is barely 4 years old. There was a time, until very recently, when laptops were expected to last 10+ years minimum with minor RAM and SSD upgrades.
I don’t know when that time was. Hardware and software requirements have been moving fast for just about forever, until actually maybe the past 5 years.
There was never a time when laptops were expected to last 10+ years.
I'm still happily using an 8GB M1 running Firefox in OSX + Firefox/VSCode/NodeJS in a Debian VM. Lots of tabs open. Both OSX and Debian can use compressed RAM.
agreed, which is awesome. the only thing that worries me is that they'll drop support for it earlier than they have to when they eventually want to force people to upgrade. I hope to get 10 years out of my M1
Everyone I know that got an M1 cheaped out on the 8GB model and is now struggling to use a browser with heavy sites and multitasking (Zoom) at the same time.
But Apple's upcharge on RAM is also disgusting, so it's hard to blame them for picking the lowest-spec model.
Totally an anecdote, but my 8gb M1 runs fine with multiple browsers/tabs, VS Code, and Spotify all open. Usually performance is only an issue for me when working with larger ML models. I wonder why others are getting worse performance? Maybe it's the specific sites they're using?
Whether it is or isn't depends on what "web browsing" someone is doing, which can cover a pretty wide range now.
One person's "web browsing" is no browser extensions, a couple of Gmail tabs, some light blog reading, and maybe something as heavy as Reddit.
Another person's "web browsing" is running multiple browser extensions like Grammarly, an adblocker, etc., along with a bunch of Gmail tabs, plus a bunch of heavy "web apps" (think: Miro, monday.com, Google Workspace/Office 365, Photoshop online), and then 10s-100s of tabs of "research" on top of that.
8GB is quickly becoming unworkable for people who fall closer to the latter group.
> Another person's "web browsing" is running multiple browser extensions like Grammarly, an adblocker, etc., along with a bunch of Gmail tabs, plus a bunch of heavy "web apps" (think: Miro, monday.com, Google Workspace/Office 365, Photoshop online), and then 10s-100s of tabs of "research" on top of that.
That's computing, not web browsing. And on a not-so-great platform for it, at that.
Do you use it as a laptop, or is it hooked up as a desktop for the most part? If the former, I'd try one of the M series in the same role and see if you notice a difference in ergonomics.
At this time (and historically), I mostly use it as a laptop, but I have also used it as a desktop for long periods with an external monitor. As a laptop, I love that it's so tiny. It’s working very well so far... but I’m afraid that at some point, I’ll have to switch to Linux or OpenCore Legacy Patcher. I’m still on macOS 11 (Big Sur).
MS Office has already stopped updating, along with some other software (though not much, most still updates without issues). As long as Firefox keeps receiving updates for my system, most things will be fine.
If those don't look like a problem for you, I'd definitely suggest giving it a try. macOS 13 should give you at least 3 more years of use out of it.
I don't think going beyond macOS 13 is worth it. macOS 14 is noticeably slower on my 2010 iMac, and there aren't any new features it can take advantage of anyway.