Hacker News | metayrnc's comments

This is already true for just UI vs. API. It’s incredible that we weren’t willing to put the effort into building good APIs, documentation, and code for our fellow programmers, but we are willing to do it for AI.


I think this can kinda be explained by the fact that agentic AI more or less has to be given documentation in order to be useful, whereas other humans working with you can just talk to you if they need something. There's a lack of incentive in the human direction (and in a business setting that means priority goes to other stuff, unfortunately).

In theory AI can talk to you too but with current interfaces that's quite painful (and LLMs are notoriously bad at admitting they need help).


> agentic AI more or less has to be given documentation in order to be useful, whereas other humans working with you can just talk to you if they need something. ... In theory AI can talk to you too but with current interfaces that's quite painful (and LLMs are notoriously bad at admitting they need help).

Another framing: documentation is talking to the AI, in a world where AI agents won't "admit they need help" but will read documentation. After all, they process documentation fundamentally the same way they process the user's request.


I also think it makes a difference that an AI agent can read the docs very quickly and doesn't typically care about formatting and other presentation-level things that humans have to care about, whereas a human isn't going to read it all, and may read very little of it. I've been at places where we invested substantial time documenting things, only to have it be glanced at maybe a couple of times before becoming outdated.

The idea of writing docs for AI (but not humans) does feel a little reflexively gross, but as Spock would say, it does seem logical.


The feedback loop from potential developer users of your API is excruciatingly slow and typically not a process that an API developer would want to engage in. Recruit a bunch of developers to read the docs and try it out? See how they used it after days/weeks? Ask them what they had trouble with? Organize a hackathon? Yuck. AI, on the other hand, gives you immediate feedback on the usability of your “UAI”. It makes something, in under a minute, and you can see what mistakes it made. After you make improvements to the docs or the API itself, you can effectively wipe its memory by clearing out the context and see if what you did helped. It’s the difference between debugging a punchcard-based computing system and one that has a fully featured REPL.


Yeah, this is so true. Well-designed APIs are also already almost good enough for AI. There really was always a ton of value in good API design before LLMs. Yet a lot of people still said, for varying reasons, let's just ship slop and focus elsewhere.


We are only willing to have the LLM generate it for AI. Don’t worry, people are writing and editing less.

And all those tenets of building good APIs, documentation, and code run counter to the incentives that produce enshittified APIs, documentation, and code.


Is there a link showing the email with the prompt?


After reading the README, the only missing thing seems to be the equivalent of Dataview from Obsidian. Will wait for something like it before considering switching.


Speaking of which, have you seen the new Bases feature in Obsidian? https://help.obsidian.md/bases

Reminiscent of Dataview.


That looks awesome!


Highly recommend this YouTube channel for anyone interested in the problem-solving capabilities of these birds.

https://youtu.be/A5YyTHyaNpo?si=cLj4e4heV7kiXq5v


Wow this is such a great idea. Also the controls on mobile were top notch. No issues with random zooming, text selection, weird scrolling etc. Felt like a downloaded app.


I like the final approach. What about

-def sayHi()

Or

def- sayHi()

I feel like having a minus communicates the intent of taking the declaration out of the public exports of a module.


There's some prior art here from Clojure, where defn- creates private definitions and defn public ones:

https://clojuredocs.org/clojure.core/defn-

In Clojure this isn't syntax per se: defn- and defn are both normal identifiers and are defined in the standard library, but still, I think it's useful precedent for helping us understand how other people have thought about the minus character.


That's a clever way to think of "-". :) I'll think about that.


It might be from me being so used to it, but I do like Elixir’s `def`/`defp` second best to Rust’s `pub`


personally, i like that raku goes the other way, with exported bits of the interface explicitly tagged using `is export` (which also allows for the creation of selectably importable subsets of the module through keyed export/import with `is export(:batteries)`/`use TheModule :batteries`, e.g. for a more featureful interface with a cost not every user of the module wants to pay).

it feels more natural to me to explicitly manage what gets exported and how at a different level than the keyword used to define something. i don't dislike rust's solution per se, but if you're someone like me who still instinctually does start-of-line relative searches for definitions, suddenly `fn` and `pub fn` are separate namespaces (possibly without clear indication which has the definition i'm looking for)


Actually, a module can implement any export heuristics by supplying an EXPORT subroutine, which takes positional arguments from the `use` statement, and is expected to return a Map with the items that should be exported. For example:

    sub EXPORT() { Map.new: "&frobnicate" => &sum }
would import the core's "sum" routine, but call it "frobnicate" in the imported scope.

Note that the EXPORT sub can also be a multi, if you'd like different behaviour for different arguments.


neat! i've never needed more than i could get away with by just sneaking the base stuff into the mandatory exports and keying the rest off a single arg, but that'll be handy when i do.


For me, domain modeling means capturing as much information as possible about the domain you are modeling in the types and data structures you use. Most of the time that ends up meaning using unions to make illegal states unrepresentable. For example, I have not seen a database-native approach to saving union types to databases; in that case, using a separate domain layer becomes mandatory.

For context: https://fsharpforfunandprofit.com/posts/designing-with-types...
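
To illustrate the "illegal states unrepresentable" idea, here is a minimal sketch in TypeScript (the `Contact` type is made up for illustration, not taken from the linked article):

    // A contact must be reachable by email, phone, or both.
    // An { email?: string; phone?: string } shape would allow the illegal
    // state where neither is present; a union rules it out at compile time.
    type Contact =
      | { kind: "email"; email: string }
      | { kind: "phone"; phone: string }
      | { kind: "both"; email: string; phone: string };

    function describe(c: Contact): string {
      switch (c.kind) {
        case "email": return `email ${c.email}`;
        case "phone": return `phone ${c.phone}`;
        case "both": return `email ${c.email} or phone ${c.phone}`;
      }
    }

Saving this to a typical relational table tends to mean nullable columns plus a discriminator, which is exactly the mapping a separate domain layer ends up owning.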


I am not sure whether the videos are representative of real-life performance or just a marketing stunt, but it sure looks impressive. Reminds me of the robot arm in Iron Man 1.


AI demos and even live presentations have exacerbated my trust issues. The tech has great uses, but there is no modesty from the proprietors.


Google in particular has had some egregiously fake AI demos in the past.


> Reminds me of the robot arm in Iron Man 1.

It's an impressive demo but perhaps you are misremembering Jarvis from Iron Man which is not only far faster but is effectively a full AGI system even at that point.

Sorry if this feels pedantic, perhaps it is. But it seems like an analogy that invites pedantry from fans of that movie.


The robot arms in the movie are implied to have their own AIs driving them; Tony speaks to the malfunctioning one directly several times throughout the movie.

Jarvis is AGI, yes, but is not what's being referred to here.


Ah good point!


i thought it was really cool when it picked up the grapes by the vine

edit: it didn't.


Here it looks like it's squeezing a grape instead: https://www.youtube.com/watch?v=HyQs2OAIf-I&t=43s It's a bit hard to tell whether it remained intact.


The leaf on the darker grapes looks like a fabric leaf, I'd kinda bet they're all fake for these demos / testing.

Don't need the robot to smash a grape when we can use a fake grape that won't smash.


The bananas are clearly plastic and make a "doink" noise when dropped into the bowl.


Haha show the whole room and work either on a concrete floor or a transparent table.

This video reeks of the same shenanigans as perpetual motion machine videos.


welp i guess i should get my sight checked


And how it just dropped the grapes, as well as the banana. If they were real fruits, you wouldn't want that to happen.


I remember a cartoon where a quality inspection guy smashes bananas with a "certified quality" stamp before they go into packaging.


[flagged]


This is, nearly exactly, like saying you've seen screens slowly display text before, so you're not impressed with LLMs.

How it's doing it is the impressive part.


the difference is the dynamic nature of things here.

Current arms and their workspaces are calibrated to mm. Here it's more messy.

Older algorithms are more brittle than having a model do it.


For the most part that's been on known objects; these are objects it has not seen.


Not specifically trained on them, but most likely the vision models have seen them. Vision models like Gemini Flash/Pro are already good at vision tasks on phones [1], like clicking on UI elements and scrolling to find stuff. The planning of which steps to perform is also quite good with the Pro model (slightly worse than GPT-4o in my opinion).

1. A framework to control your phone using Gemini - https://github.com/BandarLabs/clickclickclick


That's a really cool framework you've linked.


> Go also offers excellent control of memory layout and allocation (both on an object and field level) without requiring that the entire codebase continually concern itself with memory management.

I guess this sheds a bit of light on the "why not Rust" debate.


I've written parsers and compilers in Rust. I used DAGs in a fairly standard way for stuff that needs it like references. I also work on a complicated VM with a custom memory allocation strategy professionally. Some of my other Rust projects include: a frontend web application that runs in the browser, a client library that has facades for Node, Python, and Ruby, and a Kafka Consumer that handles ~20 GiB/second of data throughput with relatively modest resourcing.

What he's saying here doesn't make any sense. It sounds like they threw someone who doesn't know Rust at all, who didn't bother to ask any questions or reference any existing code, into writing custom memory management strategies and data structures, and then bounced off the surface. That isn't how you do things in any language; it's bizarre, and it sounds like Rust was set up to fail. I wouldn't expect this scenario to succeed in any language on a non-trivial project like the TypeScript compiler.

What's even more bizarre is TypeScript actually has better support for ADTs than Golang (which is especially impactful when writing things like type checkers and compilers). I don't even _like_ TypeScript and I can see that. I've written 5-figures-ish of Golang for some work projects like a custom Terraform provider and it's horrific to model just a JSON schema in Golang's type system. Some of the problem is Hashicorp's terrible SDK but not all of it by any means.
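
To make the ADT point concrete, here is a minimal TypeScript sketch (purely illustrative, not taken from the TypeScript compiler) of the kind of tagged union type checkers and JSON modeling lean on; in Go you would typically fall back to interfaces plus runtime type switches:

    // A JSON value as a discriminated union. The compiler knows every
    // possible shape; under strict settings it flags a missing case below.
    type Json =
      | { kind: "null" }
      | { kind: "bool"; value: boolean }
      | { kind: "number"; value: number }
      | { kind: "string"; value: string }
      | { kind: "array"; items: Json[] }
      | { kind: "object"; fields: Record<string, Json> };

    function show(v: Json): string {
      switch (v.kind) {
        case "null": return "null";
        case "bool": return String(v.value);
        case "number": return String(v.value);
        case "string": return JSON.stringify(v.value);
        case "array": return `[${v.items.map(show).join(", ")}]`;
        case "object":
          return `{${Object.entries(v.fields)
            .map(([k, f]) => `${JSON.stringify(k)}: ${show(f)}`)
            .join(", ")}}`;
      }
    }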

Usually the problem is someone just not knowing how to write Rust. The modal subjective experience of writing Rust code as an experienced user is "Python with nice types, async, and executes fast." If you need to handle stuff with weird non-deterministic lifetimes you can use stuff like `slotmap`. If you need DAGs, use `petgraph`. I think pitching Rust as a "low-level systems language" might be misleading some people on how ~95% of Rust users are actually working in the language.


Let's face it, Rust is better for some things (low-level programming) than others (not-so-low-level programming).

You pay a price in accidental complexity when using Rust and accordingly get some benefit for it (performance).

And hearing this kind of reasoning behind the choice from Anders Hejlsberg (creator of C# and TypeScript) makes me more of a fan of him.


yeah, the whole thought process (imo) makes a lot of sense; and I like that they experimented in a few languages first to see how it would fit


C# has both of those capabilities and more. The answer doesn't make sense.


I was once in charge of a legacy C codebase that passed around data files, including to customers all around the world. The “file format” was nothing more than a memory dump of the C data structures. Oh, and the customers were running this software on various architectures (x86, SPARC - big endian, Alpha, Itanium), and these data files all had to be compatible.

Occasionally, we had to create new programs that would do various things and sit in the middle of this workflow - read the data files, write out new ones that the customers could use with no change to their installed software. Because the big bosses could never make up their minds, we at various times used C#, Go, Python, and even C.

They all work just fine in a production environment. Seriously, it’s fine to choose any of them. But C# stands out as having the ugliest and sketchiest code to deal with it. But it works just fine!

More telling, though: I used this scenario in interview questions many times. How would you approach it with C#? 99% of the time I would get a blank stare, followed by “I don’t think it’s possible”. If you want developers that can maintain code that can do this, perhaps don’t choose C# :)
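
For anyone who hasn't dealt with this kind of format: the core of the job is reading the dumped struct's fields at fixed offsets with an explicit byte order, rather than trusting the host's own layout. A rough sketch of the idea, here in TypeScript with DataView rather than any of the languages above, using a made-up record layout:

    // Hypothetical record dumped from C as:
    //   struct { uint32_t id; double value; char name[16]; }
    // On the original compiler this laid out as id at offset 0, 4 bytes of
    // padding, value at 8, name at 16 (32 bytes total), little-endian.
    function readRecord(buf: ArrayBuffer, offset: number): { id: number; value: number; name: string } {
      const view = new DataView(buf, offset, 32);
      const id = view.getUint32(0, /* littleEndian */ true);
      const value = view.getFloat64(8, true);
      const nameBytes = new Uint8Array(buf, offset + 16, 16);
      const name = new TextDecoder().decode(nameBytes).replace(/\0.*$/, "");
      return { id, value, name };
    }

The same dance exists in C#, Go, and Python; only the amount of ceremony differs.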


Ask anyone maintaining an even remotely interesting project in C# on GitHub and the answers would likely have surprised you.


Two things that have been brought up in interviews. They don't seem to believe that AOT-compiled C# is mature enough to give the best possible performance on all their supported platforms. And their current codebase consists of more or less pure functions acting on simple data structures, and since they want the port to be as 1:1 as possible, idiomatic Go is closer to that style than idiomatic C#.

See this thread https://news.ycombinator.com/item?id=43332830 for much more discussion.


When you code using regular OOP it's def not the case. Go struct memory layout is very straightforward.


Requiring all TypeScript users to install the .NET runtime will probably kill adoption, especially on Linux build servers. It still requires custom Microsoft repos, if they're even available for your distro, and is barely upstreamed.

For Go, you just run the binary without any bullshit. This can easily be wrapped in an npm package to keep the current experience (`npm install` and it works) on all platforms.
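
This is roughly how esbuild distributes its Go binary today: a small launcher package whose bin script resolves a per-platform binary shipped as an optional dependency. A hedged sketch of that pattern (the package names here are made up):

    // bin/launcher.ts - hypothetical npm launcher for a native compiler binary.
    // Each platform binary lives in its own package, e.g. "@example-tsc/linux-x64",
    // published as optionalDependencies so npm only downloads the matching one.
    import { spawnSync } from "node:child_process";
    import { createRequire } from "node:module";

    const require = createRequire(import.meta.url);
    const pkg = `@example-tsc/${process.platform}-${process.arch}`;

    let binaryPath: string;
    try {
      // Assumes each platform package exports the absolute path to its binary.
      binaryPath = require(pkg);
    } catch {
      console.error(`No prebuilt binary available for ${process.platform}-${process.arch}`);
      process.exit(1);
    }

    const result = spawnSync(binaryPath, process.argv.slice(2), { stdio: "inherit" });
    process.exit(result.status ?? 1);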


Modern .NET usually ships the runtime or embeds it inside the binary. This is very different from the old Windows-only .NET Framework.



C# also has a runtime you have to ship with any binaries.


So does Java.


After getting the initial M1 Air, I am still struggling to find a reason to replace it. Still going strong with no hiccups!


With the M1 Air, Apple had to blow us away. People, including me, had a hard time believing Apple's claims, and many were coping by looking at the keynote charts and assuming that Apple must have tricked everyone by not giving proper scale metrics etc.

When people got their hands on the real device, it was slaying almost everything on the market, and soon it was clear that this thing was a revolution.

You don't one-up this easily. Apple claims a 2X performance improvement over the M1 Air, and I am sure it's mostly true, but the M1 Air was so far ahead that for a lot of people the workloads haven't caught up yet.

At this very moment I have 3 Xcode projects open, Safari has 147 tabs open and is consuming 11GB of my 16GB of RAM, and my SSD lifetime has dropped to 98% due to frequent swap hits, yet I'm perfectly fine with the performance and I'm not looking for an immediate replacement.


I can't imagine 147 tabs. I have 9 pinned tabs and maybe ... 6 other tabs open if I'm particularly busy. I also turn off my work laptop at the end of the day, because all of my state is restored when this handful of tabs comes back.

Maybe this is just me managing my ADHD, but when I see people with hundreds of tabs open I just can't imagine how they work. Every tab has been mashed down to its favicon and I watch them struggle to find the right one. It seems insane to me.


There are two kinds of people. <10 open tabs and >100 open tabs. Nothing in between.

I think of the >100 ones as people who have completely lost control of their lives. I'm sure they think of me as someone who needs everything to be just so and can't deal with the messiness of real life.


How do you manage 147 open tabs and why have them all open at once?


I have several hundred open on my M2 MBA and have no problem. Maybe it's because I use Brave? I don't know but have never had to think too much about it. I also don't have much RAM (either the base amount or up a little).

I do restart my browser once a month or so, if things ever feel less snappy than normal.


I think the question is why rather than physically how. Surely that amount of tabs is useless because you'll have duplicates, irrelevant things, etc.


In Safari, tabs get small up to a point, then they are scrollable. Sure, there are duplicates but it's usually the homepage of HN or Twitter. I close those when encountered.

It looks like this: https://a.dropoverapp.com/cloud/download/5cda0c76-9398-475a-...


Really bad habit: when I’m into something I open some tabs, and if I switch to something else I keep opening a new bunch of tabs.

Once I no longer remember the older tabs I create a tab group from the current tabs in case there’s a tab I care about and start fresh.


Same. Each year I tell myself I'll get the new one. Each year when the new one comes out I notice that for what I use it for my M1 Air is still completely fine.


The real win for me is that M1 MacBook Pros with 64 GB are now sold on marketplaces within my price range.

So yea, same.


Hey, as for OMSCS.

I did some research and I'm deferring for a semester, but tbh my motivation is pretty low. Perception-wise it seems decent, but depending on your circumstances it's definitely a much better idea to do an on-campus programme.


Hey, thanks for talking about OMSCS :)


What do you feel is a good price for that?


In my case, 2000 euros.


This is where I am, too. I have an M1 Pro and I have never loved a computer more. This thing is a beast and just about anything I throw at it is fine. I can't imagine how much better the M4 is. Unless this computer gets stolen or doused with water, I'll probably have it for at least another 3-4 years. Absolutely amazing value for my money.


Nor should you have a reason to replace it. The device is barely 4 years old. There was a time until very recently when laptops would be expected to last 10+ years minimum with minor RAM and SSD updates.


I don’t know when that time was. Hardware and software requirements have been moving fast for just about forever, until actually maybe the past 5 years.

There was never a time when laptops were expected to last 10+ years.


I'm still happily using an 8GB M1 running Firefox in OSX + Firefox/VSCode/NodeJS in a Debian VM. Lots of tabs open. Both OSX and Debian can use compressed RAM.


What software are you using for virtualization? I wasn't impressed with the Apple Silicon options last time I looked.


Yeah that. I only got rid of mine because I wanted the nice mini-LED screen on the 14" MBP. No plans to replace that one any time soon!


My M1 Max Mac Studio also feels very good even though it's probably full of dust and cleaning it isn't reasonable.


agreed, which is awesome, the only thing that worries me is that they will drop support for it earlier than they have to when they want to force people to upgrade eventually. I hope to get 10 years out of my M1


Everyone I know who got an M1 cheaped out on the 8GB model and is now struggling to use a browser with heavy sites and multitasking (Zoom) at the same time.

But also, Apple's upcharge on RAM is disgusting, so it's hard to blame them for picking the lowest-spec model.


Totally an anecdote, but my 8GB M1 runs fine with multiple browsers/tabs, VS Code, and Spotify all open. Usually performance is only an issue for me when working with larger ML models. I wonder why others are getting worse performance? Maybe it's the specific sites they're using?


Could be Chrome vs Safari or FF.


That's me. It's brutal trying to do Unity game dev on this. Constantly run out of memory and can't do much multitasking.


Cries in 4GB MacBook Air 2013 /s

I am fine(ish) with the above setup, I don't know what you are talking about. 8GB is plenty for website browsing.


It isn't, depending on what "web browsing" someone is doing, which can cover a pretty wide range now.

One person's "web browsing" is no browser extensions, a couple of Gmail tabs, some light blog reading, and maybe something as heavy as Reddit.

Another person's "web browsing" is running multiple browser extensions like Grammarly, an ad blocker, etc., along with a bunch of Gmail tabs, plus a bunch of heavy "web apps" (think: Miro, monday.com, Google Workspace/Office 365, Photoshop online), and then 10s-100s of tabs of "research" on top of that.

8GB is quickly becoming unworkable for people who fall closer to the latter group.


> Another person's "web browsing" is running multiple browser extensions like Grammarly, an ad blocker, etc., along with a bunch of Gmail tabs, plus a bunch of heavy "web apps" (think: Miro, monday.com, Google Workspace/Office 365, Photoshop online), and then 10s-100s of tabs of "research" on top of that.

That's computing, not web browsing. And on a not-so-great platform at that.


well those are really apps, the fact that they are running in a browser does not make that browsing.


That's a rough era, new enough to have soldered RAM and old enough that Apple felt ok with 4GB in a base model.


After getting my 2015 MacBook Air 11" I am still struggling to find a reason to replace it. Still going strong with no hiccups!


Do you use it as a laptop, or is it hooked up as a desktop for the most part? If the former, I'd try one of the M series in the same role and see if you notice a difference in ergonomics.


At this time (and historically), I mostly use it as a laptop, but I have also used it as a desktop for long periods with an external monitor. As a laptop, I love that it's so tiny. It’s working very well so far... but I’m afraid that at some point, I’ll have to switch to Linux or OpenCore Legacy Patcher. I’m still on macOS 11 (Big Sur).

MS Office has already stopped updating, along with some other software (though not much, most still updates without issues). As long as Firefox keeps receiving updates for my system, most things will be fine.


macOS up to version 13 runs well using OpenCore even on my 2010 iMac (I did upgrade it with a Metal-compatible GPU).

Looks like the only issue with your MacBook Air may be some Metal problems: https://github.com/dortania/OpenCore-Legacy-Patcher/issues/1...

If those don't look like a problem for you, I'd definitely suggest giving it a try. macOS 13 should give you at least 3 more years of use out of it.

Going beyond macOS 13 I don't think is worth it. macOS 14 is noticeably slower on my 2010 iMac, and there aren't any new features it can take advantage of anyway.


that's a very useful comment, thank you for sharing your experience with older hardware!


The battery life and performance improvements alone would be worth the upgrade to me at that point.


I have a 2015 myself and little things become impossible by design, like actually using the new Passwords app with shared data etc.

