
Not the creator, but I saw this and thought it looked interesting.


What are you trying to use LLMs for and what model are you using?

Depends a lot. I use it for one-off scripts, particularly for anything Microsoft 365 related (expanding SharePoint drives, analyzing AWS usage, general IT stuff). Where there's a lot of heavy, context-dependent business logic it will fail, since there's too much context for it to be successful.
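For concreteness, here's a minimal sketch of the kind of one-off script this works well for. It's a hypothetical example, assuming boto3 is installed and credentials with Cost Explorer access are configured; the date range and the sub-dollar cutoff are illustrative:

    import boto3

    # One-off script: summarize last month's AWS spend by service.
    ce = boto3.client("ce")  # Cost Explorer
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": "2025-09-01", "End": "2025-10-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    for group in resp["ResultsByTime"][0]["Groups"]:
        service = group["Keys"][0]
        cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
        if cost >= 1.0:  # skip sub-dollar noise
            print(f"{service}: ${cost:,.2f}")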

I work in custom software, where the gap between non-LLM users and those who at least roughly know how to use it is huge.

It largely depends on the prompt, though. Our ChatGPT account is shared, so I get to take a gander at the other usages, and it's pretty easy to see: "okay, this person is asking the wrong thing." The prompt and the context have a major impact on the quality of the response.

In my particular line of work, it's much more useful than not. I've been focusing on helping build the right prompts with the right context, which makes many tasks actually feasible that would previously have been way out of scope for our clients' budgets.
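To make "the right prompt with the right context" concrete, a rough sketch; the helper, system prompt, and model name are hypothetical, and it assumes the openai Python client with an API key in the environment:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical helper: pin down the role, the constraints, and the
    # specific business context, instead of asking a bare question.
    def build_messages(task: str, context: str) -> list[dict]:
        return [
            {"role": "system", "content": (
                "You are a senior developer on a custom software team. "
                "Answer only from the provided context; if the context "
                "is insufficient, say what's missing."
            )},
            {"role": "user", "content": f"Context:\n{context}\n\nTask: {task}"},
        ]

    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=build_messages(
            "Draft a migration plan for the invoicing module.",
            "Invoices are generated nightly by a cron job that ...",
        ),
    )
    print(resp.choices[0].message.content)

The point of the helper is that the context travels with every question, so the model isn't left guessing at the business logic.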


For more economics of AGI, see also:

This tweet recapping this paper https://x.com/lugaricano/status/1969159707693891972

This tweet with recaps of various papers presented at "The Economics of Transformative AI" by NBER in Palo Alto a few weeks ago https://x.com/lugaricano/status/1968704695381156142



Frequently submitted, rarely engaged: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

Not quite the same as this submission, but a better place to start for most people.


> I simply don't see the potential in affiliate links.

What makes you not see that potential here? Seems like the obvious answer, at least if this is aimed at being a nice small profitable side project.


... Perhaps I am just not getting enough traffic yet for it to work. I had signed up for Amazon affiliates, but they closed my account due to lack of sales.

I'm not the author, but I like this list.

This is written by Kevin Bryan from University of Toronto. He has good tweets on the economics of AI, too (https://x.com/Afinetheorem).

My recap of the PDF is something like:

1. There are good books about the near-term economics of AI.

2. There aren't many good books about "what if the AI researchers are right" (e.g. rapid scientific acceleration) and the economic and political impacts of those cases.

3. The Second Machine Age: Digital progress boosts the bounty but widens the spread: more relative inequality. Wrong on speed (e.g. self-driving tech vs. regulatory change).

4. Prediction Machines: AI = cheaper prediction, which raises the value of human judgement, its complement.

5. Power and Prediction: Value comes when the whole system is improved, not just from smaller fixes. Electrification's benefits arrived when factories reorganized, not just when they added electricity to existing layouts. Diffusion is slow because things need to be rebuilt.

6. The Data Economy: Data is a nonrivalrous asset. As models get stronger and cheaper, unique private data grows in relative value.

7. The Skill Code: Apprenticeship pathways may disappear. E.g. surgical robots prevent juniors from getting practice reps.

8. Co-Intelligence: Diffusion is slowed by the jagged frontier (AI is spiky): superhuman at one thing, subhuman at another.

9. Situational Awareness: By ~2027, $1T/yr of AI capex, big power demand, and hundreds of millions of AI researchers getting a decade of algorithmic progress in less than a year. (The author doesn't say he agrees, but says economists should analyze what happens if it does.)

10. Questions: What if the AGI-pilled AI researchers are right? What would the economic and policy implications be?


This all sounds like it has been covered in detail by the "AI as a Normal Technology"[1][2] guys (formerly AI Snake Oil - they decided they preferred to engage rather than just be snarky).

Invention vs innovation vs diffusion - this is all well-known stuff.

It's a completely different episteme than the one IABIED guys have ("If Anyone Builds It, Everyone Dies").

I don't think there can be any meaningful dialogue between the two camps.

1. Substack: https://www.normaltech.ai/ book: https://www.normaltech.ai/p/starting-reading-the-ai-snake-oi...

2. "Normal technology" like fire, metals, agriculture, writing, and electricity are normal technologies.


It feels kind of crazy to go from "AI is 'only' something like snake oil" to "AI is 'only' something like fire, metallurgy, agriculture, writing, or electricity" without some kind of mea culpa about what was wrong with their previous view. That's a huge leap: to more or less imply "well, AI is just going to be comparable to the invention of fire. No biggie. Completely compatible with AI as snake oil."


I think the point is more to posit that our civilization will come to normalize AI as a ubiquitous tool extremely quickly like the other ones mentioned, and to analyze it from that perspective. The breathless extremist takes on both sides are a bit tiresome.


The AI hype was 1000% GDP growth per annum. That was crazy. The "snake oil" label was in reaction to that.

Anyway, you are shooting the messenger by downvoting me. Thanks for showing us all how intelligent you are.


Eh. Downvote wasn't me.

But I'd be curious if you could find a quote from anyone for 1000% GDP growth per annum.


If AI researchers are wrong they're gonna have a lot of explaining to do.


TBH it's far more likely they are wrong than right.

Investors are incredibly overzealous not to miss out on a repeat of what happened with certain stocks during the personal computing, web 2.0, and smartphone diffusions.


There's a certain anthropic quality to the idea that if we lived in a doomsday timeline we'd be unlikely to be here observing it.


Humanist, maybe. The anthropic argument is tautological: nothing is a doomsday without there being someone for whom the scenario spells certain doom.


How is it tautological? Some form of it is the very basis of atheism.

Doomsday timelines have lower numbers of observers. In all timelines where you are no longer an observer, i.e. all current doomsday timelines, your observation has ceased.


To repeat myself: if there is no life to experience doom, then whatever happens still happens, but it is not "doom". In other words, doom is a moral construct. Morality only exists when a being draws a line between "good" and "bad"; it is not a real thing that exists.


Merely saying something does not make it so. I feel like you're too far off what I would consider a thread of conversation to continue this, I wish you well though.


they'll just move on to the next grift

quantum? quantum AI? quantum AI on the blockchain?


Quantum AI is definitely an existing research topic.

Not aware of Quantum Blockchain just yet, though.



Alright, we're doomed.

(writing this as someone who works in quantum)


Thanks for the note, yes, I suppose this is a format that's a bit hard to engage with. I'm not the author, but I've interacted with them and think they're very sharp!


My observation is that it's an equation between:

1) reward/incentive/expected good feelings

2) effort/displeasure of doing the thing and the result

One way to increase #1 is to make it more socially involved. If you're working on a project solitarily, start going to events and talking about it with people, or write about it online. Humans are massively socially motivated.

For #2, one way to address this is with emotional processing. Often something is unpleasant because it reminds us of something we didn't like from the past. So really digesting those emotions can allow the expected displeasure to fade, because we kind of integrate it into our brains/bodies. But the key is that it has to be emotional processing, not intellectual processing.


Don’t forget 3) consequences for not doing it


Yes, in my experience the social part of this is not so much the carrot, but the stick. If I don't do this thing, I will look lazy to this person or this person will be disappointed or inconvenienced, etc.

Probably not a healthy outlook!



