Depends a lot. Use it for one-off scripts, particularly for anything Microsoft 365 related (expanding SharePoint drives, analyzing AWS usage, general IT stuff). Where there is a lot of heavy, context-based business logic it will fail, since there’s too much context for it to be successful.
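For concreteness, here’s a minimal sketch of the kind of one-off script I mean: summarizing last month’s AWS spend by service with boto3’s Cost Explorer client. The cost threshold is illustrative, and it assumes boto3 is installed and AWS credentials are configured.

```python
# One-off script: print last month's AWS spend by service.
# Assumes boto3 is installed and AWS credentials/region are configured.
import boto3
from datetime import date, timedelta

ce = boto3.client("ce")  # Cost Explorer

end = date.today().replace(day=1)                 # first day of this month
start = (end - timedelta(days=1)).replace(day=1)  # first day of last month

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if cost > 0.01:  # illustrative threshold to skip near-zero line items
        print(f"{service}: ${cost:,.2f}")
```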
I work in custom software, where the gap between non-LLM users and those who at least roughly know how to use it is huge.
It largely depends on the prompt, though. Our ChatGPT account is shared, so I get to take a gander at the other usage, and it’s pretty easy to see: “okay, this person is asking the wrong thing”. The prompt and the context have a major impact on the quality of the response.
In my particular line of work, it’s much more useful than not. But I’ve been focusing on helping build the right prompts with the right context, which makes many tasks actually feasible that before would have been way out of scope for our clients’ budgets.
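As a hedged illustration of what I mean by the right prompt with the right context (the domain rules, function name, and model are made up for the example, not from any real client):

```python
# Contrast a context-free prompt with a context-rich one using the
# OpenAI Python SDK. All domain details below are illustrative.
from openai import OpenAI

client = OpenAI()

# The kind of prompt I often see on our shared account: no context,
# so the model has to guess at the business rules.
vague = "Write a function to validate an order."

# The same ask with the constraints spelled out up front.
contextual = (
    "You are helping with an order-management system for a wholesale "
    "client. Orders must have a PO number, at least one line item, and "
    "a ship date no more than 90 days out. Quantities are in cases, not "
    "units. Write a Python function validate_order(order: dict) that "
    "returns a list of human-readable validation errors (empty if valid)."
)

for prompt in (vague, contextual):
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative; use whatever model you have
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content[:300], "\n---")
```

The second prompt tends to produce something close to usable; the first tends to produce generic boilerplate that needs rework.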
... Perhaps I am just not getting enough traffic yet for it to work. I had signed up for Amazon affiliates, but they closed my account due to lack of sales.
This is written by Kevin Bryan from University of Toronto. He has good tweets on the economics of AI, too (https://x.com/Afinetheorem).
My recap of the PDF is something like:
1. There are good books about the near-term economics of AI.
2. There aren't many good books about "what if the AI researchers are right" (e.g. rapid scientific acceleration) and the economic and political impacts of those cases.
3. The Second Machine Age: Digital progress boosts the bounty and widens the spread, i.e. more relative inequality. Wrong on speed (e.g. self-driving tech vs regulatory change).
4. Prediction Machines: AI = cheaper prediction. Which raises the value of human judgement, because that's a complement.
5. Power and Prediction: Value comes when the whole system is improved, not just from smaller fixes. Electrification's benefits arrived when factories reorganized, not just when they added electricity to existing layouts. Diffusion is slow because things need to be rebuilt.
6. The Data Economy: Data is a nonrivalrous asset. As models get stronger and cheaper, unique private data grows in relative value.
7. The Skill Code: Apprenticeship pathways may disappear. E.g. surgical robots prevent juniors from getting practice reps.
8. Co-Intelligence: Diffusion is slowed by the jagged frontier (AI is spiky). Superhuman at one thing, subhuman at another.
9. Situational Awareness: By ~2027, $1T/yr AI capex spend, big power demand, and hundreds of millions of AI researchers getting a decade of algo progress in less than a year. (The author doesn't say he agrees, but says economists should analyze what happens if it does.)
10. Questions: If the AGI-pilled AI researchers are right, what will the economic and policy implications be?
This all sounds like it has been covered in detail by the "AI as a Normal Technology"[1][2] guys (formerly AI Snake Oil - they decided they preferred to engage rather than just be snarky).
Invention vs innovation vs diffusion - this is all well-known stuff.
It's a completely different episteme than the one IABIED guys have ("If Anyone Builds It, Everyone Dies").
I don't think there can be any meaningful dialogue between the two camps.
It feels kind of crazy to go from "AI is 'only' something like snake oil" to "AI is 'only' something like fire, metallurgy, agriculture, writing, or electricity" without some kind of mea culpa about what was wrong with their previous view. That's a huge leap, to more or less imply "well, AI is just going to be comparable to the invention of fire. No biggie. Completely compatible with AI as snake oil."
I think the point is more to posit that our civilization will come to normalize AI as a ubiquitous tool extremely quickly like the other ones mentioned, and to analyze it from that perspective. The breathless extremist takes on both sides are a bit tiresome.
How is it tautological? Some form of it is the very basis of atheism.
Doomsday timelines have lower numbers of observers. In all timelines where you are no longer an observer, i.e. all current doomsday timelines, your observation has ceased.
To repeat myself: if there is no life to experience doom, then whatever happens still happens, but it is not "doom". In other words, doom is a moral construct. Morality only exists when a being draws a line between "good" and "bad", it is not a real thing that exists.
Merely saying something does not make it so. I feel like you're too far off what I would consider a thread of conversation to continue this, I wish you well though.
Thanks for the note, yes, I suppose this is a format that's a bit hard to engage with. I'm not the author, but I've interacted with them and think they're very sharp!
2) effort/displeasure of doing the thing and the result
One way to increase #1 is to make it more socially involved. If you're working on a project solitarily, start going to events and talking about it with people, or write about it online. Humans are massively socially motivated.
For #2, one way to address this is with emotional processing. Often something is unpleasant because it reminds us of something we didn't like from the past. So really digesting those emotions can allow the expected displeasure to fade, because we kind of integrate it into our brains/bodies. But the key is that it has to be emotional processing, not intellectual processing.
Yes, in my experience the social part of this is not so much the carrot, but the stick. If I don't do this thing, I will look lazy to this person or this person will be disappointed or inconvenienced, etc.