"We routinely devalue craftsmanship because it doesn't bow down to almighty Business Impact."
I actually disagree with this pretty fundamentally. I've never seen hacker culture as defined by "craftsmanship" so much as by getting things done. When I think of our culture historically, it's cleverness, quick thinking, building quick-and-dirty prototypes in weekend "hackathons", startup culture that cuts corners to get an MVP out there. I mean, look at your URL bar: do you think YC companies are prioritizing artisanal lines of code?
We didn't trade craftsmanship for "Business Impact". The latter just aligns well with our culture of Getting Shit Done. Whether it's for play (look at the jank folks bring out to the playa that's "good enough") or business, the ethos is the same.
If anything, I feel like there has been more of an attempt to erase/sideline our actual culture by folks like y'all as a backlash against AI. But frankly, while a lot of us scruffy hacker types might have some concerns about AI, we also see a valuable tool that helps us move faster sometimes. And if there's a good tool that gets a thing done in a way that I deem satisfactory, I'm not going to let someone's political treatise get in my way. I'm busy building.
>I can for example iterate on the sequence in my initial post and make it novel by writing down more and more disparate concepts and deleting the concepts that are closely associated. This will in the end create a more novel sequence that is not associated in my brain I think.
This seems like something that LLMs can do pretty easily via CoT.
As a fun test, I asked ChatGPT to reflexively give me four random words that are not connected to each other, without thinking. It provided: lantern, pistachio, orbit, thimble
I then asked it to think carefully about whether there were any hidden relations between them, and to make any changes or substitutions to improve the randomness.
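That "substitute out the closely associated concepts" loop can even be sketched mechanically. Below is a toy illustration, not anything ChatGPT actually runs: `similarity` here is a crude letter-overlap Jaccard standing in for real semantic similarity (embeddings, co-occurrence, etc.), and `diversify` and the candidate pool are hypothetical names I made up for the sketch.

```python
def similarity(a: str, b: str) -> float:
    """Crude stand-in for semantic similarity: Jaccard overlap of letter sets."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def diversify(words, candidates, rounds=3):
    """Repeatedly find the most closely associated pair and swap one member
    for the candidate least similar to the remaining words. Stops early when
    no candidate improves on the current worst pair."""
    words = list(words)
    for _ in range(rounds):
        # score every pair; the max is the most closely associated pair
        pairs = [(similarity(w, v), i, j)
                 for i, w in enumerate(words)
                 for j, v in enumerate(words) if i < j]
        worst, i, j = max(pairs)
        rest = [w for k, w in enumerate(words) if k != j]
        # pick the candidate whose worst-case similarity to the rest is lowest
        best = min(candidates,
                   key=lambda c: max(similarity(c, w) for w in rest))
        if max(similarity(best, w) for w in rest) >= worst:
            break  # no candidate makes the list more "random"
        words[j] = best
    return words

print(diversify(["lantern", "pistachio", "orbit", "thimble"],
                ["zephyr", "quark", "fjord"]))
```

With a real similarity measure and a large candidate pool, this is essentially the iterate-and-delete procedure described in the quote, and nothing about it requires introspective access to one's own associations.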
With 60 minutes of talk time included, I kind of get the impression this isn't designed so that you can hand it to your kid and let them spend the day talking to Santa. I'm assuming the idea is that they do this in lieu of writing to Santa, and you would supervise the experience.
Also, if your eight year old is trying to jailbreak Santa, you might have bigger issues to worry about.
>If you believe computers can think then you must be able to explain why a chain of dominoes is also thinking when I convert an LLM from transistor relay switches into the domino equivalent.
Sure, but if you assume that physical reality can be simulated by a Turing machine, then (computational practicality aside) one could do the same thing with a human brain.
Unless you buy into some notion of magical thinking as pertains to human consciousness.
No magic is necessary to understand that carbon & silicon are not equivalent. The burden of proof is on those who think silicon can be a substitute for carbon & all that it entails. I don't buy into magical thinking like Turing machines being physically realizable b/c I have studied enough math & computer science to not be confused by abstractions & their physical realizations.
I recently wrote a simulation of water molecules & got really confused when the keyboard started getting water condensation on it. I concluded that simulating water was equivalent to manifesting it in reality & immediately stopped the simulation b/c I didn't want to short-circuit the CPU.
There's a marked difference between running a Twitter-like application that scales to even a few hundred thousand users, and one that is a global scale application.
You may quickly find that, network effects aside, you'd be crushed under the weight of unexpected bottlenecks in the network you desire.
Agreed entirely but not sure that's relevant in what I'm replying to.
> we are at the point where you can install an app, tell the agent you want something that works exactly the same and just let it run until it produces it
That won't produce a global-scale application infrastructure either, it'll just reproduce the functionality available to the user.
>And you can't ask "why" about a decision you don't understand (or at least, not with the expectation that the answer holds any particular causal relationship with the actual reason).
To be fair, humans are also very capable of post-hoc rationalization (particularly when they're in a hurry to churn out working code).
This piece is rather puzzling. The author is essentially claiming that no one can trust their own judgement on AI (or anything?), and that the lack of scientific research means we should be in a "wait and see" pattern.