
Just downvote them and move on. HN gets more like reddit every day, and you can tell it's the reddit sentiment in this thread that's taken over.


I struggle with this a lot so thanks for the hint. I don't entirely agree that good points stand on their own. It's often easy to anticipate the criticism to your point. When arguments are text-based and responses have a tendency to span hours or days, it can be useful to short circuit the argument by just calling out the anticipated criticism. This of course can sometimes lead to comments such as yours, where we go off into a meta side-argument. Or, if you anticipate badly, you unintentionally put even more focus on the anticipated criticism than your original point.

> It's often easy to anticipate the criticism to your point.

You can head off a lot of criticism by not making your point competitive with other reasonable points. I.e. additive to understanding, not subtractive.

Otherwise, you are actually creating the competition between points that you wanted to avoid. And creating your own distractions from your own point.


This is a really cool concept but I unfortunately care too much about privacy to use you as the handler of all of my screen time data. I know that’s partially why y’all mention that it’s not for everybody but I would like to just say it is cool nonetheless!

> Physical switches on devices (much like older Chromebook devices had) used to opt out of the walled garden should be mandated by consumer protection regulations.

I don’t want to live in the same society as the person that wrote this asinine comment with this much confidence. We are just ideologically incompatible


How so? I understand the tension between freedom to tinker and consumer protection. It's OK to assign different values to either of them. And there are definitely ways to reconcile the two positions. Some of that will have to come through nuanced regulations.

For example, it could be regulated that if the switch is flipped (or a fuse is blown irreversibly) on a device, responsibility for the device and its software falls entirely onto the owner. So if they get phished on an unprotected device and lose their life savings, it's entirely on them. Manufacturers and service providers have no obligation to support them.


Once you have enough power to legislate and enforce this, what's to stop a future administration from tightening the ratchet just a little bit further and forcing users to purchase TPM computers with unbreakable DRM and encrypted blobs running who knows what, and no ability for users to modify their system, change hardware or operating systems without either running afoul of the law or losing access to banking and insurance?

My comment (GGGP) was about regulating devices to require physical switches to allow the owner of the device to opt for freedom. I'm not sure where you got DRM-type stuff out of that.

I think efuses being blown by device manufacturers should be illegal.

I think bootloaders that don't allow the device owner to run whatever software they want should be illegal.

I think device owners should be permitted to repair their devices without losing functionality because of DRM embedded in the parts themselves.

I think a physical switch, exercisable only with physical access, should be present on locked-down devices to allow the owner to exercise their ownership over the device. If that means that "attestation" functionality breaks and that causes some third-party software to "break", so be it.

(I think the problem with banks, etc, requiring "trusted" devices is also in the realm of consumer protection, probably in banking regulation. I haven't thought about it deeply.)


Think about it some more. I'm talking about the incremental increases in power coupled with unpredictable administration changes, and how each new increase in federal power creates multiple branches for slightly increasing power even more, until without realizing it, we've let our government slowly move the Overton window right where it needs to be for an authoritarian power grab and restriction of freedoms. We have to be extremely careful about the powers we give our governments, because they do not give them back without a fight, and they're always looking to expand their reach.

Well, you do realize that there are already a lot of laws covering these things, right? If you're this cynical, then you need to realize that stuff like what you describe could be legislated at any time. There's no real barrier.

Obviously, why do you think I'm raising awareness? Right-to-repair is a huge issue across multiple regions and industries, with uneven progress across the US.

In most HN LLM programming discussions there's a vocal majority saying LLMs are useless. Now we have this commenter saying all they need to do is vibe and it all works out.

WHICH IS IT?


I downvoted your comment because of your first sentence. Your point is made even without it.

they lack the plumbing architecture for this.

yawn.

this is so reductive it's almost not even worth talking about. you can prove yourself wrong within 30 minutes but you choose not to.


Sorry, I don't follow. How do you define LLMs?

right because it's going to be $20 a month for everybody around the world. that's how the world works right?

This is a big issue in the short term but in the long term I actually think AI is going to be a huge democratization of work and company building.

I spend a lot of time encouraging people not to fight the tide and to instead spend that time intentionally experimenting and seeing what they can do. LLMs are already useful, and it's interesting to me that anybody is arguing they're just good for toy applications. This is a poisonous mindset, and for an individual it results in a potentially far worse outcome than over-hyping AI.

I am wondering if I should actually quit a >500K a year job based around LLM applications and try to build something on my own with it right now.

I am NOT someone that thinks I can just craft some fancy prompt and let an LLM agent build me a company, but I think it's a very powerful tool when used with great intention.

The new grads and entry level people are scrappy. That's why startups before LLMs liked to hire them. (besides being cheap, they are just passionate and willing to make a sacrifice to prove their worth)

The ones with a lot of creativity have an opportunity right now that many of us did not when we were in their shoes.

In my opinion, it's important to be technically potent in this era, but it's now even more important to be creative - and that's just what so many people lack.

Sitting in front of a chat prompt and coming up with an idea is hard for the majority of people that would rather be told what to do or what direction to take.

My message to the entry-level folks in this weird time period: it's tough, and we can all acknowledge that - but don't let cynicism shackle you. Before LLMs, your greatest asset was fresh eyes and the lack of the cynicism brought on by years in the industry. Don't throw away that advantage just because the job market is tough. You, just like everybody else, have a very powerful tool and opportunity right in front of you.

The number of people trying to convince you that it's just a sham and hype means that you have less competition to worry about. You're actually lucky there's a huge cohort of experienced people who have completely dismissed LLMs because they were too egotistical to spend meaningful time evaluating and experimenting with them. LLM capabilities are still changing every 6 to 12 months. Anybody who has decided concretely that there is nothing to see here is misleading you.

Even in the current state of LLMs, if the critics don't see the value and how powerful they are, it's mostly a lack of imagination that's at play. I don't know how else to say it. If I'm already able to eliminate someone's role by using an LLM, then it's already powerful enough in its current state. You can argue that those roles were not meaningful or important and I'd agree - but we as a society are spending trillions on those roles right now and would continue to do so if not for LLMs.


what does "huge democratization of work" even mean? what world do you people live in? the current global unemployment rate on my planet is around 5% so that seems pretty democratised already?


I've noticed that when people use the term "democratization" in business speak, it makes sense to replace it with "commodification" 99% of the time.

What I mean by that is that you have even more power to start your own company or use LLMs to reduce the friction of doing something yourself instead of hiring someone else to do it for you.

Just as the internet was a democratization of information, LLMs are a democratization of output.

That may be in terms of production or art. There is clearly a lower barrier to achieving both now compared to pre-LLM. If you can't see this then you don't just have your head stuck in the sand, you have it severed and blasted into another reality.

The reason you reacted in such a way is, again, a lack of imagination. To you, "work" means "employment" and a means to a paycheck. But work is more than that. It is the output that matters, and whether that output benefits you or your employer is up to you. You now have more leverage than ever for making it benefit you, because you're not paying that much time/money to ask an LLM to do it for you.

Pre-LLM, most for-hire work was only accessible to companies with a much bigger bank account than yours.

There is an ungodly number of white-collar workers maintaining spreadsheets and doing bullshit jobs that LLMs can do just fine. And that's not to say all of those jobs have completely useless output; it's just that the number of bodies it takes to produce that output is unreasonable.

We are just getting started getting rid of them. But the best part of it is that you can do all of those bullshit jobs with an LLM for whatever idea you have in your pocket.

For example, I don't need an army of junior engineers to write all my boilerplate for me. I might have a protege if I am looking to actually mentor someone and hire them for that reason, but I can easily also just use LLMs to make boilerplate and write unit tests for me at the same time. Previously I would have had to have 1 million dollars sitting around to fund the amount of output that I am able to produce with a $20 subscription to an LLM service.
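
For example (a made-up prompt, just to show the shape of it), a single ask like this gets me the boilerplate and the tests in one pass:

  Write a FastAPI router that does CRUD for a Project resource, including the
  Pydantic request/response models, following standard REST conventions. Also
  write a pytest file that covers the happy path plus 404 and validation errors.

I still review the output like any other diff, but the starting point is already there.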

The junior engineer can do this too, albeit in most cases less effectively.

That's democratization of work.

In your "5% unemployment" world you have many more gatekeepers and financial barriers.


Just curious what area you work in? Python or some kind of web service / Jscript? I'm sure the LLMs are reasonably good for that - or for updating .csv files (you mention spreadsheets).

I write code to drive hardware, in an unusual programming style. The company pays for Augment (which is now based on o4, which is supposed to be really good?!?). It's great when I type print_debug( - at that point it often guesses right as to which local variables or parameters I want to debug - but not always. And it can often get the loop iteration part correct if I need to, for example, loop through a vector. The couple of times I asked it to write a unit test? Sure, it got the basic function call / lambda setup correct, but the test itself was useless. And a bunch of times, it brings back code I was experimenting with 3 months ago and never kept / committed, just because I'm at the same spot in the same file.

I do believe that some people are having reasonable outcomes, but it's not "out of the box" - and it's faster for me to write the code I need to write than to try 25 different prompt variations.


A lot of Python in a monorepo. Monorepos have an advantage right now because the LLM can pretty much look through the entire repo. But I'm also applying LLMs to eliminate a lot of roles that are obsolete, not just using them to code.

Thanks for sharing your perspective with ACTUAL details, unlike most people who have gotten bad results.

Sadly hardware programming is probably going to lag or never be figured out because there's just not enough info to train on. This might change in the future when/if reasoning models get better but there's no guarantee of that.

> which is now based on o4

Based on o4, or is o4? Those are two different things. Augment says this: https://support.augmentcode.com/articles/5949245054-what-mod...

  Augment uses many models, including ones that we train ourselves. Each interaction you have with Augment will touch multiple models. Our perspective is that the choice of models is an implementation detail, and the user does not need to stay current with the latest developments in the world of AI models to fully take advantage of our platform.
Which IMO is.... a cop-out, a terrible take, and just... slimy. I would not trust a company like this with my money. For all you know they are running your prompts against a shitty open-source model running on a 3090 in their closet. The lack of transparency here is concerning.

You might be getting bad results for a few reasons:

  - your prompts are not specific enough
  - your context is poisoned. how strategically are you providing context to the prompt? a good trick is to give the LLM an existing file as an example of how you want it to produce the output and tell it "Do X in the style of Y.file" (a minimal sketch of this is below). Don't forget that with the latest models and huge context windows you could very well provide entire subdirectories as context (although I would recommend being pretty targeted still)
  - the model/tool you're using sucks
  - you work in a problem domain that LLMs are genuinely bad at
Note: your company is paying a subscription to a service that isn't allowing you to bring your own keys. They have an incentive to optimize and make sure you're not costing them a lot of money, which could lead to worse results.

See here for the Cline team's perspective on this topic: https://www.reddit.com/r/ChatGPTCoding/comments/1kymhkt/clin...
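
To make the "style of Y.file" trick concrete, here's a minimal sketch of the same idea done programmatically. It uses the OpenAI Python client purely as an illustration - the file path, prompt wording, and model name are placeholders, and the same thing works in any chat tool by attaching the reference file:

  from pathlib import Path
  from openai import OpenAI
  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
  # an existing file whose structure and conventions the model should imitate
  example = Path("handlers/users_handler.py").read_text()  # placeholder path
  prompt = (
      "Write a handler for the /orders endpoint in the style of the file below. "
      "Match its structure, naming, logging, and error handling.\n\n"
      "--- users_handler.py ---\n" + example
  )
  resp = client.chat.completions.create(
      model="gpt-4o",  # placeholder model name
      messages=[{"role": "user", "content": prompt}],
  )
  print(resp.choices[0].message.content)

The point is the shape of the request: one concrete instruction plus one real file to imitate, instead of a vague ask with no context.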

I suggest this as the bare minimum for the HN community when discussing their bad results with LLMs and coding:

  - what is your problem domain
  - show us your favorite prompt
  - what model and tools are you using?
  - are you using it as a chat or an agent? 
  - are you bringing your own keys or using a service?
  - what did you supply in context when you got the bad result? 
  - how did you supply context? copy paste? file locations? attachments?
  - what prompt did you use when you got the bad result?
I'm genuinely surprised when someone complaining about LLM results provides even 2 of those things in their comment.

Most of the cynics would not provide even half of this because it'd be embarrassing and reveal that they have no idea what they are talking about.


But how is AI supposed to replace anyone when you either have to get lucky or correctly set up all these things you write about first? Who will do all that, and who will pay for it?

So your critique of AI is that it can't read your mind and figure out what to do?

> But how is AI supposed to replace anyone when you either have to get lucky or correctly set up all these things you write about first? Who will do all that, and who will pay for it?

I mean.... I'm doing it and getting paid for it, so...


Yes, because AGI is advertised (or reviled) as such: that you plug it in and it figures everything else out itself, with no need for training and management like humans require.

In other words, did the AI actually replace you in this case? Do you expect it to? Because people clearly expect it, then we have such discussions as this.


You are incredibly foolish to get hung up on marketing promises while ignoring LLM capabilities that are real and useful right now

good luck with that


Tell that to all these bloodbathers. I am trying it out myself and am in touch with reality.

You're trying it out with literally the expectation that it can read your mind and do what you want with no effort involved on your part.

So basically you're not trying it out. Please just put it down, you have nothing interesting to say here


Maybe. But are you aware that no one, at least in management, wants to hear "you must make the effort"?

> What I mean by that is that you have even more power to start your own company or use LLMs to reduce the friction of doing something yourself instead of hiring someone else to do it for you.

> Previously I would have had to have 1 million dollars sitting around to fund the amount of output that I am able to produce with a $20 subscription to an LLM service.

this sounds like the death of employment and the start of plutocracy

not what I would call "democratisation"


> plutocracy

Well, I've said enough about cynicism here so not much else I can offer you. Good luck with that! Didn't realize everybody loved being an employee so much


not everyone is capable of starting a business

so, employee or destitute? tough choice


I've spent a lot of time arguing the barrier to entry for starting one is lower than ever. But if you think your only options are being an employee or being destitute, I will again point right to -> cynicism.

Thanks for saying it out loud. As part of my job I meet a lot of people like you who think the same way, and they aren't willing to say it out loud.

It's about protecting your work, even if an LLM can do it better.

The only way an LLM can devalue your work is if it can do it better than you. And I don't just mean quality, I mean as a function of cost/quality/time.

Anyway, we can be enemies, I don't care - I've been getting rid of roles that aren't useful anymore as much as I can. I do care that it affects people personally, but I want them to be doing something more useful for us all, whatever that may be.


lol “I do care, but not enough to actually care”


Caring doesn't mean that you stop everything you're doing to address someone's needs. That would be a pretty binary world if it were the case, and maybe it's a convenient way to look at motives when you don't want nuance.

Caring about climate change doesn't mean you need to spend your entire life planting trees instead of doing what you're doing.

