
I predict it'll all come crashing down once people start having skin in the game. (Right now, it's a toy, the source for a million overheated news articles and opinions.)

ChatGPT can pass the Bar. Okay, have it draw up a contract and have the parties sign it—skin in the game. When an omitted comma can cost millions[1], what havoc will an LLM's hallucinations wreak?

[1] https://kagi.com/search?q=missing+comma+millions&r=us&sh=QDQ...




Agreed. I don't think it will necessarily crash, but there will be a serious reckoning for any business owner who thinks they can replace critical roles with AI. We're a long way off from that, if ever.

In the short term, I think AI will be most useful in areas where (a) indeterminate results are acceptable, and (b) the consequences of a "mistake" are either non-existent or negligible.

As humans are imperfect, there will undoubtedly be many poorly conceived product decisions to deploy AI prematurely. I do think it will be entertaining to watch.


This is still massively problematic for society and could hollow out low- and middle-skill workers. You know, the ones who are poorly paid, tend to have weak legal and governmental representation, and at the same time catch hate from the remaining taxpayers for being slackers.


For sure. The translation industry is basically dead, a good chunk of copywriting and marketing is on its way out, and I'm sure a slew of other industries are going to be nearly eradicated. The effect it's going to have on the economy will be painful.


Sorry, "crash" is probably hyperbolic. "Deflate" is maybe better. If you can't trust the work output, then paying 10¢ instead of an hour of a FTE's time for it is no consolation.


If by "skin in the game" you mean not having any human oversight, then sure there will be problems. But we already are seeing ChatGPT used to deliver real value. For example, I've used it to help me take care of my parents-in-law. Help answer questions, interpret test results, etc... It's been great and paid for itself 100X (if not 1000X) already.

I also started using it to diagnose a car issue I had. It pointed me down a path, I asked a follow-up question, and it nailed the issue. And I know nothing about cars.

And at work, people are using it to generate first drafts of various communications.

It doesn't have to go from toy to a task where a comma costs you millions. There's a lot in between that.


One of the much-touted aspects is that it'll create programs for you (or parts of programs, if you prefer). Eventually, one suspects, you'll be able to specify decent requirements and get a significant volume of code that realizes them.

Do you deploy it? The overheated hype says, "Ship it!" More measured people would say you test it in a sandbox environment. I've heard some say they'd review the code in addition to testing it. The end result is something running on an end user's or customer's computer.

If your program goes down, does ChatGPT provide support? No, of course not. You'll need people to troubleshoot and resuscitate the program. (Or maybe it'll be self-healing, smh)

If a bug arises (can a bug even happen in ChatGPT code?), then you'll need someone to verify the bug; to address the issue with the end user or customer; to re-prompt with the additionally specified requirement (or, I suppose, in this fantasy ChatGPT will propose the requirement after a prompt indicating the bug); and then to re-deploy the program.

If it's an ecommerce site and the program has a security vulnerability (if that's even possible, smh), then you need someone to recognize the intrusion, determine the vulnerability, re-prompt with the vulnerability specified, and deploy the updated version. Replace "security vulnerability" with "fraudulent transaction" and repeat.

I can hear your question, "how is that different from today since we experience all of the above with people," and the immediate answer is "accountability." You can't fire ChatGPT or even yell at it. It's as if you slide requests under a closed door and get stuff back the same way.

The whole setup requires trust—same as today—except that it's a full-throated trust. You either succumb to ¯\_(ツ)_/¯ or you spend 2x (or more) verifying the result. (I'll just throw out some of my other concerns without elaboration: a) there's more to deploying code than just generating it, b) much of modern programming is integration, and c) the training models will constantly evolve, so the same prompt at time x might yield a very different program at time y.)


There are a ton of places LLMs are already providing value today. Some of the biggest: turning unstructured data and user intent into structured data; helping with writing (not replacing it); and certain tasks in software development (it is often much faster to use ChatGPT as a reference or guide than to search Google and sift through results of ever-decreasing quality).
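To make the first of those concrete, here's a minimal sketch of intent extraction, assuming the OpenAI Python client's ChatCompletion API and an API key in the environment (the model name, prompt, and JSON keys are illustrative, not any particular product's schema):

  # Minimal sketch: turning unstructured text into structured data with an LLM.
  # Assumes the OpenAI Python client (0.x ChatCompletion API) and OPENAI_API_KEY
  # set in the environment; the model, prompt, and schema are illustrative.
  import json
  import openai

  PROMPT = """Extract the customer's intent from the message below as JSON
  with keys "intent", "product", and "urgency" (low/medium/high).
  Reply with JSON only.
  Message: {message}"""

  def extract_intent(message: str) -> dict:
      response = openai.ChatCompletion.create(
          model="gpt-3.5-turbo",
          messages=[{"role": "user", "content": PROMPT.format(message=message)}],
          temperature=0,  # keep the extraction as repeatable as the API allows
      )
      # The model can still emit malformed JSON; parsing doubles as a sanity check.
      return json.loads(response.choices[0].message.content)

  print(extract_intent("My blender arrived cracked and I need a replacement ASAP!"))

Even here you parse and validate the output instead of trusting it blindly, which is the grandparent's point in miniature.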

I'm paying now and would pay more, if only they'd give me API access to the most advanced models. GPT-4 is much better, and Google will have a comparable model soon (tm?).


A first year in a law firm doesn't just draw up a contract and everyone blindly signs it. A first year at a law firm is an expensive GPT-4.


Good lord! This is your view of a human being who has completed 7–10 years of college after 12 or so years of schooling, passed a difficult Bar exam, interviewed with employees of a law firm, and experienced decades of perceptual, conceptual, and existential interaction with reality and society? "Expensive GPT-4"

(This is where my "AI" fear mostly comes from: glib assessments of its ability coupled with devaluing of actual human intelligence. That, and a singularity-like cult treating it as oracular.)


One may reasonably respond:

''' This is your view of a piece of technology that cost hundreds of millions of dollars to train and thousands of person-years of research before that?

This is where my fear comes from: glib assessments of this technology coupled with an unwavering belief that human intelligence is untouchable. '''


You could respond that way. I'm just not sure where, in my response, I devalued ChatGPT in the way you dismissed a "first year." (I will cop to regarding humans and their needs as superior to any technology.)

In fact, my entire "skin in the game" criticism is based on the idea that it _will_ be used that way: that it's powerful enough that people will invest too much hope, foresight, and insight in it. I have the utmost respect for the work being done and the highly-delimited benefit it provides.

I just don't regard it as "intelligent" nor do I believe it is the path to AGI.


It's already being used that way, because it's hugely valuable. People with skin in the game want to save money. Some partners at firms are concerned that it will be used so extensively that no one will be able to become a good lawyer, because one needs to go through the stage of doing a lot of grunt work for it all to sink in. A partner at a firm commented to me, roughly: "This is amazing. I'd never have been able to become the lawyer I am without doing what this does, though. I wonder what will happen."



