One of the much-touted capabilities is that it'll create programs for you (or pieces of programs, if you prefer). Eventually, one suspects, you'll be able to specify decent requirements and get back a significant volume of code that realizes them.
Do you deploy it? The overheated hype suggests, "Ship it!" More measured people would say that you test it in a sandbox environment. I've heard some say that they'd review the code in addition to testing it. The end result is something running on an end user's or customer's computer.
If your program goes down, does ChatGPT provide support? No, of course not. You'll need people to troubleshoot and resuscitate the program. (Or maybe it'll be self-healing, smh.)
If a bug arises (can a bug even happen in ChatGPT code?), then you'll need someone to verify the bug; to address the issue with the end user or customer; to re-prompt with the additionally specified requirement (or, I suppose, in this fantasy ChatGPT will propose the requirement after a prompt describing the bug); and then to re-deploy the program.
If it's an ecommerce site and the program has a security vulnerability (if that's even possible, smh), then you need someone to recognize the intrusion, determine the vulnerability, re-prompt with the vulnerability specified, and deploy the updated version. Replace "security vulnerability" with "fraudulent transaction" and repeat.
I can hear your question: "How is that different from today, since we experience all of the above with people?" The immediate answer is "accountability." You can't fire ChatGPT or even yell at it. It's as if you slide requests under a closed door and get stuff back the same way.
The whole setup requires trust—same as today—except that it's a full-throated trust. You either succumb to ¯\_(ツ)_/¯ or you spend 2x (or more) verifying the result. (I'll just throw out some of my other concerns without elaboration: a) there's more to deploying code than just generating it, b) much of modern programming is integration, and c) the training models will constantly evolve, so the same prompt at time x might yield a very different program at time y.)