
I hear this a lot but I don't find it convincing. If I know enough to know if the answer is good or bad, why would I be asking a chat bot? We could say this problem is somewhat analogous to sorting good StackOverflow answers from bad ones. But I feel that there are things like writing and presentation style (not to mention the votes and comments) that could tip me off to a bad answer, and those are absent when every answer is in the same voice.


> If I know enough to know if the answer is good or bad, why would I be asking a chat bot?

If you are trying to write a blog/essay/etc on a topic, it can help with “writer’s block”: you may know a lot about the topic already, but struggle with putting that understanding into words. And then it isn’t hard to filter ChatGPT’s output based on your pre-existing knowledge.

Sometimes I don’t know the answer to the specific question I am asking it, but still have enough background knowledge of the topic to pick up when it is likely to be “hallucinating”.


Does it help writer’s block by writing for you? If so, are you still writing?


What if you take what it writes as a starting point, adding to it, removing bits from it, rewording it, etc? Surely then you are indeed “writing”.


Eh, I don’t think that’s writing. At the very least, everyone should list ChatGPT as a coauthor.


No, then you are plagiarising.


What counts as “plagiarism” depends on the context. It is (in most cases) unethical to present the work of another human being as your own, but what about an AI? In an educational setting, it would be unethical to turn in an AI’s work as your own. But if a professional programmer uses GitHub Copilot, are they obliged to acknowledge Copilot’s contribution? If I have a personal blog, and ChatGPT helps me write a post, am I obliged to acknowledge ChatGPT’s contribution?


I think the bigger picture to me is whether you are being disingenuous, rather than whether you are technically plagiarizing. Thought experiment: suppose I tell people I’m great at multiplication, but I don’t tell them I have an earpiece / mic with someone using a calculator on the other end. If I convince these people I have a great mind and they respect my math skills, have I not fooled them? As for the Copilot argument, I’m not convinced it shouldn’t be required to cite.


I don’t think the purpose of most writing is convincing people you’re a great writer.


Me neither. It’s to convince people you’re a great intellect. That, or to record a historical archive.


People write for many different reasons. Sometimes it is because they believe in an idea (political, ethical, philosophical, religious, etc), and their main objective is to convince others to believe in that idea too; what you may come to think of their abilities as an author is rather beside the point.


Somewhat of an open question, really. I think for all of blogging we have concluded that the words in your blog post were in fact written by you. I see two reasons plagiarism is bad: it is both an injustice to whoever wrote the original and a false indicator of your cognitive and creative ability.


> I think for all of blogging we have concluded that the words in your blog post were in fact written by you.

If "we" concluded that it was a little naive since ghostwriting predates the blog.


I have to give you that. That is a good point. I definitely am or was naive enough to think personal blogs were personally written.

Man, now I’m questioning how ethical ghostwriting is. Not sure that was your intention but that is thought provoking.


Well, I think it is more challenging than that. Even if the author writes the piece himself, most professional writing is subject to editing, which can substantially change the finished product. The one thing we can be reasonably, if not entirely, certain of is that the piece is one the author is comfortable endorsing as his own.


You can argue plagiarists are comfortable endorsing the work as their own, but that seems beside the point.


Early days, but I am finding OpenAI better than StackOverflow for reminding me of the libraries and syntax of programming languages.

I have tried it with Rust, Dart, and Swift.
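
For example (a made-up exchange of my own, not a transcript): ask it how to group a list of words by first letter in Rust and it will sketch out the HashMap entry idiom, which is far easier to verify than to recall cold:

  use std::collections::HashMap;

  fn main() {
      let words = ["apple", "avocado", "banana"];
      let mut groups: HashMap<char, Vec<&str>> = HashMap::new();
      for w in words {
          // entry() gives a default empty Vec on first sight of a key
          let first = w.chars().next().unwrap();
          groups.entry(first).or_default().push(w);
      }
      println!("{:?}", groups); // e.g. {'a': ["apple", "avocado"], 'b': ["banana"]}
  }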


I can see that being useful, though I’d prefer something integrated with my development environment along the lines of Copilot.


It’s easier to prove an answer correct using Google than it is to find one from scratch.

ChatGPT can often show you when the premise of the question is wrong or assumes things that are not true, making it easy to search on Google afterwards.


Exactly. I assume Google or others will combine these together in some way soon. It is a pretty obvious win.


> If I know enough to know if the answer is good or bad, why would I be asking a chat bot?

I mean, taken to the extreme, I can probably read through the source of VLC and know that it's correct, given enough years to read it and study video compression standards etc. Does that mean I don't get use out of someone else having written VLC for me?

Knowing something is right and producing it are completely different things. You might be thinking of ChatGPT too narrowly, as a simple question-answer thing, but even now you can ask it to write code that will save you time, and scale it up by a few factors and it's doing the equivalent of writing new libraries for you. (Probably many years before it can write VLC for you, if ever.)


This is not the same thing at all. You know whether a tool like VLC works because it has a pretty well-defined scope for "works": it plays the video or audio file you clicked on.

If you're asking ChatGPT to teach you something, you have no such easy verification you can do: you essentially need to learn the same material from another source in order to cross-check it. Obviously this is easy for small factual questions. If I ask ChatGPT the circumference of the Earth, I can quickly figure out whether it's reliable or not on that point. But at the other extreme if I ask it to do a music theory analysis of the Goldberg Variations, it's going to take me about as much work to validate the output as to have just done the analysis myself.


I don’t think learning in many situations is as black and white as you assert.


I'd suggest just trying it yourself for a bit to see if you can find any use for the tool. If not, that isn't a problem either - I suspect it is completely useless in some domains.


I amused myself by trying to think of funny things for it to write but it was a bit hit-and-miss and now they want me to pay to fiddle with it more.


I remember playing around with it first and doing similar things with the openai playground. It is amusing but the novelty of this type of usage wears off quickly.

I mostly use it for programming-type work now: write a little snippet of code (personal projects only) or tell me how to use a library. I also use it to teach me things. I find it incredibly useful for this.

ChatGPT is still free for me, but it is glitchy at times. I would definitely pay the $20/month once it is available in my region.


I know what a clean house looks like, but I'd still love a machine to clean my house. ChatGPT's purpose is not to understand complex ideas; it is to do the tedious task of joining disparate such ideas together in an intelligible way. It is much faster to have it do 95% of the work, with you only modifying the 5% that's bad, than for you to do 100% yourself.


That’s a bad analogy. It would actually rather be like: I know what a clean house looks like, but I’d still love a machine to describe it to me.

However, I think you are somewhat on the right track, as “AI” shows us which types of work can be removed altogether. I think, if “AI” can do it, it’s not necessary work to begin with. E.g. instead of “AI” doing formal correspondence for us, we could then just have an interface to exchange raw information directly, as the human element is lost anyway.


I don't think you understand how analogies work.



