
Given that ChatGPT was just released last fall and it was expected to revolutionize practically everything, isn't it still telling that usage is falling? Where are the scientists, lawyers, doctors, office workers etc? You'd expect a slowing in the rate of growth, not shrinking.

I don't know; if high school and undergrad students being off for the summer is enough to shrink your app, maybe it's not all that?

Edit: I understand there are new tools coming, but most of them aren't out yet. GPT-4 was just released. For the most part, if you want to play with AI, ChatGPT is it. If they're really experiencing a decline in users, that's not great for them.



I see people using ChatGPT, Bard, etc. to answer questions on mailing list threads. It'll be a long thread of thoughtful, human-written responses, and then someone will come along and say "I asked Bard what it thought about this issue, here's the response". And then out come five paragraphs of text that I might as well not even bother reading, as they often contain falsehoods that I have to waste time looking up and double-checking. It bothers me so much that these LLMs don't have any inherent understanding of truth or facts, so the only thing they're really good at is writing fiction (e.g. I hear they're pretty good as a DM's assistant for fleshing out flavor/scenario descriptions and such). The Midjourney-generated images fall into the same fiction category and are generally pretty good too.

But for actual non-fiction usage, you have to spend so much time triple-checking everything they say to ensure that it isn't simply complete nonsense. What's the point?


The point is you need to collaborate with the LLM, not just expect it to provide fully-formed answers. I've saved countless hours of work over the past few months using GPT-4 to help me solve problems. One general framework is to use it so that you're googling a solution instead of a problem. Googling a problem gives you all kinds of results, which you then need to dig through and understand in order to evaluate whether there's something that can be adapted to your use case. On the other hand, you can ask GPT-4 for candidate solutions to the problem and then google those, which cuts the process down significantly. Of course that's only one example. The pattern, though, is that you need to play to its strengths. It's not particularly smart, but it's very knowledgeable. Just like with a human, you can't expect it to perfectly regurgitate all the details. You use it for direction, and then refine from there, through conversation with the LLM as well as external research.


It really is astonishing how much you can get done this way. I've been setting up a home lab for myself, and the answers GPT-4 gives are miles ahead of the Stack Overflow results or the apps' documentation or whatever else. Rarely (very rarely) it will give me a wrong answer, but then I paste in the error message or describe the problem I had, and it almost always comes up with the correct answer on the second try. The final step, if it's still not working, is asking where I might learn more, and GPT-4 always gives me a better link than Google.

I'm convinced the people who say it's nothing but a BS machine have never tried to use it step by step for a project. Or they tried to use it for a project most humans couldn't do, and got upset when it was only 95% perfect.


I disagree with that. It's very useful for writing boilerplate and documentation, but two thirds of the time, when I'm in front of a bug I'm too lazy to understand and I ask ChatGPT, with context and all, the answer is wrong. I can fiddle with it to reduce that to a third of the time, but in the end, only the questions that are really, really hard to figure out on your own are left.

Still, it's way better and more efficient than Google. Less efficient than not being lazy and using my two braincells, tbh.

My newest use is:

Hello, I'm working on X, I use Y tech, my app does Z, and I want to implement W. Can you provide a plan on how and where to start?


I agree with this. This is my primary use as a new analyst. Weird things that would take lots of time to dig through Stack Overflow to find, I can find pretty quickly if I feed it the parameters I'm working within and what I'm trying to get to. Usually it just fills the gap that Google was filling before, but much better, in my opinion.


What tech stack(s) do you work with?


Sorry I'm a bit late. It depends. Professionally, it's a mix of Python and TypeScript (those I practically never use ChatGPT for; or rather, I use it for the questions I'd usually ask Google/Reddit/SO), plus Terraform/Terragrunt on AWS, with some Cisco config and some other hardware stack I don't remember, but one that requires custom Terraform providers. I automate the deployment of the hardware, so writing custom providers and Terraform is roughly a third of what I do, and I cannot use ChatGPT for that; its output is way too bad.

Personally, a lot of bash, C, and AWK at the moment (TypeScript + HTML/CSS until last April; now I'm back to the basics). The figures I gave in my post were more for that.

The last time I used it was yesterday. I wanted to hack something in an old game I run through Steam + Proton. I knew it was a weird Wine prefix, so I asked ChatGPT about it. I might have asked poorly, but after fiddling, I had the answer (tbh I had to look up how to get the game ID, so in the end I lost more time than not). Then, when it still didn't work because the path was broken, I entered all the necessary context into ChatGPT-4, and it couldn't find the easy "USER=steamuser" env variable to add before launching Wine. I stopped after 10 minutes, looked into an example Wine cfg file, understood the issue, and fixed the problem myself.

I mean, it's probably good for really basic stuff, so it could have helped me when I was starting, but 80% of the stuff I code automatically without really thinking about it, and when I have to stop and think, ChatGPT isn't helping. Also, tbh, VS Code is really, really good and fixes my old, time-consuming task of "what's this argument again?"


Oh come on. I fed Unreal C++ engine code to ChatGPT-4 and it couldn't understand inheritance in Slate classes, and therefore kept offering me the same broken solution for a parameter with the wrong type.

The Unreal engine code is documented and publicly available for OpenAI to ingest, and it still gets the basics wrong.

I wasted hours trying to get it to explain to me what I didn't know. If it doesn't understand the internals of Unreal, I have no hope for it on bigger and better codebases.

It doesn't parse, it doesn't explain, it does not grok. It guesses at best, and the blood-sucking robot-horse is not telling the truth.


> It doesn't parse, it doesn't explain, it does not grok. It guesses at best, and the blood-sucking robot-horse is not telling the truth.

In my experience with coding (I've only done JavaScript and Python myself), you have to tell it to explain and grok. It takes on the role you give it. Even just saying something like "you are a professional Unreal developer specializing in C++, I am your apprentice writing code to (x). I want you to parse the following code in chunks, and tell me what might be wrong with it" before typing your prompt can help the output immensely. It starts to parse things because it's taken on the role of a teacher.
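For what it's worth, the same priming carries over if you drive the model through the API instead of the chat UI: the role text goes in the system message. A minimal sketch using the openai Python library (the 0.x-era ChatCompletion interface; the role wording and the snippet variable are just placeholders, not from any actual session):

    import os
    import openai  # pip install openai (0.x-era interface assumed)

    openai.api_key = os.getenv("OPENAI_API_KEY")

    snippet = "..."  # ~20 lines of code with one clear purpose

    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            # The role priming goes up front as the system message.
            {"role": "system",
             "content": "You are a professional Unreal developer specializing "
                        "in C++. I am your apprentice. Parse the following "
                        "code in chunks and tell me what might be wrong."},
            # Then feed the code in small, purposeful pieces.
            {"role": "user", "content": snippet},
        ],
    )
    print(resp["choices"][0]["message"]["content"])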

People love to hate on the idea of "prompt engineering", but it really is important how you prime the thing before asking it a question. The other thing I do is feed it the code slowly, and in logical steps. Feeding it 20 lines of code with a particular purpose/question will get you a much better answer than feeding it 200 lines with "what's wrong here?". You still need to know 90% of what's going on, and it becomes very good at helping out with that 10% you're missing. But for all I know it is just really bad at C++; that wouldn't surprise me. The things I'm using it for are definitely more simple.


I do think this is why I sometimes get amazing results, and other times I have to go over a snippet of code so often I just give up and do it myself. It's a matter of how the question was asked in the first place.

Knowing that, it makes sense that your prompt should be as specific as possible if you want the results to be as specific as possible.

The best results I got were from feeding it Lisp code that I wanted translated to C (to compile it). It took very little effort on my part because I described what each of the snippets did separately, and what to expect when they were combined and used together.

Through this, I learned that C doesn't have anything akin to Lisp's (ATOM). ChatGPT stated clearly that its version of ATOM should only be expected to work in the code it was writing, and might not work as expected if copied out as a substitute for Lisp's (ATOM) elsewhere.

I asked it to give examples of where it wouldn't work, and it gave me a code snippet using (ATOM) that would not have worked correctly with its version, even though that version worked correctly for my original purpose.
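For anyone who hasn't written Lisp, (ATOM) is the predicate for "this is not a cons cell". A rough Python analogue, purely illustrative and not from the original chat, representing Lisp lists as Python lists:

    def is_atom(x):
        # In Lisp, (atom x) is true for anything that is not a cons cell;
        # NIL (the empty list) counts as an atom too.
        return not isinstance(x, list) or x == []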

Having said that, I myself learned that working with code function by function with ChatGPT, and being explicit about what you need, gives very good results. Focusing on too many things at one time can derail the whole session. One or two intermingling functions works great though.


GPT4 works best when you assume that you're the professional dev with decades of experience, whereas GPT4 is a bright and broadly-informed co-op student lacking in experience in getting stuff working. You have to have a solution in mind, and coach it with specifics. And recognize the tipping point where it takes you more keystrokes of English to say what should be done, than keystrokes in Vim to do it yourself.


I did prompt-engineer, using the 'you are an expert, describe to a student with examples' framing in many different variations.

In my testing, prompts did not unlock an ability in GPT to grok the structure of code.

Empirical testing of LLMs is going to prove and map out their weaknesses.

It is wise to infer from intuition and examples what an LLM can handle, and to leave the empirical map of its capabilities to the academics, for the provable conclusions.


My observation (which could be wrong) is that ChatGPT as a programmer's aid is only useful for the simple cases. Not so much for complex stuff, and certainly not for something as complex as the Unreal engine.


Do you have some sample chat logs of interactions like this you can share? I'm curious to see what kind of stuff it's coming up with, and how you're prompting it.


I think he means something like this discussion I had with it earlier today about assembling squat racks. I literally knew zero about the topic: https://chat.openai.com/share/a74bb56b-fc5b-40ec-90a9-f46cc7...

A few weeks back I was looking into how white supremacy works cause I didn't get it at all. We both came to a nice insight (it's a lot like a business monopoly) https://chat.openai.com/share/930e257f-addd-4371-ac37-370261...


I don't tend to keep the chat logs, as the number of them gets unwieldy very quickly. But examples of things I've done with it that are useful:

I wanted to create a web app, something I haven't done in a very long time. Just a simple throwaway back-of-the-napkin app for personal use. I described what I wanted it to do and asked what might be a good frontend/backend. It listed a few, and I narrowed it down even more. Ended up deciding on Flask/Quasar.

After helping me set up VS Code with the proper extensions for fancy editing, and guiding me through the basic Quasar/Flask setup, it was able to help me immensely in creating a basic login page for the app. Then it easily integrated the OpenAI API into it, with all the proper Quasar sliders for tokens/temperature/etc. Then it created a pretty good CSS template for the app as well, and a color scheme that I was able to describe as "something on Adobe Color that is professional and x and x (friendly, warm, whatever you want to put in)". Everything worked flawlessly with very little fuss, and I'd never used Flask or Quasar before in my life. You can also delve VERY deep into how to make the app more secure, as I did for fun one evening even though it's not going to be internet-facing.
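For a sense of scale, the backend plumbing GPT-4 scaffolds for a login page is on the order of this sketch (a hypothetical minimal version; the route name and the check_password stand-in are mine, not from the actual chat):

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    def check_password(username, password):
        # Stand-in credential check for a throwaway personal app;
        # a real app would hash and look up against a database.
        return username == "admin" and password == "secret"

    @app.route("/api/login", methods=["POST"])
    def login():
        data = request.get_json(silent=True) or {}
        if check_password(data.get("username"), data.get("password")):
            return jsonify({"ok": True})
        return jsonify({"ok": False}), 401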

Another thing I did was go over some pfSense documentation with it. I had some clarifying questions about HAProxy, as well as setting up ACME certificates with my specific DNS provider. It was extremely helpful with both. It also taught me about nitty-gritty settings in the Unbound DNS resolver in a way that's much more informative than the documentation, and helped me set up some internal domains for Pi-hole, Xen Orchestra, etc. with certificates. It also helped me separate out my networks (IoT, guest network, etc.), and taught me about Avahi to access my Hue lights through mDNS. These are things I always wanted to do; I just never felt like going down a Google rabbit hole getting mostly the wrong answers.

The last example I'll give: it was able to help me set up a docker-compose Plex container within Portainer that uses my NVIDIA GPU for acceleration. The only things I had to change from the instructions it gave were the updated NVIDIA driver numbers, and I grabbed the latest docker-compose file. I'd never used Portainer in my life before, nor do I have experience with NVIDIA drivers within Linux, and I feel like learning it was many times faster being able to ask a chatbot questions vs trying to google everything. Granted, I still had to RTFM for the basics, as everyone always should.

I think perhaps my use cases are a bit more "basic" than many HN users'. Like I said, I'm not asking it to do problems most humans wouldn't be able to do, as I know it isn't quite there yet. But for things like XCP-ng, Portainer, Linux scripts, learning software you've never used before, or even just framing a problem I'm having in steps I hadn't thought of, it's been invaluable to me. For me it's like documentation you can ask clarifying questions of. And almost none of the things I've asked it would work at all if it were wrong; I would know immediately.


>Googling a problem gives you all kinds of results, which you then need to dig through and understand in order to evaluate whether there's something that can be adapted to your use case.

Exactly; search engines give you those blue links and short descriptions of the search results, which are not enough to grasp what the website is about. I think what search engines need to do is tackle the complexity of going through search results. Google's PageRank seemed like a silver bullet back in the day, but the most popular websites are not necessarily of the best quality. What we need is to lower the complexity for casual users when they deal with search results.

On the other hand, ChatGPT is like an answer machine that can give you a satisfactory answer on your first try, but if not, you need to talk with it, push it, and explore the answers it gives you, just like you said. I think a ChatGPT-type search engine will be more suitable for people who are "lazy", or for people who don't have time to "Google" and go through search results and look around the web for helpful and useful information.


> I think what search engines need to do is tackle the complexity of going through search results.

This is exactly what I don't want a search engine to do for me. Going through the list of results and evaluating them is an important part of my process, if what I'm trying to do is learn something new.


So how do you differentiate blue links that all look the same? I mean, yes, there is a description for each search result, but that doesn't tell you very much about the website or the quality of the information you are getting. Bing did a good thing with their annotations[0]; my thinking is that something like annotations lowers the complexity of browsing and skimming through search results.

[0] https://blogs.bing.com/search/2022-08/Shopping-Searches-are-...


> So how do you differentiate blue links that all look the same?

They don't all look the same. They all tend to go to different places. I find that it's reasonably easy to spot a great many garbage sites just from their domain name or URL, and that weeds out a large chunk. Ignoring multiple results for the same site also weeds out a large chunk (I only need one of them).

The rest, I just click on and take a look at the page. It's pretty quick and easy to weed out most of the garbage ones with a quick skim.

The rest, I sample, read captions and boxes, skim paragraphs and such to determine if it's along the lines of what I want. That's pretty quick too.

For the most part, it's the same process that you use when researching in a library.

The reason that I want to do this myself rather than outsourcing it is because I'll inevitably learn something in the process that will shift my viewpoint to one that's more targeted or meaningful for the purpose I have in searching.

It doesn't matter how good the engine is at collating and summarizing results -- even if it's perfect, my understanding not only of what I'm looking to learn, but also discovery of important but serendipitous or unexpected knowledge, is lessened.

It's a bit like the difference between reading Cliff's (or Cole's) Notes about a book and reading the book.


Yikes! I mean, a ChatGPT-style search engine is more fit for noobs and the faint of heart (the ones with nothing in the world for them, IMO).


I've tried ChatGPT inside Microsoft's Bing search engine and it is pretty good so far. It has definitely saved me time that I would otherwise lose clicking on search results and skimming through websites looking for useful information.


So many small things go faster. For example, I throw the text output of Windows-Shift-T (the PowerToys keyboard shortcut for screenshot OCR) into ChatGPT with a "remove line breaks" prompt. Yes, I know how to Ctrl-H ^l in Word, and other ways, but they sometimes produce odd results (missing spaces, extra spaces), and GPT is faster.
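For comparison, the same "remove line breaks" cleanup is a few lines of Python if you'd rather do it locally (a rough equivalent, assuming you want to keep blank-line paragraph breaks):

    import re

    def unwrap(text):
        # Join single line breaks into spaces; leave blank-line
        # paragraph breaks (runs of two or more newlines) intact.
        return re.sub(r"(?<!\n)\n(?!\n)", " ", text)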


> then someone will come along and say "I asked Bard what it thought about this issue, here's the response"

At least they tell you where the text came from, so you know to skip it. It's worse when they just post an LLM response as their own.


This would annoy me significantly. Can I ask what field you're in?


I'm a software developer. Working, in part, on AI systems, ironically enough.


I love this, honestly. If I don't notice, that's a win for AI. If I do, then I just treat it as the new Rick Roll.


I'm getting people filing tickets for... let's call them complex, medium-large projects that they want implemented. Helpfully, they're including ChatGPT "instructions" for how to do it. I got my first "but ChatGPT said..." argument about why I'm wrong about something just yesterday.

Somewhat more generally, I've pretty much already decided that if I find people using it to talk to me without telling me, I won't be talking to them. That goes for businesses as well as personal relationships - don't gaslight me, or you will lose the option to do so.


As a fiction author, I don’t feel GPT writes good fiction at all. It’s a non-fiction tool, just one that lies to you constantly. Half of what it says is a lie…


Yes, it’s terrible at writing fiction. Cliché-laden, dull, repetitive prose, and somehow, in terms of actual content, it never generates anything of interest. Its desire to always wrap things up into a happy ending within a couple of paragraphs also means it’s almost incapable of generating conflict. At best it can generate teenage fan fiction — and bad teenage fan fiction, at that.


Why do people use Bard? Like, genuinely, it just gives way worse responses than Bing, especially on Creative mode. The only good thing Bard has going for it is that the frontend programmers knew what they were doing, and it doesn't auto-scroll up while it types.


> And then out comes five paragraphs of text that I might as well not even bother reading, as it often contains falsehoods that I have to waste time looking up and double-checking.

I'm curious whether you feel human-generated content does not contain falsehoods.


I consider code to be non-fiction, and ChatGPT will generate stuff that compiles and works most of the time. There's no need to triple-check the code output.


With the Code Interpreter beta active, it will compile and run that code in the same chat window.


But this is not really an accurate comparison.

"Code" is really a much much much smaller and much much much more structured output than "English words".

Presumably, the system was trained with a very small amount of "untrue code" in the sense of stuff that just absolutely could never work. And also presumably, it was trained with a lot of free-form text that was definitely wrong or false, and highly likely to have been originally created to be purposefully misleading, or at a minimum, fiction.

That the system outputs reliable code tells us nothing about its current ability to output highly reliable free-form text.


But it understands English words and translates the meaning behind them into code very well, particularly with a bit of iteration. Simply taking the interface from writing code to speaking in plain language is a huge practical accomplishment.


> Given that ChatGPT was just released last fall and it was expected to revolutionize practically everything, isn't it still telling that usage is falling?

It's not, though. ChatGPT, the website, is basically an (increasingly non-exclusive, given Bing AI) frontend. What is claimed to be revolutionary is the underlying models (in the narrow view) and similar generative AI systems (in the broader view). As more applications are built with either OpenAI's own underlying models or those from the broader space, the ChatGPT website should be expected to represent a smaller share of the relevant universe of use.


This is the correct viewpoint. ChatGPT is one specific implementation of the technology, which was most people's first exposure to it. The broader applications of the technology itself are still in the very early stages, but are already making significant impacts.


Honestly, does it really matter much if their B2C usage is primarily students?

I’m more curious about their B2B operations, which are likely what will trickle into more people’s lives. Their APIs seem to enable quite a few interesting possibilities with a low up-front technical investment.

To me, the whole "generate me a bunch of text and display it as text" thing is a niche use case for a lot of people. Integrations for web search, document search, summarization, data extraction/transformation, and sentiment analysis are more useful and less likely to have hallucinations affect the end product.
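As an illustration of how small such an integration can be, here is a hedged sketch of a sentiment-analysis call using the openai Python library (0.x-era interface; the model name and prompt wording are my assumptions, not anything from the thread):

    import os
    import openai

    openai.api_key = os.getenv("OPENAI_API_KEY")

    def sentiment(text):
        # Ask the model for a one-word sentiment label.
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": "Classify the sentiment of the user's text as "
                            "positive, negative, or neutral. Reply with one word."},
                {"role": "user", "content": text},
            ],
            temperature=0,
        )
        return resp["choices"][0]["message"]["content"].strip()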

Regarding their revenue, I'm curious how the Azure-hosted OpenAI services work out for OpenAI. Billing is all through Microsoft, and the documentation tries really hard to make it clear these are Azure services. I wonder if Microsoft just pays a licensing fee or if there is some revenue sharing going on.


> Honestly does it really matter much if their B2C usage is primarily students?

For ChatGPT specifically, yes, because it keeps getting hyped so much, and I think B2C is going to be what ChatGPT ends up with. My company isn't known for its tech innovation, and we're spinning up our own LLM based on our own data. It doesn't need to be super fast because I doubt we're going to go the chatbot route. It will likely be for content generation, so less horsepower is fine. No need to pay OpenAI for excess capacity.

They could go public now with an unreal valuation based on the hype of B2B usage that may never materialize. Revealing that you're mainly a cheat/study tool for students puts you in a box with Chegg and others, at a much lower valuation.


> isn't it still telling that usage is falling?

No. Every tech company at the moment is scrambling to build LLMs into their product. That's where the real value is going to be.


> If they're really experiencing a decline in users that's not great for them.

If it's the case that "there are new tools coming, but most of them aren't out yet" - and I believe it is[0] - then the overall userbase of ChatGPT doesn't matter to OpenAI, because soon enough the same models will come back with a vengeance, in a different, more streamlined form.

In fact, I feel that the major change will happen if and when Microsoft gets their Office 365 and Windows copilots working and properly released: they'll have instant penetration into every industry, including scientists, lawyers, doctors, and office workers.

--

[0] - It's been only a few months. Between playing around, experimenting, then developing, testing, and marketing a tool, there just hasn't been enough time to do all of that.


These metrics are based on traffic to chat.openai.com.

Speaking for myself, I used to use the crappy chat interface but I now exclusively use tooling I've built up around their API.

So, n=1, I'm using OpenAI much more, despite using chat.openai.com less.


I've always seen it as a toy, programmed by humans, with human fallacies. It's a fun search engine for sure, but I think it's a decade (at least) until it becomes practical and useful.


The GPT-4 API has 5x'd or even 50x'd some of my tasks. It's easily the most useful tool I've used in my entire life.


Was ChatGPT supposed to revolutionize everything or was it the harbinger of LLMs?


ChatGPT is the Killer App that made people notice the potential of LLMs that had been brewing for several years.


I see it as multiple things combining to reach a critical mass. Yes, it was the killer app, but it was also the first time an LLM was actually good enough to chat with. Put another way: it's the shittiest LLM application going forward, or the worst LLM application that people are willing to pay for. The hype isn't specifically for ChatGPT but for the trendline of LLMs and LLM-adjacent tech projected forward, combined with still-falling compute costs and further democratization.


Another factor, in my mind, could be users migrating to apps that use the GPT API instead of the ChatGPT UI itself.

Only they can tell, though.



