Upon logging in, I noticed I'd suddenly been downgraded to the free plan even though my billing is current. People are reporting that they've also lost Plus subscriptions and all chat history.
You're probably not actually downgraded; the UI defaults to the free plan and only switches to "Plus" once the API confirms you're subscribed. Same for chat history and the rest. Give them some time to restore the backend and it will almost certainly come back.
Personally, I saw "You've been flagged for suspicious traffic" for a few moments before the whole thing went down. I'm not using any extensions or anything extra, just plain vanilla ChatGPT Plus, and I've been using it almost daily since about a week after the GPT-4 launch, with no malicious usage at all. So I'm guessing that was also a temporary message caused by some state shenanigans.
It's not getting close to GPT-4. It's getting closer on synthetic benchmarks, but spend any non-trivial time with both and you'll quickly realise GPT-4 is still leagues ahead, especially for writing code and more complex reasoning. Which makes sense, since GPT-4 has orders of magnitude more parameters.
Don't get me wrong, it's still remarkable that we already have LLMs that can be run on consumer-grade hardware and get anywhere near GPT-3.5/4 levels. But if you want the absolute highest quality of output, GPT-4 is still the way to go.
I have found it pretty decent at explaining math and physics concepts and at generating some basic code. It seems over-tuned for code generation (on purpose), as it sometimes inappropriately generates code when asked non-code questions.
Overall, it performs better than GPT-3.5-turbo in many use cases. The GPT-4 comparison is harder to quantify, as there are multiple versions of GPT-4 which are rumored to produce significantly different outputs.
It's definitely worth giving it a shot. The 34B parameter count makes a big difference, and it's been found that you're still better off heavily quantizing a larger model than running a smaller model unquantized.
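Rough numbers, if it helps. This is just a back-of-envelope sketch of weight memory (it ignores activations, KV cache, and per-block quantization overhead, so real usage runs somewhat higher), but it shows why a heavily quantized 34B model can fit where a smaller full-precision model would:

```python
# Back-of-envelope weight-memory math: bytes ~= params * bits / 8.
# Ignores activations, KV cache, and quantization block overhead.
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(f"34B @ 4-bit : ~{weight_gb(34, 4):.0f} GB")   # ~17 GB
print(f"13B @ 16-bit: ~{weight_gb(13, 16):.0f} GB")  # ~26 GB
print(f" 7B @ 16-bit: ~{weight_gb(7, 16):.0f} GB")   # ~14 GB
```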
I have no idea what to do right now; I use ChatGPT all day while studying new concepts. It cuts the time needed to understand complex new information by orders of magnitude, and at this juncture studying without it seems pointless. It has rapidly become an extra layer in my brain.
Did you read my comment? I'm not saying I'm bored, I'm saying I wasn't initially sure how to best proceed with my day, which involves a lot of fast-paced knowledge acquisition which generalized LLMs make possible.
Next you'll tell me I shouldn't use a calculator, or a computer at all.
The conceptual level at which I work benefits massively from the recent developments in LLMs, and to stop training myself for the new meta means to drastically fall behind and possibly miss my goals. There is absolutely no reason not to evolve alongside this new technology.
Imagine that the calculator breaks and you need to do some quick math. (Yeah, yeah, I know. Cell phones and such.) It is good to know how to do math without a calculator for those edge cases where one is not available, or when the equations you need go beyond what it can do.
Calculators are good. Calculators are useful. Calculators accelerate your workflow beyond what your ancestors could do. Not knowing how to do math without one is still a hindrance, which is why we still need math classes. You need to know the underlying theory of why calculators do what they do in order for them to be useful.
It's the same with ChatGPT. ChatGPT is a fantastic tool that can benefit your workflow. I use it all the time myself. However, being 100% dependent on it for work is a dangerous game. If it goes down (like today) or the company behind it makes a change that makes it less useful, you still need to know how to do your job without it. That's why OP's comment is worrying: they said they feel unable to work without it. It reminds me of that Avengers quote: "If you are nothing without the Iron Man suit, you shouldn't have it."
The point of my previous comment was that the kind of work I do is not "quick math", and taking GPT out of the equation reduces my velocity by orders of magnitude. Calculators aren't going to disappear, and no astrophysicist is going to do massive multi-dimensional calculations by hand.
> If you are nothing without the Iron Man suit, you shouldn't have it.
It's a nice thought, but I can apply this chain of reasoning to no end of technologies without which scientific progress in a given domain would entirely halt.
> Taking GPT out of the equation reduces my velocity by orders of magnitude.
But you should still know how to do it regardless. Because of situations like this. That is the point of my comment. If you can't do your job without ChatGPT, you don't have any business working in your field to begin with. Even if it's at a reduced speed, you still need to know how to do your job.
>It's a nice thought, but I can apply this chain of reasoning to no end of technologies without which scientific progress in a given domain would entirely halt.
Not really. To do advanced stuff you have to understand the basics. This goes for almost every field. You can't build the next-level JavaScript app without knowing what an if-else does. You can't be a doctor without knowing a little chemistry and biology. Even in a job like construction, you need to be able to do simple math to make sure your measurements are correct.
Saying that advanced tools let you skip the basics in something like programming is a logical fallacy. It's the same argument that managers sometimes use, you know: "Programmers only copy and paste from Stack Overflow. Why do we pay you so much?" Asking ChatGPT for code means nothing if you don't know how to apply it and hunt for bugs. And to use code from ChatGPT, you need to know how to do your job without it. Otherwise, you will only produce code that, at best, sucks and, at worst, doesn't work.
> But you should still know how to do it regardless. Because of situations like this. That is the point of my comment. If you can't do your job without ChatGPT, you don't have any business working in your field to begin with. Even if it's at a reduced speed, you still need to know how to do your job.
I'm currently using ChatGPT for a bunch of AI/ML where I don't know how the internals work. I'm able to build models from scratch that do exactly what I want, with 99% accuracy on my test cases, without actually knowing what the model does; with GPT-4 plus automatic hyperparameter tuning, I can build models I can use in production.
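To be concrete, the "automatic hyperparameter tuning" part is nothing exotic. A minimal sketch of that kind of loop, assuming scikit-learn (the dataset, model, and grid here are made up for illustration, not my actual pipeline):

```python
# Try a small grid of hyperparameters with cross-validation and keep the best model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10, 30]},
    cv=5,
    n_jobs=-1,
)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("test accuracy:", search.best_estimator_.score(X_test, y_test))
```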
Does it matter if I know exactly how everything inside the model works, if I can get it to work exactly to my specification without that knowledge?
This is essentially how I started programming as well way back in time. I didn't know exactly what the Perl code I copy-pasted did, but if it solved the problem, it solved the problem. It brought me and my family out of poverty, and at that point I couldn't care less about how the magic actually was done, just that it did work.
Obviously I now have more knowledge about the web field in general, plus 10+ languages I no longer need any docs to be productive in, and maybe that'll happen with AI/ML eventually as well. But for someone who is starting out with something new and wants to be productive quickly, GPT-4 is a godsend.
>Does it matter if I know exactly how everything inside the model works, if I can get it to work exactly to my specification without that knowledge?
Maybe not every single thing, but you should know what your lines of code that implement it do. You can't debug if you don't know why you wrote what you wrote.
>This is essentially how I started programming as well way back in time. I didn't know exactly what the Perl code I copy-pasted did, but if it solved the problem, it solved the problem. It brought me and my family out of poverty, and at that point, I couldn't care less about how the magic actually was done, just that it did work.
OK. Great that it pulled you out of poverty. That's irrelevant to your argument, but I'm glad for you. I guarantee you that the code sucked regardless. You might not care that you produced software that sucked, but it sucked, guaranteed. If you copy and paste code without knowing what it does, you are a bad programmer. Someone, somewhere, is going to have to clean up your mess. And they are cursing your name right now.
> Maybe not every single thing, but you should know what your lines of code that implement it do. You can't debug if you don't know why you wrote what you wrote.
That's the thing though: the point of code is not to be perfect in isolation, it's to solve a problem. And if you eventually can solve a problem by treating it as a black box, who cares?
> That's irrelevant to your argument
It's not though, it's to illustrate that you can have a real-life impact using code that you don't understand at the time.
> I guarantee you that the code sucked regardless
That is irrelevant though, because no matter if it sucked or not, it worked and solved a real problem, which is the reason we (I at least) write code in the first place.
> Someone, somewhere, is going to have to clean up your mess. And they are cursing your name right now.
Well, and here I am cleaning up someone else's mess, so what? Life goes on.
You seem to fall into the classic programmer's trap of thinking that code has to be beautiful just to look at in order to be valuable, and that anyone who disagrees is a shitty programmer and it's their fault you have to refactor some shitty code right now. It's not, they're not, and it's not their fault.
There are so many things wrong with this argument that it will take me a while to list every one of them.
>And if you eventually can solve a problem by treating it as a black box, who cares
When the inevitable security bug gets introduced because you decided to be lazy, you will care. Black boxes make spaghetti code that is inherently buggy and insecure from the get-go. My current job is to clean up after programmers like you who don't give a crap about the code as long as it "just werks." Am I glad to have the job? Yes. Does it piss me off to see such blatantly terrible code? Also yes.
>It's not though, it's to illustrate that you can have a real-life impact using code that you don't understand at the time
Guess what? Programming lifted me out of poverty as well. I never use black boxes and never have. If I can't understand the Stack Overflow post, I don't use it; I find a solution I can understand. Your point is irrelevant and a logical fallacy. "Well, using shitty code helped me not be poor, so it's good lmao!" Just no.
> That is irrelevant though, because no matter if it sucked or not, it worked and solved a real problem, which is the reason we (I at least) write code in the first place.
If writing spaghetti code causes more problems than it solves, it's not irrelevant. And guess what? It does. It might be years before these problems are revealed, but when that code gets exploited over something simple that a cursory understanding could have prevented, then yeah, that's on you. "It just werks" is not a valid excuse.
> You seem to fall into the classic programmer's trap of thinking that code has to be beautiful just to look at in order to be valuable, and that anyone who disagrees is a shitty programmer and it's their fault you have to refactor some shitty code right now. It's not, they're not, and it's not their fault.
For one, that's ad hominem. Second, beautiful code != good code. I have seen terrible code that was written beautifully, sticking to a single programming style. I have seen great code that looked a little messy. I can tell when the programmer behind the code knew what they were doing or not. I like beautiful code, but I prefer secure code.
Also
>and it's their fault you have to refactor some shitty code right now.
It...literally is. They wrote it. If they write shitty code, and I'm the one that has to fix it, the blame falls on them for writing it in the first place without any quality in mind.
Have you ever managed another programmer? Asked them to produce some code, received it, pointed out flaws or inefficiencies, tweaked it, and even learned something new from their process?
That's how you treat ChatGPT. What you are displaying is ignorance of how best to use these tools, and wrapping it in a superiority complex doesn't make it more palatable.
Try being less negative and close-minded, and explore how these tools can augment your existing workflow. If you lack the capability to differentiate good from bad code, maybe you are just too inexperienced to rely on the tool at an advanced level. If you don't lack that capability, then I fail to understand what the problem is; GPT has vastly sped up my productivity.
>Have you ever managed another programmer? Asked them to produce some code, received it, pointed out flaws or inefficiencies, tweaked it, and even learned something new from their process?
Yes, I regularly fix ancient code and perform code reviews.
>That's how you treat ChatGPT. What you are displaying is ignorance of how best to use these tools, and wrapping it in a superiority complex doesn't make it more palatable.
Are you literally illiterate? My initial argument to you was that it's fine to use it, but that you need to know how to do your job even without it. See what I said: "It's the same with ChatGPT. ChatGPT is a fantastic tool that can benefit your workflow. I use it all the time myself. However, being 100% dependent on it for work is a dangerous game."
My argument with the other guy is that you are a bad programmer if you blindly copy and paste code from ChatGPT without knowing what it does. You have to know the basics before pasting it; otherwise your code becomes totally unmaintainable. That other guy believes it is not only acceptable but preferable to ship code without understanding it, as long as it "just werks."
> If you lack the capability to differentiate good from bad code, maybe you are just too inexperienced to rely on the tool at an advanced level.
> Not really. To do advanced stuff you have to understand the basics.
Doesn't that defeat the whole purpose of these abstractions? Can you read the machine code that your C compiler produces? How much of electrical engineering do you need to know to write a bash script? The physics of how a NAND gate is implemented?
It's obviously in the early stages, and I don't disagree with you completely today -- but this will just be one more layer on top of an already deep stack of abstractions that underlie all of computing.
>Can you read the machine code that your C compiler produces? How much of electrical engineering do you need to know to write a bash script? The physics of how a NAND gate is implemented?
Unironically, I can and do. I took classes in college on things like ASM and logic gates. That's not the point I was making, though. The point is that you need to be able to read your code so that you can fix it if you have to maintain it. Or, if ChatGPT is down (like today) or not giving you the right answer, you can still do your work, albeit a little more slowly. My worry is that people will just plop whatever into a compiler and ship code riddled with bugs and security vulnerabilities. An LLM is only as good as its data source, and with things like ChatGPT and GitHub Copilot, that data source is programmers both experienced and inexperienced. Use it, love it, but don't rely on it. Implement best practices, and use your head.
A calculator is not a service but a tool. Until LLMs become just a tool, don't rely on them too much: expect them to break and have a workaround ready.
I have backup local LLMs, which I used during the outage. That doesn't change the fact that, for now, GPT-4 wins on output quality.
That won't remain true for long, so it would actually be harmful to my career not to invest time learning how to use these tools now, instead of waiting until they are perfect.
ChatGPT is the black box that is pushing the buttons of the calculator for you. You don’t learn maths or programming with this service.
If the calculator is broken, I can still work, slowly, but I understand what I am doing. Without experience, you can't understand what the black box is giving you.
This is why, wherever I travel in the world, I print out paper maps of every city and village I think I might visit; I don't want to develop a dependency on a digital map. My paper maps have 100% uptime.
I just use it to kick-start my learning process. It's basically a quick summary of a topic, and I ask GPT to cite sources. Then you can jump into more authoritative articles, etc. You can't trust it blindly.
Yeah but if I'm doing research online I'm going to stick with sources I consider reliable, written by real people and not a text generator. So I'm not sure I understand the comparison you're trying to make.
I read scientific papers, articles, and references in one pane while keeping GPT open in another pane to help me get the most out of that material in the shortest time. I frequently browse additional resources to corroborate information.
Please do not project onto me. Ask questions about my process before assuming I'm "depriving myself", which you are likely ironically doing yourself in light of your attitude towards GPT.
have you ever considered devoting your full attention to what you're reading? And that doing so consistently will improve your scientific reading comprehension to the point where you don't need to rely on chatgpt mangling the information into nonsense?
> have you ever considered devoting your full attention to what you're reading
have you ever considered being less assumptive and judgemental? you have absolutely no insight into my reading comprehension ability, and have no idea what my process is like.
> you don't need to rely on chatgpt mangling the information into nonsense
except that doesn't happen? It only strengthens my understanding by allowing me to ask questions?
you really need to look at how you're approaching this conversation and calibrate. instead of this mess of assumptions and loaded questions, ask a real, open-minded question such as "what does your workflow look like? what are the pros/cons of this approach?"
if my system works for me, I don't need to prove it to you, however you yourself are missing out on a new style of research which will become incredibly common.
The simple answer is: don't rely on information you find on the internet; SEO killed that years ago. And ChatGPT was trained on that awful mess. Go read an actual book.
You and your sibling commenters made a lot of assumptions about my workflow without asking the right questions, and by and large you are totally wrong about my approach.
Try out Bing Chat. GPT-4 is the basis for Bing Chat. It can be annoying with its message limits, lack of chat history, and tendency to point you to websites. You can switch it to creative mode, though, which is pretty similar to the ChatGPT workflow with the GPT-4 version.
Thanks for the suggestion. Unfortunately, Bing Chat leaves a bad taste in my mouth due to its user-agent restrictions and other shitty behaviors, which I have no desire to work around.
Chatbox is neat. I have API access, but I'd rather not pay for large-context GPT-4 requests while I already have Plus; better to spend this time improving llama.cpp's built-in chat server, if you ask me :)
It generally seems to be entirely separate. I've had multiple instances where the UI was slow but I could easily use the API. I ended up building my own client for fun, and it's pretty useful now as well :)
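For reference, a bare-bones version of such a client only needs a few lines (a minimal sketch against the public chat completions endpoint; the model name and prompt are just placeholders):

```python
# Minimal DIY client for the chat completions API using plain requests.
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]

def chat(messages, model="gpt-3.5-turbo"):
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": model, "messages": messages},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(chat([{"role": "user", "content": "Say hi in one sentence."}]))
```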
Unless development is frozen, I don't think locally hosted LLMs have 100% uptime either, but their uptime is surely much better than OpenAI's ChatGPT uptime at this point.
https://status.openai.com/
As I said above, I was suddenly downgraded to free even though my billing is current, and people are reporting lost Plus subscriptions and chat history:
https://community.openai.com/t/suddenly-downgraded-to-free/9...
I also got blocked by Cloudflare, so I'm unable to post with my Ray IDs. Others are reporting this as well:
https://community.openai.com/t/im-blocked-i-don1t-know-why/3...
People are reporting issues here:
https://community.openai.com/c/chatgpt/19