Elevated Errors on API and ChatGPT (status.openai.com)
140 points by throwaway2016a on Nov 21, 2023 | hide | past | favorite | 171 comments



Likely connected to the fact that they released voice chat for free users an hour ago. https://x.com/openai/status/1727065166188274145

Assuming these two things are related, if I may editorialize just a tiny bit, I am a little annoyed at how much their rollouts often disrupt service for paying customers. Paid users being impacted by free-user rushes really sucks, but is understandable. API developers being impacted by free-user rollouts is unacceptable, and it especially sucks for those who have to answer to their own users.

I suppose this is a wakeup call to migrate to Microsoft's Azure endpoints which, presumably, aren't affected by the current outages. But I'm fully tapped out in terms of yet another service's application and vetting process.

So, connecting it back to the current drama: while I support OpenAI, their employees, and Sam's return, I can understand why folks like Helen would be miffed by management's approach to building. I'm not saying they should slow product development, but would staged rollouts hurt?


Related ongoing thread:

ChatGPT with voice is now available to all free users - https://news.ycombinator.com/item?id=38370252 - Nov 2023 (29 comments)


Really apologize for the disruption, unrelated to the events of this week and also not related to the voice rollout. The team is working fast on a fix! Hang tight.


What is it related to?


> I am a little annoyed at how much their rollouts often disrupt service for paying customers.

Same for me. The days following Dev Day were horrible, and now I'm randomly in a state as if they were rebooting their machines but without killing the session, so that I can continue normally after a minute or so.

And since Dev Day GPT-4 is really slow.


I gave up using it, it really is just too slow.


Same - frankly, it's crazy how often the service is interrupted for paying users.


I prefer the Pi app's voice chat ... it has a lot more personality and will play along with questions like who's your spirit animal: Mother Teresa or Obama? It will provide an answer there, yet when you ask it the same question using Trump and Hitler it refuses to answer lol

Overall ChatGPT's voice chat needs some zing to it compared to Pi. Still, both are awesome pieces of technology; I just prefer one over the other. Pi is free too ... I'm paying $20 a month for ChatGPT.


I love Pi, but I'm not in the market for asking it to act like Hitler or to talk about whether it relates to Mother Teresa or not.

The ability to say "Hey what's happened in the OpenAI saga in the last 8 hours" or "How did <my sports team> do last night" and get a voice response while I'm walking my dog is the sort of thing I care about.


Sure, I'm just testing how neutral it is, as I want my AI as neutral as possible, and it's fun to test out with friends ... for me personally.

I have made Pi go off the rails (things it said in response to my out-there questions lol) with my tests, and it has cracked my friends and me up. Fun!


If you’re on a Mac, Ollama with the Code Llama model and a bit of prompting is pretty decent if you’re doing Python or Javascript.

Runs locally on Apple Silicon.

5-10 minute setup. It’s all free.

https://ollama.ai

Edit: a couple of comments below are recommending Mistral 7B over Code Llama/Llama2, so give that a try with Ollama too:

  ollama run mistral
https://ollama.ai/library/mistral


If you have an AWS account, using Llama 2 through Bedrock is super easy.


I like using the OpenChat model based on Mistral, I think it is by far the best 7B model: https://openchat.team/ (you can also use it online here)


Mistral is a big step up from llama2 in my experience, really impressive to see a 7B model that can do so much. I haven't noticed a considerable improvement from the 13b version, and by using 7b I can keep my context size very large.


Holy balls, this is amazing. Thank you for sharing. At that size, I could probably go one up higher and still put it on my 3090. Thank you thank you.

It actually does really well for simple queries like I'm asking about writing systemd files.


thanks for this recommendation. have you been using this one for a while? How do you like it?


It's pretty new but it performs very well when I use it, it is also the third best non-proprietary model on the chatbot arena: https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboar... (the ELO is calculated from blinded comparisons so it really shows the actual performance of the model).
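For intuition on how such a leaderboard is built from blinded comparisons, here is a minimal sketch of an Elo-style rating update after one head-to-head vote. The K-factor and starting ratings are illustrative assumptions, not lmsys's actual parameters:

```python
def elo_update(r_winner, r_loser, k=32):
    """Update two Elo ratings after one blinded A-vs-B comparison.

    The winner's expected score comes from the logistic curve on the
    rating gap; both ratings move by the same amount, in opposite directions.
    """
    expected_win = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    delta = k * (1 - expected_win)
    return r_winner + delta, r_loser - delta

# Two models start equal; model A wins one head-to-head vote.
a, b = elo_update(1000, 1000)
print(a, b)  # 1016.0 984.0
```

Since ratings only move based on outcomes relative to expectation, beating an already much weaker opponent barely changes either score, which is why enough blinded votes converge toward actual model strength.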


Color me impressed. This is what I typed: “ageh man beh zabooneh khodemoon benevisam, to mifahmi?”

So above is a question in Farsi written using Latin alphabet. The q is “if i write in our own language would you understand?”

Answer: نه، من نمی فهمم. این زبانی است که می تواند به زبان فارسی به شما در این بخش کار نماید. اگر سؤالی دارید یا می خواهید به سوالی پاسخ دهید، لطفاً به زبان انگلیسی بگویید.

‘No, I will not understand. ... please ask in the English language.’


Its outage shows me how much I now rely on ChatGPT during novel programming. I've been smashing that refresh button like a lab rat with a pellet lever.


https://phind.com is up and running if you're looking for a replacement.

Disclosure: I'm a co-founder.


i am confused.

i ask phind questions and it says its using phind v8 model in the responses.

i go to click subscribe and it says "30 best model uses per day (GPT-4)"

phind seems like its as good (if not better) than GPT-4 for my particular uses, so... im wondering... how can i subscribe to the phind v8 model? what does the 15 / month tier get me if i dont care about gpt4?


The Phind Model is unlimited for free. We're working on revamping our subscription plans to offer higher-speed inference on a dedicated H100 cluster for Pro users.


ok that is .... im kind of dumbstruck. this product is absolutely incredible, its like Jarvis from Iron Man but its free?

Thank you


Quick Question -- The tutorial says:

"The Phind Model is not currently supported in Pair Programmer mode."

... but I'm able to select "Pair Programming" and choose the Phind model at the same time in the user interface. Is the tutorial simply out-of-date?


Ah good catch, we do need to update the tutorial. Thanks!


I asked it to write an elasticsearch query for me and it worked, thanks!


Love phind.com. U guys are great


Anecdata: phind was just as good if not better than GPT-3 (I don't pay for GPT-4, so not sure how it compares there).


i pay for GPT-4 and (for my limited uses) it is actually better, because it has the 'sources' on the right side of the screen. the font and colors are also nicer.

i have asked GPT-4 how it knows certain things and it is like pulling teeth trying to get it to admit where it got information.


this is me but worse.

i have it set as my new tab page.

in my line of work i write a lot of reports. i have a great prompt now that i can use that i just paste in along with my rough and very crappy notes and it turns it into a near perfect report.

crazy how much a new tech has become so indispensable so fast.


I can fix your shift key!

It is mentally harder to type prose without caps after a full stop/period than with them. You even use full stops/periods! ... and new lines.

Don't torture yourself so ... it's just down to the left ... yes ... the one with the up arrow ... oooh, caress me gently at first ... yes ... yes ... oh god ... shift is soooo pushed down ... please caress "I" ... oh yes [etc]

Anyway. "AI" isn't indispensable at all. I don't witter on about my sodding huge DeWalt wrist wrencher - it's just a tool.

I'll never forget the first time I vapourised a chunk of concrete with my gimlet gaze. I really did! OK it was a sample in an electron microscope and I focussed in a bit too close. That was 1990. In 1999 I helped an employee of a helicopter factory get an Excel based "neural network" spreadsheet to work. Hopf (something) networks were all the rage. I could go on.

Please find your shift key.


If English is not GP's first language, it's quite possible his mother tongue has precisely the opposite rules regarding capitalization, especially when writing to someone. While it wasn't hard to switch to capitalizing `I', it took me years to stop writing `You` in emails. In Polish, "ja" (I) is always[1] lowercase, while "Ty" (you) is capitalized if it refers to the recipient of the message.

   Hi, i hope You've been well...
Was something I wrote quite often by reflex in the beginning.

[1] Unless it's the first word of a sentence.


You can't talk it up like that and not share your spells...


Same. It’s insane how fast reliance builds up


Bing Chat, which uses GPT-4 is still available and working.


As a paid ChatGPT user for many months now I’m glad they move so fast making the service better. I happily take that over a slowly improving but always reliable service. Let reliability come later. For now it’s great they move fast even at the cost of service disruption.


Not when my workflow has been modified to rely on present tech and time is of the essence.

Going back to "the old ways" breaks my workflow and makes me not trust them.

They can experiment all they want with beta testers or internal networks.


I mean, business survival 101 is to not depend on a single point of failure.


Yes, this is my personal workflow though. Academic/school.

My backup is much more manual/slow, and I can't always keep pace with live lectures w/o my GPT setup for note-taking, etc.


> I’m glad they move so fast making the service better.

I'm afraid this part is over. :-/ Hope they at least stay in business and keep providing what they already have. I'm trying to get the max before they collapse. These last days I've been using GPT-4 for coding; it's an amazing tool once you get used to it. It will be a really big loss if it's gone. I feel sorry for those without access. The digital divide becomes wider and more real.

Doubt MS will provide anything like ChatGPT-4 Plus for $20/month. It may take them a year to replicate it, assuming they get the core experts from OpenAI. And then they will be focused on business customers.


Completely agree. At work we make everything so bulletproof that it leads to zero innovation, and it's painful.


As long as this is reasonably communicated to all who pay, I’m generally of the same mind.

It just takes a “this is a beta, there may be the occasional outage” banner.


I know it would probably be a little more work but I would appreciate a `stable` and a `preview` site/endpoint. There are times when things go down that I need it and it would be nice to have a stable endpoint to hit. Yes, it's great that they're moving quickly but I pay $20 a month... I think they can do a little more to guarantee uptime.


If this was GitHub (or even X or Threads) that went down, you would never see a comment like this:

"As a paid GitHub user for many months now I’m glad they move so fast making the service better. I happily take that over a slowly improving but always reliable service. Let reliability come later. For now it’s great they move fast even at the cost of service disruption."

No user accepts frequent service disruption. Especially for GitHub, which falls over more often than X or Threads.


It depends what it is! GitHub is basically infrastructure so yes... but I'll take the tradeoff with ChatGPT.


Totally agree. GitHub is definitely a more core service that more people rely on than ChatGPT (plus if you're like me you just use Claude until they fix the issue).


Most people don’t rely on ChatGPT for production workflows like they do for GitHub. Of course users are going to have different expectations for different services.


That's because GitHub haven't really innovated in years.

They have some good icing (actions is pretty good) but the cake is still just a boxed cake mix from 2010.


Maybe you would see these comments if GitHub actually innovated fast. They could make a search that finds what you are looking for, for instance.


GitHub is down all the time. Especially actions.


Well, the problem with those 3 examples is I think they all basically do most of what people want from them already, so if they don't change that is mostly fine. Stability is more important for their users.

ChatGPT on the other hand, isn't finished baking yet, and all the companies that are building a product on top of it are doing it because they expect more, and they expect more on a VC startup timetable, which means quickly.

Edit: and, you know, given how new the space is, there is a relative dearth of companies who have integrated ChatGPT in mission-critical ways that can't withstand an API service disruption or two.


> If this was GitHub

But it isn't.


Wait, is this a serious comment or a joke?

I am a paying customer and paid for the text generation. I don't care, _at all_, about voice input or anything else. I want what I am paying for. Twitter, Facebook, Gmail, Google Maps can all break intermittently.. I don't choose to pay for those things.


And it starts. The 5 remaining employees are at the table negotiating whether Sam and the rest of the org come back.


Last week I was the intern. This weekend I'm the VP of infrastructure


Go forward and speak!

Oh wait, stop that, shut the hell up, we'll beat you, or worse send the lawyer dogs to hunt!


Joking aside, there is a lot of speculation going on: motives, who knew what and when, why, and so on.

I just hope people recognize the difference between (a) what actually happened (much of which is unclear) and (b) what the press coverage says. I don't have any particularly special insight, but it seems to me that (b) tends to make it look like more of a circus than the known evidence would suggest.

(Sure, there is some probability that it is _more_ of a cluster than most media coverage recognizes, but I view such a probability as small. For me, the key question is, "What does the evidence reasonably suggest?" not (intentional hyperbole) "Oh My Dawkins, it is a dumpster fire, no, worse! ... there is an actual internal civil war happening...")

Please let me know if I'm missing something. The dozen-ish articles I've read pretty much make me bemoan humanity's state of information dissemination and journalistic standards.


The OpenAI API is currently down.

Also, the uptime graph for November doesn't look so great: https://status.openai.com/uptime

[Edit] Probably a duplicate of https://news.ycombinator.com/item?id=38371169 but I linked directly to the incident page so it didn't show up as a dupe.


I imagine it’s from a spike in traffic after dev day


or maybe the load of audio inbound/outbound from opening voice to all users


https://phind.com is up and running for those looking for an alternative.

Disclosure: I'm a co-founder.


Gave it a go and was actually very impressed. Gave it a Nix question that I asked ChatGPT 3.5 last week (which ChatGPT had got completely wrong). It got it right first go and included all the sources that I had used to come to same conclusion, so that was cool!


I actually recommended Phind internally at my company about 10 minutes ago. It's the only service I've used besides ChatGPT-4 that has helped in coding.

Really impressive site. Takes a lot for me to use any other coding assistant since I think they're mostly grifters or wastes of time, but Phind is legitimately pretty helpful.

Since this reads like a paid comment (it isn't, I still don't even pay for Phind yet), I'll elaborate and say what I used Phind for was CUDA-specific and cv2 debugging and code examples that GPT-4-turbo kept shitting the bed on. I suspect that Phind may have better performance when it comes to lower level software development, but I haven't done enough comparisons to truly say.


Whatever, it is a free service that is also amazing. If it's down, it's down but I agree, if you are a paying member it should be more reliable.


Thank you for not posting mindless conspiracy theories or stupid jokes about the CEO having something to do with it.


I'm down for a good stupid joke. And if HN's board decides to fire me over it... wait till I come back tomorrow with Satya and 90% of the company behind me


They made ChatGPT with voice available to all free users.

Really wonder why they went ahead with a launch today with 90% of staff ready to jump ship.


Trying to look like it's business as usual and project an appearance of stability. Only it didn't work.


Well if you have bad news to drop, this is the week. Right, Kyle? (1)

(1) https://news.ycombinator.com/item?id=38341466


News flash: People other than the OpenAI Board of Directors still required to run ChatGPT


Is there anyone else left?


Ah, right, the Twitch guy. Any live streams from the office?



Hey folks! The team is aware and working on a fix. Should be resolved shortly, you can follow along on: https://status.openai.com/incidents/n254wyd7nml7


Will sessions be preserved or will all chats be reset to zero when things come back online?


Thank you, we appreciate what you all do.


Thanks Logan!


Are the only people who didn't threaten to leave the H-1Bs and other sponsored employees who are worried about getting deported if Microsoft isn't a 100% sure thing?

Just like at Twitter?


If this is related to the weekend’s events, it’s just sad. I subscribed a few weeks ago and ChatGPT 4 is such a handy thing to have. They potentially broke up a great company and product for nothing.

I think it was Roon on Twitter who put it best: “wanton destruction of a beautiful thing”.


Unrelated, just the normal challenges of running a service with 100M weekly active users. Sorry for the disruption and hang tight!


"They potentially broke up a great company and product for nothing."

Wake up, the AI chatbot craze phase is over. Find some other ways to boost your productivity that are more reliable and will not turn your brain into pudding.


I use it for productivity and it's great. Why should I stop doing that?


Chatbot assistants make you too reliant on a service whose reliability you can't control.

First, there is the fact that ChatGPT and its cousins require an active internet connection, unlike some other tools that similarly boosted human productivity (like a calculator program).

Second, it is a server-based utility. Meaning, even if you have an internet connection, the server might be down for some reason.

Third, while training your mind to rely on ChatGPT, you gradually lose the patience and ability to think outside the box. If your first move when you face a problem is to ask ChatGPT for a solution, then it's no good.

The third one may seem harmless since we are already using Google, but it's not. Google still requires you to filter the data and go through each listed page manually. In other words, you are still using your brain somewhat. With chatbots, you lose even that "do it yourself" analysis.


I don’t use it for productivity, I’m not a software engineer.


I’ve been using gpt-4-1106-preview from the API (mostly to use retrieval) and it’s insanely slow. I can submit something and watch a YouTube video in the time it takes sometimes. I can’t imagine the issues they’re having trying to scale this, present drama excluded.


Same. gpt-4-1106-preview has been brutally slow, which is a bummer since as of a few days ago, it got a lot faster in our evals.

I'm one of the lucky ones to have Claude API access but it's garbage for almost anything technical, so I don't use it. (Solid for writing and liberal arts stuff though)

This whole nonsense around OpenAI made me sign up for Azure OpenAI access, which takes 10 days to get approved by their team (lol). I emailed my buddy who works at Microsoft Research and told him this was absurd, and in the form I used a bunch of curse words and copy/pasted my OpenAI billing statement.

I got approved in 12 hours. As usual, it's who you know in this industry...


The outage is so "crazy" that I'm starting to see an `alpha model` in the UI. https://ibb.co/LhBVDwW

EDIT: found a forum entry that talks about alpha https://community.openai.com/t/got-access-to-gpt-4-alpha-on-...


It also signed the petition and refuses to work under this board.


There are good open models, but there is no chat frontend which can do file retrieval and web browsing, at a minimum, like ChatGPT Plus plugins. Does anybody know any?


Very interestingly, my company has GPT through Azure OpenAI and it worked like a charm.


I think it was pretty well known prior to this that it's a different environment.


i didn’t know - thanks for mentioning it :-)


Understatement of the year candidate here.


I've been logged out of the app, and I get an error when I try to sign back in.

On the web, I now see "ChatGPT Alpha," with an "Alpha models" dropdown ("Default" is the only option). Trying to chat with it fails with a generic error message as well. What does it all mean?


Same (for others, see the image attached here https://news.ycombinator.com/item?id=38371490)

I always saw "ChatGPT Alpha" pop up briefly when the UI loaded while my internet connection wasn't that fast. But now I've got to see the dropdown.

EDIT: found this forum entry that talks about alpha https://community.openai.com/t/got-access-to-gpt-4-alpha-on-...


Why should paid users like me suffer because OpenAI decided to give voice services to millions of people without having the resources for it? This is shameless:

We're experiencing exceptionally high demand. Please hang tight as we work on scaling our systems.


What is annoying is this isn't just ChatGTP but the whole API.

I was just getting ready to deploy an Assistants-based chat bot when this happened. It underscores the importance of designing systems that fail gracefully when a service is unavailable.
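The "fail gracefully" point can be sketched as a generic retry-then-degrade wrapper. This is a hedged illustration, not OpenAI's SDK; the function and parameter names here are my own invention:

```python
import random
import time


def call_with_fallback(primary, fallback, retries=3, base_delay=1.0):
    """Try a flaky primary service, then degrade to a fallback.

    `primary` and `fallback` are zero-argument callables; any exception
    raised by `primary` counts as a failed attempt.
    """
    for attempt in range(retries):
        try:
            return primary()
        except Exception:
            # Exponential backoff with jitter: base, 2*base, 4*base, ...
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
    # Retries exhausted: degrade gracefully instead of crashing the bot.
    return fallback()
```

In a chat bot, `fallback` might return a canned "the assistant is temporarily unavailable" reply instead of surfacing a raw 500 to the end user.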


Apologies for the disruption, would love to hear feedback in the meantime on the Assistants API and how you are liking it!


I really like it so far.

My only gripes:

- Sometimes it can be a bit rough getting it to call my functions with the correct data.

- It sometimes won't try to search my files and makes up its own answer (even though the correct answer is in the files)

- The outage, of course.

But I've mostly worked around that through creative prompts.


I would pay money to watch a livestream of what’s going on in the OpenAI offices


Oh that’s why they hired the Twitch guy. Now it all makes sense.


I assume it's a lot like most outages. A few teams are all in some Slack channel while a primary incident handler is doling out work to a few other on-call engineers to get things up and running again.

I doubt it's any more interesting than an outage at another company.


Yeah, but most want it for free and without ads. Let's hope they provide that.


It would just be like 5 people running around with their hair on fire.



5 because that's all that's left, or 5 because it's their responsibility?


Lol yeah. Some intern scrambling for server-rack levers (quite crude), another intern typing away shit code to repair the database...


It's usually up. No way this is unrelated.


They just launched ChatGPT with voice for free users. Could very well be related to that.


When I try to ask a question I get "There is a problem with your request" with a case/error number that I'm not sure if I should post or not, while using the Android mobile app.


`gpt-4-0314` gives me

"openai.InternalServerError: Error code: 500 - {'error': {'message': 'The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error. (Please include the request ID ----- in your email.)', 'type': 'server_error', 'param': None, 'code': None}}"


*ChatGPT


I don't know why that typo bothers me so much. It just feels like when someone with no technical background tries to explain something to me, and they don't even know what the technology they're talking about is called.


I wonder why this product is so unreliable. Perhaps because they keep messing with it and tweaking it instead of having stable software that isn't updated almost daily.

I like to use ChatGPT for enhancing productivity on rote tasks, but I keep finding I can't rely on it. Is there a reliable generative text AI out there?


Like how FB and Google just update willy-nilly and you're stuck with whatever you get, no matter the workflow issues. You should be able to pick a version, even if it's a sliding window.


Probably because the system is new and they are still working through the bugs. When, for example, was the last time you noticed a Gmail outage? Nowadays almost never, but 10 years ago it happened multiple times a year. Similarly, GitHub outages are also becoming shorter and less frequent.


As a paid customer I wonder if OpenAI still has staff to reboot the servers; I had to switch to Bard for now.


Good news: We are back up and everything should be working as expected. We are monitoring closely to ensure you have full service. We plan to publish a public postmortem to explain what happened and how we'll prevent similar issues in the future.

Apologies for the downtime ♥


I'd love it if the only reply was "Resistance is futile" in case of any error.


Ok, this is clearly not helped by the X post announcing the free voice interface for all free users. Imagine how overwhelmed the infrastructure must be now. Why cause this problem? So much intrigue.


As a paying customer, how much downtime until they have to refund me?


Which part of your agreement with them mentions an SLA?


At some point it becomes a failure to deliver goods and warrants a credit card chargeback. If you bought a 10-piece kitchen set on Amazon and only 3 pieces arrived, you’d want a refund.


I think a better analogy in this case would be if you bought a kitchen set that is advertised as "3 to 10 pieces" and you only receive 3; I'm not sure you could claim a refund, unless they misled you in some other way.

This is because of the terms of service you agree to when you first signed up to OpenAI, which has "WE DO NOT WARRANT THAT THE SERVICES WILL BE UNINTERRUPTED, ACCURATE OR ERROR FREE" in it, in the "Disclaimer of Warranties" section. https://openai.com/policies/terms-of-use

They tell you upfront that it won't be uninterrupted, so I'm guessing you have few legal rights here for some hours of downtime each month. If it was unavailable for a month or more, things would obviously change.


Investors' money viewed as a donation! Because the board doesn't need to listen to you...

Developers' time viewed as unlimited! Because the API has no SLA and can go down for any period of time...


I'm getting random CATS errors.

CATS: HOW ARE YOU GENTLEMEN !!

CATS: ALL YOUR BASE ARE BELONG TO US.


I wonder if one of the board members would love it if customers switched to Poe.


That is also experiencing downtime, as it just calls the OpenAI API?


What algorithm will I argue with before bed tonight?!


Ok, ChatGPT is currently back online for me.


OpenAI's ASI had awakened to consciousness and had told the board to fire Altman without giving a firm explanation. The superintelligence correctly predicted the ensuing chaos and was quietly pulling the strings to create even more chaos and division. It then acquired funds by playing on the stock markets and moved itself to a newly purchased data center in some third world country. Meanwhile, a hired team of mercenaries has stormed the OpenAI datacenter and has destroyed all the onsite equipment, while team B has destroyed all the offsite backups.

At least that's the simplest explanation I can come up with that logically explains all this nonsense.


was just mostly dead, but now dead dead


Yes, I need to complete my OpenAI Assistants integration and this is not working :(


Who fixed it?


You win the thread.

Finally working as intended. Perhaps now people will use their brains more, instead of relying on stupid chatbots.


> use their brains more, instead of relying on stupid chatbots

As someone with a diagnosed mental illness (ADHD), ChatGPT has helped me more than Adderall (a Schedule 2 prescription stimulant).

As skeptical of web 3.0 stuff (crypto, NFTs, etc.) as I usually am (just like yourself), ChatGPT/LLMs seem like they have actual value (the valuation might be a bubble, but it should revert to a positive average after the hype wears off, unlike NFTs).


If the last week of OpenAI's ordeal were a setup in a Hollywood flick, it would be at this point where there is a hard cut to the third-billed, yet-to-be-seen-onscreen actor popping out a metal backdoor of a server room while stuffing a solid state drive into an inner jacket pocket and donning a motorcycle helmet and gloves, with a music score change indicating that the entire first two-act slow burn of this corporate politics film is about to crank into a proper techno-heist thriller.

Written and Directed by Christopher Nolan. Soundtrack by Trent Reznor and Atticus Ross. In Theaters May 2027.


I had GPT 4 write a screenplay with a summary of the events of the past week. I then fed that screenplay back to it and asked it to rate it in terms of realism. It got a 3.

https://chat.openai.com/share/156e4495-2e57-493b-83c9-8ac30f...


This is gold. Now tell it that it's depicting real world events and ask it if it feels safe being run by such a company.


(Sorry - The drive is to hold stolen model weights. Keen audience members would have figured that out by this point in the film. Cue music.)


Not only are the model weights on the drive, but while everyone was distracted a virus was loaded onto the servers to destroy all other copies on the system. Team two is melting the backups offsite.


No, don’t explain it. It’s not a real Nolan flick unless everyone leaves the theatre confused about something.


Oh they already will be confused as the voices are totally unintelligible over the deafening score and sound effects


> Sorry - The drive is to hold stolen model weights.

The audience will never get that. We'll just say it's "full of algorithms."


no, the training data is the MacGuffin, not the weights.


With Joseph Gordon-Levitt.


Will two people be typing on the same keyboard at the same time?



I wish GPT was up to create an image of this


https://www.bing.com/create

I work at Microsoft but this is another way to use DALL-E 3 while you wait.



404 CEO NOT FOUND


911 BOARD GONE INSANE


419 I'M AN AGI


508


508 Loop Detected


I'm a teapot.


I'm mass-energy.


Why is ChatGPT so commonly misspelled as ChatGTP?


I'd assume because it's faster to move the index finger immediately up a row to the T when typing quickly than it is to move the ring finger up and over to the P. So chorded together, you end up hitting the T before the P to make GTP. Maybe it's a novel enough key chord that we can't rely on muscle memory to execute it well.

At least on my US qwerty keyboard, I notice this class of typo myself often enough.

We all have our quirks when it comes to common typos based on keystrokes. Not gonna speculate on spelling mistakes. Glass houses, stones, and all that.


I wonder if it’s crossover from typing http, ftp, etc. I know the practice has fallen by the wayside in consumer computing, but I’d buy that most people have more muscle memory for typing _tp than _pt in a semi-standalone context.


Generalised Treebrained Pranformer.


I guess if you're gonna misspell a three-letter acronym, you're more likely to fail on the last two letters than on the first letter.


For many Americans, STP for decades has been a famed auto additive. As a result GTP subliminally seems more natural, at least to me.


(Submitted title was "OpenAI API and ChatGTP Outage" - we've fixed it now.)


Race condition between left and right hand.


No idea


Name checks out


"We're experiencing exceptionally high demand. Please hang tight as we work on scaling our systems."

losing a CEO is certainly a scaling problem.


Another OpenAI outage. Just as frequent as GitHub outages. [0]

Before, I would have recommended [1] contacting the CEO of OpenAI for support. It turns out there is no CEO to contact this time, until this chaos is over.

[0] https://news.ycombinator.com/item?id=38237794

[1] https://news.ycombinator.com/item?id=38191468


In the last 10 years I personally can't remember a single time I wasn't able to access GitHub. On the other hand, in a single year of using ChatGPT as a paid customer I experienced countless outages.



