Our systems are offline, and we don’t know what to do. Typically when we have a problem, we ask the system how to solve it, but it says it’s unavailable. We have tried asking each other, but nobody knows what to ask exactly.
The JIRA example is a funny one. It’s like a workshop with a broken clipboard. Imagine not ordering a replacement one because there’s no task (written on said clipboard) to order a replacement.
The local internet connection is a necessary but not sufficient part of accessing any hosted service. More importantly, the service itself has to be online.
In the last year, if I add up all the service outages at GitHub and at my office’s ISP, I think GitHub was unreachable for more than a couple of hours on about five occasions.
Your coffee machine uses AI to continually enhance your coffee experience based on the mineral contents of the water, the specific beans being used, your personal tastes and various other dynamic factors.
I mean, that's akin to proposing a situation where a power plant is its own backup power plant. Neither I nor any other human can personally generate electricity when the system fails, but if I fail to design the system in such a way that something else can handle the failure with reasonably high reliability, I probably should not be designing systems.
Spent months preparing a presentation to give in front of my entire company. It has been down since 9:21 German time. Had one of the worst nights of my life. Now we have had to postpone the presentation by two weeks.
In general I agree with you, but sometimes the “live”ness of demos is essential. I’ve given quite a few presentations about LLMs over the past year and a half, often to audiences unfamiliar with or doubtful about them. Being able to have on-the-spot, unplanned interactions with an LLM has been very useful for showing listeners that they are not just preprogrammed algorithms.
But I’m giving another demonstration in a couple of weeks, and this time I will be ready to show not only ChatGPT but also Claude and Gemini. And I’ll have some previously prepared examples ready in case the Internet connection goes down.
The API isn’t down. Instead of waiting, I started writing some simple Python code to interact with ChatGPT directly. I haven’t seen another repo for this so far; maybe someone else knows a good one.
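Roughly what I mean, as a minimal sketch. This assumes the official `openai` Python package and an `OPENAI_API_KEY` set in the environment; the model name is just an example, not a recommendation.

```python
# Minimal sketch: talk to the chat completions API directly instead of the web UI.
# Assumes the official `openai` Python package and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send a single prompt and return the assistant's reply as plain text."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask("Say hello in one short sentence."))
```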
It shows me how much I used it to help rationalize my health anxiety. Now I need to decide whether I trust llama3’s answer enough, or whether I should just get it over with and schedule an appointment with my doctor so she can confirm my fear that this twitch in my eye that’s turned into pain and irritation is a clear sign of a tumor.
I realized OpenAI are technically incompetent when they banned my API access for lack of payment even though my prepaid balance holds enough dollars for many months of access. They refuse to deduct the 50 cents from the prepaid balance to unblock my API access and are holding my money hostage. OpenAI is the only company I have paid but can’t use due to “lack of payment” despite a very positive prepaid balance, and out of the blue they are also refusing the credit card they happily accepted for a year.
This ended up being a good thing, because it forced me to integrate other LLMs into my project when before I would have happily used only OpenAI’s services. I discovered that LLaMA 3 70B and Claude follow instructions better than ChatGPT, and in the end I have not missed ChatGPT.
OpenAI is still holding my money hostage and refuses to unblock my API access. Even after many messages to their support they just don’t care.
From my experience, don’t build anything solely around OpenAI’s API: even if you pay, they can revoke your access for non-payment, a paradox I never thought I would encounter. Other LLMs are more interesting than the overhyped ChatGPT, and if you self-host you have far more control.
Tl;dr: OpenAI is either technically incompetent or borderline fraudulent.
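For anyone wondering what the self-hosting route looks like in practice, here is a rough sketch assuming a local Ollama server on its default port with a llama3 model already pulled; the endpoint and model name are assumptions about that particular setup, not the only way to do it.

```python
# Rough sketch: query a self-hosted llama3 model through a local Ollama server.
# Assumes Ollama is running on its default port and `ollama pull llama3` has been done.
import json
import urllib.request


def ask_local(prompt: str, model: str = "llama3") -> str:
    """Send a single prompt to the local Ollama chat endpoint and return the reply."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete JSON response instead of a stream
    }).encode("utf-8")
    request = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        body = json.loads(response.read().decode("utf-8"))
    return body["message"]["content"]


if __name__ == "__main__":
    print(ask_local("Say hello in one short sentence."))
```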
You know it has already when it procrastinates like a real human, gives joke answers to straightforward questions, and excels at fun tasks like composing slam poetry.
Maybe it is related to them updating the backend to support GPT-5 in production. I appreciate that downtimes like this should be planned, but since it’s OpenAI, this may be it. One can only hope, though.