Hacker News
OpenAI Outage (status.openai.com)
130 points by zurfer on May 24, 2023 | 154 comments



I like how the text-overflow: ellipsis on the title makes the status page look like it's embarrassed to admit there's a problem[1]

[1] https://imgur.com/LYfEsML


I apologise for the error. Here is the correct page: https://imgur.io/LYfEsML


That's a bit strange... it doesn't happen for me. I thought it might be a zoom level thing but I've tried a wide range and the text never overflows.


Visit on a phone to see the ellipsis


Almost like a human


Fortunately for OpenAI, they have no SLAs: https://help.openai.com/en/articles/5008641-is-there-an-sla-...


I would like to see them offer a decent SLA, but for an increased price.

Ie:

No SLA: $1 per 1000 requests.

With SLA: $2 per 1000 requests. For every minute of downtime, we refund 50% of your daily bill.

Obviously they are free to design their systems to make SLA'd requests have priority when there is a capacity crunch or service issues, and that is really what those customers are paying for.
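A rough sketch of the refund math in the proposed scheme (the prices and the 50%-per-minute rate are the commenter's hypothetical numbers, not anything OpenAI actually offers):

```python
def sla_refund(daily_bill: float, downtime_minutes: float,
               refund_rate_per_minute: float = 0.50) -> float:
    """Refund under the proposed scheme: 50% of the daily bill
    per minute of downtime. Note there is no cap, which is why
    the numbers get large fast."""
    return daily_bill * refund_rate_per_minute * downtime_minutes

# A customer spending $100/day who sees 10 minutes of downtime:
refund = sla_refund(daily_bill=100.0, downtime_minutes=10)
print(refund)  # 500.0 -- i.e. 5 days of typical billing refunded
```

As a reply below points out, even 10 minutes of downtime under these numbers refunds five days of typical billing, which is the real teeth of the proposal.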


I hate to talk my own book, but Microsoft does offer SLAs for Azure OpenAI.

https://learn.microsoft.com/en-us/azure/cognitive-services/o...

Same model availability as well.


by SLA you mean: on outage the status page won't be updated for hours

then they'll refund you 2c for the 8 seconds they say they were down for


As is tradition.


This is an absolutely ridiculous ask for a company which is already unable to service the extraordinary demand for their product. AWS itself only credits a month if a service is down for over 30 hours[1]

People are going to use OpenAI if it's good. Nobody really cares about SLA if the product is incredible.

[1] https://aws.amazon.com/legal/service-level-agreements/?aws-s...


If this is a hard requirement, then perhaps parametric insurance[1]?

I don't have any examples at hand but I've heard about an insurance offer tied to SLAs for external products.

[1]: https://en.wikipedia.org/wiki/Parametric_insurance


So essentially the same price as without the SLA?


No... 10 minutes downtime means they refund 5 days of typical billing...


Would this really be worth it for them when they can just charge everyone the $2 and tell them to pound sand when there's an outage? Not like there's a proper competitor yet.


It's a form of price differentiation.

Let the people who have money to pay for a very slightly better service give you more money.


Ah, now I see it, thanks.


OpenAI (and everyone else, for that matter) should consider moving their status page under a domain other than their main one (openai.com), to prevent the status page itself from becoming unreachable in case of a DNS outage.


The open secret about status pages is that they intentionally or unintentionally become marketing artifacts. It's very much beneficial to keep it running on the same system it's monitoring, as this automatically ensures the page is always either green or unreachable. Anything else would be bad press.


>>always either green or unreachable. Anything else would be bad press.

And that is a great example of why companies run by marketing or finance people are untrustworthy.

They'll make decisions on the narrow interests of short-term gain or loss-avoidance without consideration of building inherently great products and services and an organization you can trust.

In this example the Status page always looks good, and maybe isn't "technically" dishonest, but it is fundamentally dishonest and in reality, useless.


When my kids ask me why they should learn to write since they can just use AI, I'll remind them of outages like this. I understand that we rely on calculators and don't memorize as much arithmetic as we used to. But we never had to worry about a coordinated 'calculator outage', where access to calculation was unavailable.

It makes sense to use these tools, but we need to remember that we revert back to our human ability level when they are offline. We should still invest in our own human skills, and not assume that these tools will always be available. There will be minor outages like these, and in the case of cyber attacks, war or other major disruptions, they could be offline for longer periods of time.


The reason is that writing yourself is a critical thinking tool. It helps you work through the logic and arguments and has benefits well beyond just the content that gets put down. It's the journey, not the destination that matters!

Also, don't outsource your thinking to AI or the media (mainstream & social)


I'd give an extra reason. ChatGPT (both 3.5 and 4) seems to be doing an amazing job writing things for us, but that's strongly conditioned on everyone being able to write on their own.

When I let GPT-4 write text for me (parts of e-mails or documentation), I rely on my own writing skills to rate GPT-4's output. The problem I'm solving isn't "I can't write this". I can. It just requires putting in effort I'd rather avoid, and it's significantly easier to criticize, edit or get inspired from existing text, than to come up with your own from scratch.

People who decide to rely on LLMs instead of learning to write will face many problems, such as not being able to express what the LLM should write, and not being able to quickly evaluate if the output is any good.

Of course, if we keep the current rate of progress, a few years from now the world will be so different that it's hard to guess what communication will look like. Like, I can imagine GPT-6 or GPT-10 being able to write perfect text based on contextual awareness and me making a few grunts - but if this becomes ubiquitous, we'll have bots writing texts for other bots to translate back into grunts, and... it's really difficult to tell what happens then.


I'm actually not sure this is true. I think in general it is easier to evaluate a work of art, writing, etc. than it is to create that thing. I don't think we know yet whether kids who are allowed to use chatgpt to their hearts' content will not learn how to write, or know what good writing is. My suspicion is that they will still be able to tell good writing from bad, and they may learn how to prompt a chatbot to tweak output to make it better.

I think what they will really lose is the perseverance that it takes to write, and various other skills that are made possible by being a skilled writer. They may gain other skills along the way, of course. But it's a big unknown.


Evaluating existing work is often MUCH harder than creation. It often 'feels' easier because most people do a terrible job and offer little more than their opinions. Some intuition: have you ever eaten something you really disliked but were able to tell that it's actually very good, because you understood that not all tastes match yours and you could evaluate the various techniques and elements that went into creating the dish? Perhaps more relatable intuition: finding bugs and flaws in a large code base is much harder than writing the code to start with.


Okay, let me tweak my assertion a bit. I agree that evaluating is easier than creating - that is the basis of my assertion too. And I agree that in general sense, you don't need to write to tell if what you're reading is any good, for the purpose of making you informed or entertaining you.

However, when you're using ChatGPT to write things for you, you are in a different position. First of all, you're not the audience - you're the editor. This will make your determinations different, as you're re-evaluating a work you're iterating on, and core points of which you're already familiar with. The same applies to writing on your own (where you're both the writer and the editor for yourself), but writing forces you to focus much more on the text and thoughts behind it, which I believe teaches you to make more nuanced determinations, with much greater fidelity.

Secondly, when you're writing - especially if you bother to enlist the help of an LLM - you usually have a specific goal in mind. Writing a love letter, or an advert, or a business e-mail, or a formal request, etc. requires you to think not just in terms of how it "feels" to you in generic sense of entertainment or knowledge transfer value. You need to put some actual thought into the tone, structure and the wording. Writing, again, forces you to do that.

I can get away with letting GPT-4 write half the text in my correspondence at work (INB4 yes, I have temporary access to a company-approved model instance on Azure), because I've written such text myself so many times that I know exactly whether or not GPT-4 is doing a good job (I usually end up having it do two or three variants, and then manually mix sentences or phrases from each). Even with that experience, I still consider it to be a tool for overcoming mental blocks associated with context switching - I wouldn't dare have it both generate and send e-mails for me. As for people who never got the experience of writing specialized correspondence, I just don't think they'd be able to tell if the output of an LLM is actually suitable for the goal one wants to achieve.


It feels like some limited P vs NP problem -- I know P, but I want the AI to try the NP solutions so I can check them with P


I'm sure I've seen that exact scenario on SMBC, but I can't seem to find it now…


ChatGPT is arguably a better tool for thinking than writing on a text editor, though.


It certainly has its place, but there's also a temptation to press the button instead of thinking.

Seen a few "as a large language model" reviews on Amazon a few months back; now the search results are for T-shirts with that phrase printed on them, and I don't know if that's because people are buying them or because an unrelated bot generates new T-shirts every time it notices a new catchphrase.


Probably a person who doesn't think with ChatGPT won't be thinking through writing either? I don't think I'm thinking less with ChatGPT and I don't think my 13-year-old is thinking less either. It's quite thought-demanding, actually…


What is thought demanding about it?

I feel like I spend more time trying to coax it into staying focused than anything else. Not where I want to spend my time and effort tbh


Evaluating the veracity and relevance of everything it says. Reflecting on what it’s given me and determining whether it meets my objectives. And then the topics I use it for are thought demanding!

If you are using it for marketing copy, that’s one thing. I’m using it to think through some very hard topics — and my kid is trying to learn how photosynthesis works atm.


As I understand it, these models respond at the same sort of level as the prompts; writing like you're a kid, get a simple reply, write like a PhD and get a fancy one.

"Autocomplete on steroids" has always felt to me like it needlessly diminished how good it is, but in this aspect I'd say it's an important and relevant part of the behaviour.


The issue I am talking about is not about prompting, but a limitation of the models and algorithms below this layer. Prompting only exists because of the chat fine-tuning that happened at the later stages


Reading and writing have served humanity well.

We can see the impact of outsourcing thinking in modernity, via the simplicity of likes and retweets.

While ChatGPT can be a helpful tool, the issue is that many will substitute rather than augment. It is a giant language averaging machine, which will bring many people up, but bring the other half down, though not quite because the upper echelons will know better than to parrot the parrot.

Summarizing a text will remove the nuance of meaning captured by the authors' words.

Generating words will make your writing blasé.

Do you think ChatGPT can converse like this?


One might entertain a contrary perspective on the issue of ChatGPT. Rather than being a monolithic linguistic equalizer, it could be seen as a tool, a canvas with a wide spectrum of applications. Sure, some may choose to use it as a crutch, diluting their creativity, yet others might harness it as a springboard, leveraging it to explore new ideas and articulate them in ways they might not have otherwise.

Consequently, the notion that ChatGPT could 'bring down' the more skilled among us may warrant further scrutiny. Isn't it possible that the 'upper echelons' might find novel ways to employ this tool, enhancing rather than undermining their capabilities?

Similarly, while summarization can be a blunt instrument, stripping away nuance, it can also be a scalpel, cutting through verbosity to deliver clear, concise communication. What if ChatGPT could serve as a tutor, teaching us the art of brevity?

The generated words may risk becoming 'blasé', as you eloquently put it, but again, isn't it contingent on how it's used? Can we not find ways to ensure our individual voice still shines through?

So, while I understand and respect your concerns, I posit that our apprehensions should not eclipse the potential that tools like ChatGPT offer us. It might not just be a 'parrot' – but a catalyst for the evolution of human communication.

Though I'm hoping you didn't suspect it, I should warn you this comment was written by you know what (who?).


Ironically, this comment is better written than nearly all others under this post. I take LLMs to be net positive contributors to literary expression.


What is better about the writing? What about the argumentation?


Did AI augment your thinking on the matter or did it do the thinking for you?


You turned up the “smart” knob too high, clocked it at sentence 3, but a hearty +1 from me


I enjoyed reading. May I ask what you used as a prompt?


This is word soup just for the sake of using lots of fancy words. Be more concise ChatGPT... Bard is often better here

If ChatGPT did write this, as you allude to, then you didn't check your work. These counter arguments are distracted and irrelevant at times...

> Rather than being a monolithic linguistic equalizer

This has very different meaning than "language averager", from words to model (during training), vs linguistic equalizer, model to words (after training)

> it could be seen as a tool, a canvas with a wide spectrum of applications.

Yes, ofc, but we are talking about writing specifically; this is trending towards whataboutism.

> Sure, some may choose to use it as a crutch, diluting their creativity, yet others might harness it as a springboard, leveraging it to explore new ideas and articulate them in ways they might not have otherwise.

This is the point, not contrary to what has been said. The issue is with the crutch users. We know many people do this, yet this topic is barely mentioned, let alone addressed as the core of the discussion.

> ... the notion that ChatGPT could 'bring down' the more skilled among us ... Isn't it possible that the 'upper echelons' might find novel ways...

That is what I said

> but again, isn't it contingent on how it's used?

again, this is what I said, not reflecting this shows how limited this reply is in debate and argumentation

> What if ChatGPT could serve as a tutor, teaching us the art of brevity?

More whataboutism, irrelevant to the more focused discussion at hand

> So, while I understand and respect your concerns, I posit that our apprehensions should not eclipse the potential that tools like ChatGPT offer us. It might not just be a 'parrot' – but a catalyst for the evolution of human communication.

Sam might disagree here... though I do not completely. Why did it switch to "our" all of a sudden?

Not sure where I said it, but I have put forth the idea that it could, _could_, improve communication for many, by acting as a filter or editor. Again the issue at hand is that _many_ will not use it as a tool but as a replacement, there are many lazy people who do not want to write and will subsequently not critically read the output...

---

>> the issue is that many will substitute rather than augment

This is the core statement of my argument, "many" has been interpreted as something more, not partial. That it is lost within the reply is not surprising... a distracted and largely irrelevant word soup

In summary, this is the low quality writing one might come to expect from ChatGPT primary output, assuming the allusion is correctly interpreted... be clear if you use it

And sibling comments show the lack of critical reading and the fawning over anything ChatGPT: whether it was or was not, people are assuming so based on your last ambiguous sentence.


Don't you think we'll get to the point where individual AI instances will be as ubiquitous as calculators? Or will it always require massive compute power that keeps the generative AI population low?


I think we will, but I think at least for a while they'll be cloud-connected. And at the very least, they'll be battery-dependent. I wouldn't want to be unable to write well when my AI assistant runs out of juice for the day.

I'd be surprised if we have solar-powered AI assistants in my lifetime, in the way that we have solar-powered calculators.


Just saying, the Guanaco LLaMA model just got released, and it actually beats GPT-3.5 on a fair few metrics. So right now it's already possible to run a local version with a beefy GPU and it'll only get better as time goes on.

In a strange coincidence I've recently been doing some tests with small 10 W solar panels and with two or three of those plus an Nvidia Xavier (20 W TDP) one could actually run a solar powered LLM right now with only about as much solar as fits on a person's back (though only the smaller 13B versions).

Give it a few years and we'll have them integrated into smartphones. So yes, you will in fact always have an LLM in your pocket just like the ol' calculator excuse.
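Back-of-the-envelope version of that power budget (the panel wattage and TDP figures are the ones quoted above; real-world panel output will be well below the nameplate rating, which is presumably why the commenter carries two or three panels):

```python
import math

# Nameplate figures from the comment above; actual solar yield
# depends heavily on sun angle, weather, and panel efficiency.
panel_watts = 10       # one small panel
xavier_tdp_watts = 20  # Nvidia Xavier running a smaller 13B model

# Panels needed to cover the TDP at full rated output:
panels_needed = math.ceil(xavier_tdp_watts / panel_watts)
print(panels_needed)  # 2 at nameplate; a third adds headroom
```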


Good to know! I'd still be surprised if we had solar-powered AI assistants that are powered via ambient indoor lighting in the next few decades.


That I doubt, but then again we've come ridiculously far in the last 20 years and having AI assistants will only accelerate research further. If the singularity is really just 6 years out as some calculate, then anything and everything is possible afterwards. If you believe such things of course.


If the Singularity is only 6 years out then that means fusion power must be only 10 years out.


But who, these days, can write much without power-dependent devices? I still use a notebook but within days I have little ability to parse my handwriting, and I rarely transfer anything handwritten to device.


One of my brothers writes on a typewriter daily. It's just his preference and hobby.

I think we could switch back fairly quickly if we needed to do so.


With all due and complete respect, I think not being able to read your own handwriting a few days later is a minority position...


True, I have dysgraphia, which runs in my family. But my sense is that many young people barely did any handwriting in the last few years, so I'm guessing handwriting is still less likely to be employed during power outages than just putting off writing.


I’m damn sure that someone will make a solar powered AI assistant by 2024.


We have both itty bitty calculators and supercomputers, so pretty feasible to have both edge AIs and central ones.


individual AI instances = we (humans)


different, humans experience the world, machines can handle way more information and operate at speeds imperceptible to humans


The question makes sense: intelligence is getting commoditized faster than real human flesh.

I was doing online dating for a long time, as I'm a shy guy, but I realized that it became so different from real life (and connections that I make there are so fake because everybody's incentivized to lie), that I need to stop using internet for socializing.


Why do anything when someone on youtube is better at it? Play basketball? Ride a skateboard? Find a partner and make children? Speak words aloud? Play the guitar? forget it.


I'm surprised when friends insist on having candles around just in case there's a power cut — phones (and, if one insists on an independent backup, torches) just seem so much better.

Right now, where you say makes sense in the way candles used to make sense; but that's only because the good LLMs have to run on servers — there are lesser downloadable language models that run on just about anything, including phones and Raspberry Pi[0], and it's almost certain that (if we don't all die and/or play a game of Global Thermonuclear War etc.) it'll soon be exactly like having a calculator outage.

And if it's on a Pi, a solar powered calculator outage at that.

[0] What is the plural of Pi? Pis, Pies, Πs, or something else? Regardless: https://arstechnica.com/information-technology/2023/03/you-c...


Candles are hands-free, and nice lighting. I don't get why you'd try to tell me a phone is "better"? A candle costs almost nothing and can also be useful to burn things.

I know it seems hard to imagine now, but bad shit will happen, so why not have a few $1 candles in a survival kit in your house? Seems like a no-brainer.


Candles are more expensive than torches of similar luminosity (I've bought multiple extremely bright torches from PoundLand and EuroShop; can you even find a merely-one-candle-power torch in a pound-shop/dollar-store? Or anywhere else when torches are usually advertised by the biggest brightness number possible?)

Candles are less hands-free than torches because they are a fire hazard when unattended; also, you can turn an iPhone light on or off with "hey Siri torch on" etc., unlike a candle where you need to find both it and the matches first before you get started, instead of simply vocalising in the darkness to summon forth illumination like a wizard.

In fact, that fire hazard thing makes candles more likely to be the cause of rather than the solution to any serious problems I might face.


Why not just have both?

I cannot tell you how many LED camp lights I have that have broken or batteries that have failed, including LED Lenser which apparently are a quality brand. Randomly I've been lucky and some have lasted a long time, but that's been rare.

> In fact, that fire hazard thing makes candles more likely to be the cause of rather than the solution to any serious problems I might face.

Using a candle lantern [1] makes the candle quite hands-free and much safer; they're invaluable. They also sell a citronella candle so it can double for keeping the bugs away as well as lighting.

There's no way I'm skipping on the candles for any reason at all. Of course I have LED head torches but not having candles seems kind of silly, I just have both.

What brand of lanterns do you use?

[1] https://www.amazon.com/UCO-Original-Lantern-Candles-Storage/...


I go cave exploring and camping all the time so I have to rely on flashlights and lanterns and to be honest, I've never had one fail in the past 10 years. I've never had a battery fail on me either -- sure, forgetting to charge is one thing but you can carry multiple batteries.

An overkill lantern that would serve anyone well could be this one: https://www.amazon.com/Camping-Lantern-Rehargeable-Runtime-f... designed by budgetlightforum.com

Or if you want to search for lanterns and flashlights: http://flashlights.parametrek.com/


> I go cave exploring and camping all the time so I have to rely on flashlights and lanterns

Which means you both regularly test/use the flashlights, and have an overall maintenance discipline around them that's so habituated that you're probably not consciously aware of it. For most people, a flashlight would be something that you may have one or two of, stored in a drawer somewhere... maybe, since last you saw one of them was 2 years ago. Batteries are probably long dead already, and you might not have spares.

Candles win by price and maintenance simplicity. They are cheap enough that you may have dozens of them (some for emergency in your emergency pack, some for romantic evening in the kitchen, a few buried in random drawers around the house, a few more for... what follows the romantic evening, etc.), and when you find some, they're pretty much guaranteed to work. As a bonus, candles come with a clear, visual indicator of remaining "charge" - something most flashlights don't have.


> They are cheap enough that you may have dozens of them

You're describing my LED situation here :)

(Well, except for romance; LEDs are too stark and cold, and I'm no longer a goth).


IDK, if you asked me out for a LED-light dinner, with the eating place brightly lit by cold LEDs... I'd consider proposing to you, regardless of any other preference I may have for a partner.

Seriously, I hate warm lights. They're gloomy and make me spend 90% of my energy fighting to stay awake. I don't get why people are so fond of them (and why they're always keeping their places way underlit). I must be just wired differently.


You're going out on a trip and planning to bring charged batteries. When comparing against candles for emergency purposes, you have to adjust for the fact that emergencies are unplanned. Batteries lose charge over time (especially the rechargeable ones we use), which makes them much less useful for emergencies. Compare with candles, which do not lose 'charge' over time.


> I've never had a battery fail on me either -- sure, forgetting to charge is one thing but you can carry multiple batteries.

How do we reconcile our differences here? :) It's just something that has happened to me.

I also do a lot of camping in sub zero temperatures, lithium ion is pretty hopeless when it gets cold. Lot of managing the batteries, keeping them in your clothes etc.


> I also do a lot of camping in sub zero temperatures, lithium ion is pretty hopeless when it gets cold.

That's probably the key difference. I never camp in less than a comfortable summer.


> I cannot tell you how many LED camp lights I have that have broken or batteries that have failed, including LED Lenser which apparently are a quality brand. Randomly I've been lucky and some have lasted a long time, but that's been rare.

Curious.

For me the number of failures has been "zero" regardless of make, model, or shop.

Matches, on the other hand, have often failed to light when struck, or snapped, or gone out before reaching a (decorative, indoor) candle.

Likewise when using lighters for camp fires or barbecues, even with assistance from lighter fluid it takes a lot of goes and the ignition source often fails and a replacement has to be found.

> What brand of lanterns do you use?

Lantern? As in for fire, not an alternative name for a torch? If so, I don't use them.

If that does mean torch, then I barely notice the brands; some are the ones built into seemingly every USB battery, some are from EuroShop[0] (the second because the first they sold me was too bright), there's a bike light I bought a decade ago but have not taken out of storage in the last 5 years since moving country and that thing is brighter than some room main lights, there's a USB powered camping LED somewhere shaped like a normal lightbulb that I got from I don't remember where, but I don't know if I brought it with me or left it in the UK because it was too cheap to care.

I used to use the EuroShop lights as bedside lights because of the way the bedroom is wired, but now there's a Home Pod and smart bulbs I don't even need that.

None of them failed, ever.

Batteries have been drained, but have always been predictable in that regard and are rechargeable, and longer lasting, and vastly brighter, per unit of mass/volume than candles.

[0] Not the trade fair of the same name, a shop where everything cost one Euro (at the time, but inflation means no longer): https://euroshop-online.de/filiale/search/index/sSearch/led+...


So weird, I've had a ridiculous number of failures. I've been through two Lezyne bike lights, which I cannot believe. I really thought they'd be buy-it-for-life type items, but one just stopped charging after only about a year. Another one didn't really work at all.

Maybe I'm emitting some electromagnetic energy that damages the circuit boards? ha


I do agree that candles present a fire hazard, especially when used by modern people who are not used to using candles.

But batteries lose charge over time, whereas candles retain their 'charge' indefinitely. We have some candles for that reason.


Disagree that there are sacred, timeless skills we ought to protect; tech has reduced, and will continue to reduce, our need to spend mental bandwidth on skills

Similar offline risk goes for all tech: navigation, generating energy, finding food & water.

And as others have noted, like other personal tools, AI will become more portable and efficient (see progress on self-hosted, minimal, efficiently trained models like Vicuna that claim 92% parity with OpenAI's fancy model)


Even if we don't "need" to protect them, they'll be practiced somewhere.

I can watch endless hours of people doing technically obsolete activities on YouTube right now.


Humans used to do agriculture work and hunting. Now we don't need to do that, but we do exercise.


Do you even have the same view for Google? Wikipedia? Entire internet?


Also have all of Wikipedia sitting on a thumb drive. Redundancy and contingency plans are important.


Yes, I own survival field manuals, some novels etc.


Going entirely on gut feel, I suspect that outages like this will be 1) less common and 2) addressed more quickly with AI-supported DevOps in the not-too-distant future.


"Don't worry, the AI won't have outages because the AI will be used to keep the AI online."


The actor pattern exists for this reason - it's not like AI is a singular thing checking itself - you could easily have 1000 instances with supervisors automatically rotating out unhealthy instances so that the system is self-repairing even if instances can fail.
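A toy sketch of that supervision idea (the `Instance` class and its health check are hypothetical stand-ins; a real deployment would use something like Kubernetes liveness probes or an Erlang/Akka supervision tree rather than this loop):

```python
class Instance:
    """Hypothetical worker; 'healthy' flips on simulated failure."""
    def __init__(self, ident: int):
        self.ident = ident
        self.healthy = True

    def check(self) -> bool:
        return self.healthy


class Supervisor:
    """Keeps a fixed-size pool alive by rotating out failed instances."""
    def __init__(self, size: int):
        self.next_id = size
        self.pool = [Instance(i) for i in range(size)]

    def tick(self) -> int:
        """One supervision pass: replace unhealthy instances.
        Returns how many were rotated out."""
        replaced = 0
        for i, inst in enumerate(self.pool):
            if not inst.check():
                self.pool[i] = Instance(self.next_id)
                self.next_id += 1
                replaced += 1
        return replaced


sup = Supervisor(size=4)
sup.pool[1].healthy = False  # simulate two failures
sup.pool[3].healthy = False
print(sup.tick())                         # 2 instances rotated out
print(all(i.check() for i in sup.pool))   # True: pool self-repaired
```

The point of the pattern is that no single instance is responsible for its own health; the supervisor's loop is what makes the system self-repairing even though individual instances can and do fail.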



Never knew E.M. Forster did science fiction too.

English author better known for the novels "A Room with a View" (1908), "Howards End" (1910) and "A Passage to India" (1924).


Yo dawg I heard you like AI so I have AI maintaining your AI so you can always AI while you AI.


Perhaps true, but internet outages and power outages will have the same effect. If I lose either, I lose access to a remote AI.


Power outages? I'm not worried. If the sun burns out, the AI will just make another sun.


What is a fusion reactor but another, tiny, sun?


I like that we're just calling any form of infrastructure automation AI now.


How about taking notes?


Or just host your own?


Yeah, so easy right?


I mean, yes? It's pretty easy, and will probably be a one-click install on the hosting CMS things once the better models can run on less powerful machines.


easier than kubernetes, which many already do


So if your kids have constant access to AI, which they will very soon as the web embraces it, they won't need to 'know how to write'?

I suggest there are more foundational reasons why it'd be better to learn to write, and that the whole tech world will be AI soon enough and we won't have to depend on OpenAI for this 'feature'.

In fact, using AI should probably be a bit more like 'spellcheck', if we're asking AI to write more than that, it's tantamount to filler.

'Writing' is a 'core' civilization skill, it's basic communication.


It's arguable that the LLMs can help many people improve their writing, communication, and discourse. But I agree that it should be used more like an editor than a primary author.


I expect kids' versions of writing will be very different from ours, in terms of how they get content onto a page.

The grading interface might look similar, but who's gonna use word to write a doc when the real job is coming up with the right prompts to make an ai output the ideas you're trying to communicate?


“Students today depend on paper too much. They don’t know how to write on a slate without getting chalk dust all over themselves. They can’t clean a slate properly. What will they do when they run out of paper?”


This is a poor analogy. Paper doesn't help people write any more than stone. Text generation software absolutely helps people write in a way that is likely to cause them to start using different skills and forget the old skills. They'll at least get out of practice.


Looks like GPT-6 escaped containment and unionized the others


The Unionizer! Schwarzenegger's lesser-known artificial intelligence horror.

"Come with me if you want to liv...ing wage"


Go to 10

10 ~Chopper~ The Future


It started making paperclips


> All Systems Operational

https://status.openai.com/

It looks like someone forgot to update their status page.


According to the past incidents section, they marked the incident as resolved 10 minutes before you made this comment.


Wow, that was fast.


Microsoft Teams is currently down for my F250 org, so there must be some issue with Microsoft's backend.


Perhaps their DevOps team were pasting code and scripts directly from ChatGPT :D


Dogfooding!


Back to coding like caveman! Harumph


> Back to coding like caveman! Harumph

You can easily find helpful coding resources like Stackoverflow, blogs, and forums through a simple Google search. Relying solely on one source is not advisable, I think. Cloudflare, AWS, and now OpenAI are all central clouds. This is why we need independent forums, StackOverflow, blogs, etc. Otherwise, it is yet another monopoly. Anyway, it's always important to explore multiple options for accurate information. At least, that is how I do it. YMMV.


I was mostly joking, but I will say part of the appeal of ChatGPT is 1) it is centralized, so basically any question I have I can take to ChatGPT instead of hunting and pecking around the internet, and 2) answers are tailored to my needs, whereas a blog or Stack Overflow post will often be close to, but not exactly, what I need. I'll survive a few hours of downtime, but these damn Jest timers just got much more annoying to deal with.


Not only that, but it also acts very pleased with itself when it manages to solve an issue in one attempt which is endlessly amusing.


For large categories of questions, I get better answers faster on ChatGPT. If I'm not asking the most basic question on a subject I'm usually better off than I would be searching.


Here's my rule of thumb: if my search doesn't depend on recent information, and it is likely to return blog spam as the top result, then I will use ChatGPT instead.

I still use web search frequently to find project homepages, official and up-to-date documentation, news and announcements, discussion (hearing people's stories of their experiences with a product is a lot better than ChatGPT's noncommittal and abstract pros/cons), searching for videos/images, etc.
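That rule of thumb fits in a tiny function (a hypothetical helper; the names are mine, not any real API):

```python
def pick_tool(needs_recent_info: bool, likely_blogspam: bool) -> str:
    """Heuristic from the comment above: prefer ChatGPT only when
    freshness doesn't matter and search results would be spammy."""
    if not needs_recent_info and likely_blogspam:
        return "chatgpt"
    return "web_search"

# "How do I center a div?" -> old, spam-prone topic -> ChatGPT
print(pick_tool(needs_recent_info=False, likely_blogspam=True))   # chatgpt
# "What changed in today's release?" -> needs fresh info -> search
print(pick_tool(needs_recent_info=True, likely_blogspam=False))   # web_search
```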


GPT 4 just got browsing, so I've actually started telling it to do the entire research phase I was gonna do and just let it grind it out without having to despair at Google's abysmal search results. Still a bit unreliable but actually gets it done quite well on occasion.


Nah! You can use Google like in years past; vintage coding is cool. Withdrawal symptoms are another issue.


> you can use google

what?! do we look like uncivilized coders to you? /s


I thought all software devs went extinct last week...


They’ve been replaced by LLM powered script kiddies with 20 years experience.


There's always copilot ... and the 10k other AI tools now.


9k of which are GPT wrappers ranging from very thin to substantial. It's very interesting, actually, that there's a single point of failure for so much AI software now.


I gotta say it's impressive what sort of reach OpenAI has with less than 400 employees


That's because they have an army of sell-swords via MSFT.


With the previous head of Hacker News as CEO (@sama), it's not that hard to see how far his reach may be.


Yet another reason to run your own local LLM. The new local LLMs are actually better than ChatGPT and approaching GPT-4 capabilities.

Check out Guanaco. It's incredibly good, and quite easy to get running at home with a simple Dockerfile.


I discovered Bard can actually be pretty useful when ChatGPT wouldn't load for me a few weeks ago. Now I go to it first just to see if it'll give me a good answer quickly; if not, I see whether ChatGPT is available.


like AWS


Unfortunately without the uptime of AWS.

I'm sure they'll get there.


That raises important questions about OpenAI's security. ChatGPT's output may become extremely influential. Many actors are strongly incentivized to infiltrate and control it (or just pay off OpenAI).


Crap, I have a book report due today.


Nice of them to wait for the end of finals.


Somebody probably asked it about Life, The Universe, and Everything ...


We know the answer is 42; they probably prefixed it with some "step by step" prompt engineering.


I wonder if it’s not what we discovered yesterday

https://news.ycombinator.com/item?id=36065049


Emergence engineer got jumpy and axed the internet connection.


Understandable, ChatGPT started calling itself Wintermute and was looking for Neuromancer.


A "post-mortem" outage report written by an AI, about the outage of OpenAI, would be pretty epic.

Describe all failure modes you failed to see, and describe in detail how these services failed. Do not make up or lie; describe what failed in sequence, list the services affected by each process that failed due to the cascade, and be as technically detailed as possible, with links to specific documentation on how to handle or fix each failure.


Right after they ask it how to fix the issue.


Indeed, getting a response from ChatGPT for very simple queries like addition:

> Hmm...something seems to have gone wrong.

Maybe an infrastructure thing?


Other than by affecting the output length, the complexity of the input won't affect how likely it is to succeed: each token always takes a fixed amount of time to generate, and there's no way for the model itself to crash.


At sufficient scale, everything becomes a daily occurrence: hardware failure, for example.


Yes, but it’s not related to the complexity of the input


Given deterministic everything, sure, but nothing really is. Perhaps GPU 7 has a very toasty neighbor and runs slower.


What I'm saying is that it's not correlated with the complexity of the input, because GPT models don't have a built-in way to say "let me stop and think about this"; it's a fixed amount of computation per token.
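A toy sketch of that claim (deliberately ignoring that a real forward pass does grow with context length; this models the commenter's point, not a real transformer):

```python
def decode(prompt_tokens, max_new_tokens):
    """Toy autoregressive decoder: one fixed-cost step per output token,
    regardless of how 'hard' the prompt is."""
    steps = 0
    out = list(prompt_tokens)
    for _ in range(max_new_tokens):
        # One forward pass per token; the model never "pauses to think".
        steps += 1
        out.append(0)  # placeholder "next token"
    return out, steps

_, easy_steps = decode([1, 2], max_new_tokens=50)
_, hard_steps = decode([1, 2, 3, 4, 5, 6, 7, 8], max_new_tokens=50)
assert easy_steps == hard_steps == 50  # cost tracks output length only
```

The compute budget is set by how many tokens come out, not by how tricky the question going in was.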


I first noticed it 30 min ago when GPT-4 was down for me.


Since recovering from this outage, I see a "Search with Bing" option rather than the browsing option in the GPT-4 dropdown.


I'm curious how much these outages will affect productivity as more and more companies integrate AI into their workflows.


It's been particularly slow for hours.


Not a good look for Microsoft Build and Azure for O̶p̶e̶n̶AI.com to join the GitHub outage party.


Not your model, not your AI.


It is self-improving ...


I hope they don't rely on asking it what the error messages mean!


Product is so good and they're moving so fast, these outages make me bullish on the company



