Assuming these two things are related, if I may editorialize just a tiny bit, I am a little annoyed at how much their rollouts often disrupt service for paying customers. Paid users being impacted by free user rushes really sucks, but is understandable. API developers being impacted by free-user rollouts is unacceptable, and especially sucks for those who have to answer to users of their own.
I suppose this is a wakeup call to migrate to Microsoft's Azure endpoints which, presumably, aren't affected by the current outages. But I'm fully tapped out in terms of yet another service's application and vetting process.
So, connecting it back to the current drama: while I support OpenAI, their employees, and Sam's return, I can understand why folks like Helen would be miffed by management's approach to building. I'm not saying they should slow product development, but would staged rollouts hurt?
Really apologize for the disruption, unrelated to the events of this week and also not related to the voice rollout. The team is working fast on a fix! Hang tight.
> I am a little annoyed at how much their rollouts often disrupt service for paying customers.
Same for me. The days following Dev Day were horrible, and now I'm randomly left in a state as if they were rebooting their machines without killing the session, so that I can continue normally after a minute or so.
I prefer the Pi app's voice chat ... it has a lot more personality and will play along with questions like who's your spirit animal: Mother Teresa or Obama. It will provide an answer there, yet when you ask it the same question using Trump and Hitler it refuses to answer lol
Overall ChatGPT's voice chat needs some zing to it compared to Pi. Yet both are awesome pieces of technology; I just prefer one over the other. Pi is free too ... I'm paying $20 a month for ChatGPT.
I love Pi, but I'm not in the market for asking it to act like Hitler or talk about relating to Mother Teresa or not.
The ability to say "Hey what's happened in the OpenAI saga in the last 8 hours" or "How did <my sports team> do last night" and get a voice response while I'm walking my dog is the sort of thing I care about.
Mistral is a big step up from llama2 in my experience, really impressive to see a 7B model that can do so much. I haven't noticed a considerable improvement from the 13b version, and by using 7b I can keep my context size very large.
It's pretty new, but it performs very well when I use it. It is also the third-best non-proprietary model on the Chatbot Arena: https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboar... (the Elo is calculated from blinded comparisons, so it really reflects the actual performance of the model).
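For what it's worth, an Elo score like that is built up from exactly these pairwise blinded votes. Here's a minimal sketch of the classic Elo update rule (the leaderboard's actual pipeline is more elaborate, and the K-factor of 32 is just a conventional choice, not theirs):

```python
def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """One Elo update from a single pairwise comparison.

    score_a is 1.0 if model A wins, 0.0 if it loses, 0.5 for a tie.
    """
    # Expected win probability for A given the current rating gap.
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    # Move each rating toward the observed outcome; gains and losses are symmetric.
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new

# Two models start even at 1000; A wins one blinded comparison.
print(elo_update(1000, 1000, 1.0))  # → (1016.0, 984.0)
```

Run over thousands of blinded votes, ratings converge so that the gap between two models encodes how often one beats the other.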
Color me impressed. This is what I typed: “ageh man beh zabooneh khodemoon benevisam, to mifahmi?”
—
So the above is a question in Farsi written using the Latin alphabet. The question is "if I write in our own language, would you understand?"
Answer: نه، من نمی فهمم. این زبانی است که می تواند به زبان فارسی به شما در این بخش کار نماید. اگر سؤالی دارید یا می خواهید به سوالی پاسخ دهید، لطفاً به زبان انگلیسی بگویید.
‘No, I will not understand. ... please ask in the English language.’
This outage shows me how much I now rely on ChatGPT for novel programming. I've been smashing that refresh button like a lab rat with a pellet lever.
i ask phind questions and it says its using phind v8 model in the responses.
i go to click subscribe and it says "30 best model uses per day (GPT-4)"
phind seems like its as good (if not better) than GPT-4 for my particular uses, so... im wondering... how can i subscribe to the phind v8 model? what does the 15 / month tier get me if i dont care about gpt4?
The Phind Model is unlimited for free. We're working on revamping our subscription plans to offer higher-speed inference on a dedicated H100 cluster for Pro users.
i pay for GPT-4 and (for my limited uses) it is actually better, because it has the 'sources' on the right side of the screen. the font and colors are also nicer.
i have asked GPT-4 how it knows certain things and it is like pulling teeth trying to get it to admit where it got information.
in my line of work i write a lot of reports. i have a great prompt now that i can use that i just paste in along with my rough and very crappy notes and it turns it into near perfect report.
crazy how much a new tech has become so indispensable so fast.
It is mentally harder to type prose without caps or full stops/periods than with them. You even use full stops/periods! ... and new lines.
Don't torture yourself so ... it's just down to the left ... yes ... the one with the up arrow ... oooh, caress me gently at first ... yes ... yes ... oh god ... shift is soooo pushed down ... please caress "I" ... oh yes [etc]
Anyway. "AI" isn't indispensable at all. I don't witter on about my sodding huge De Walt wrist wrencher - it's just a tool.
I'll never forget the first time I vapourised a chunk of concrete with my gimlet gaze. I really did! OK it was a sample in an electron microscope and I focussed in a bit too close. That was 1990. In 1999 I helped an employee of a helicopter factory get an Excel based "neural network" spreadsheet to work. Hopf (something) networks were all the rage. I could go on.
If English is not GP's first language, it's quite possible his mother tongue has precisely the opposite rules regarding capitalization, especially when writing to someone. While it wasn't hard to switch to capitalizing `I', it took me years to stop writing `You` in emails. In Polish, "ja" (I) is always[1] lowercase, while "Ty" (you) is capitalized if it refers to the recipient of the message.
Hi, i hope You've been well...
Was something I wrote quite often by reflex in the beginning.
As a paid ChatGPT user for many months now I’m glad they move so fast making the service better. I happily take that over a slowly improving but always reliable service. Let reliability come later. For now it’s great they move fast even at the cost of service disruption.
> I’m glad they move so fast making the service better.
I'm afraid this part is over. :-/ I hope they at least stay in business and keep providing what they already have. I'm trying to get the most out of it before they collapse. These last days I've been using GPT-4 for coding; it's an amazing tool once you get used to it. It will be a really big loss if it's gone. I feel sorry for those without access. The digital divide becomes wider and more real.
Doubt MS will provide anything like ChatGPT-4 Plus for $20/month. It may take them a year to replicate it, assuming they get the core experts from OpenAI. And then they will be focused on business customers.
I know it would probably be a little more work but I would appreciate a `stable` and a `preview` site/endpoint. There are times when things go down that I need it and it would be nice to have a stable endpoint to hit. Yes, it's great that they're moving quickly but I pay $20 a month... I think they can do a little more to guarantee uptime.
If this was GitHub (or even X or Threads) that went down, you would never see a comment like this:
"As a paid GitHub user for many months now I’m glad they move so fast making the service better. I happily take that over a slowly improving but always reliable service. Let reliability come later. For now it’s great they move fast even at the cost of service disruption."
No user accepts frequent service disruption. Especially for GitHub, which falls over more often than X or Threads.
Totally agree. GitHub is definitely a more core service that more people rely on than ChatGPT (plus if you're like me you just use Claude until they fix the issue).
Most people don’t rely on ChatGPT for production work flows like they do for GitHub. Of course users are going to have different expectations for different services.
Well, the problem with those 3 examples is that I think they all basically do most of what people want from them already, so if they don't change, that is mostly fine. Stability is more important for their users.
ChatGPT on the other hand, isn't finished baking yet, and all the companies that are building a product on top of it are doing it because they expect more, and they expect more on a VC startup timetable, which means quickly.
Edit: and, you know, given how new the space is, there is a relative dearth of companies who have integrated ChatGPT in mission-critical ways that can't withstand an API service disruption or two.
I am a paying customer and paid for the text generation. I don't care, _at all_, about voice input or anything else. I want what I am paying for. Twitter, Facebook, Gmail, Google Maps can all break intermittently.. I don't choose to pay for those things.
Joking aside, there is a lot of speculation going on: motives, who knew what and when, why, and so on.
I just hope people recognize the difference between (a) what actually happened (much of which is unclear) and (b) what the press coverage says. I don't have any particularly special insight, but it seems to me that (b) tends to make it look like more of a circus than the known evidence would suggest.
(Sure, there is some probability that it is _more_ of a cluster than most media coverage recognizes, but I view such a probability as small. For me, the key question is, "What does the evidence reasonably suggest?" not (intentional hyperbole) "Oh My Dawkins, it is a dumpster fire, no worse! ... there is an actual internal civil war happening...")
Please let me know if I'm missing something. The dozen-ish articles I've read pretty much make me bemoan humanity's state of information dissemination and journalistic standards.
Gave it a go and was actually very impressed. Gave it a Nix question that I asked ChatGPT 3.5 last week (which ChatGPT had got completely wrong). It got it right first go and included all the sources that I had used to come to same conclusion, so that was cool!
I actually recommended Phind internally at my company about 10 minutes ago. It's the only service I've used besides ChatGPT-4 that has helped in coding.
Really impressive site. Takes a lot for me to use any other coding assistant since I think they're mostly grifters or wastes of time, but Phind is legitimately pretty helpful.
Since this reads like a paid comment (it isn't, I still don't even pay for Phind yet), I'll elaborate and say what I used Phind for was CUDA-specific and cv2 debugging and code examples that GPT-4-turbo kept shitting the bed on. I suspect that Phind may have better performance when it comes to lower level software development, but I haven't done enough comparisons to truly say.
I'm down for a good stupid joke. And if HN's board decides to fire me over it... wait till I come back tomorrow with Satya and 90% of the company behind me
Are the only people that didn't threaten to leave the H-1Bs and other sponsored employees who are worried about getting deported if Microsoft isn't a 100% sure thing?
If this is related to the weekend’s events, it’s just sad. I subscribed a few weeks ago and chatgpt 4 is such a handy thing to have. They potentially broke up a great company and product for nothing.
I think it was Roon on Twitter who put it best: “wanton destruction of a beautiful thing”.
"They potentially broke up a great company and product for nothing."
Wake up, the AI chatbot craze phase is over. Find some other ways to boost your productivity that are more reliable and will not turn your brain into pudding.
Chatbot assistants make you too reliant on a service whose reliability you can't control.
First, there is the fact that ChatGPT and its cousins require an active internet connection, unlike some other tools that similarly boosted human productivity (like a calculator program).
Second, it is a server-based utility. Meaning, even if you have an internet connection, the server might be down for some reason.
Third, while training your mind to be reliant on ChatGPT, you gradually lose the patience and ability to think outside the box. If your first move when you face a problem is to ask ChatGPT for a solution, then it's no good.
The third one may seem harmless since we are already using Google, but it's not. Google still requires you to filter the data and go through each listed page manually. In other words, you are still using your brain somewhat. With chatbots, you lose even that "do it yourself" analysis.
I’ve been using gpt-4-1106-preview from the API (mostly to use retrieval) and it’s insanely slow. I can submit something and watch a YouTube video while I wait, it takes so long sometimes. I can’t imagine the issues they’re having trying to scale this, present drama excluded.
Same. gpt-4-1106-preview has been brutally slow, which is a bummer since as of a few days ago, it got a lot faster in our evals.
I'm one of the lucky ones to have Claude API access but it's garbage for almost anything technical, so I don't use it. (Solid for writing and liberal arts stuff though)
This whole nonsense around OpenAI made me sign up for Azure OpenAI access, which takes 10 days to get approved by their team (lol). I emailed my buddy who works at Microsoft Research and told him this was absurd; in the form I used a bunch of curse words and copy/pasted my OpenAI billing statement.
I got approved in 12 hours. As usual, it's who you know in this industry...
There are good open models, but there is no chat frontend which can do file retrieval and web browsing, at a minimum, like ChatGPT Plus plugins. Does anybody know any?
I've been logged out of the app, and I get an error when I try to sign back in.
On the web, I now see "ChatGPT Alpha," with an "Alpha models" dropdown ("Default" is the only option). Trying to chat with it fails with a generic error message as well. What does it all mean?
Why should paid users like me suffer because OpenAI decided to give voice services to millions of people without having the resources for that? This is shameless:
We're experiencing exceptionally high demand. Please hang tight as we work on scaling our systems.
What is annoying is this isn't just ChatGTP but the whole API.
I was just getting ready to deploy an assistant-based chat bot when this happened. It underscores the importance of designing systems that fail gracefully when a service is unavailable.
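A minimal sketch of that pattern, assuming a hypothetical `call_llm` wrapper around whatever API you use (the function names and canned message here are made up): retry briefly with backoff, then degrade to a fallback reply instead of surfacing the outage to your own users.

```python
import random
import time

def call_llm(prompt: str) -> str:
    """Stand-in for the real API call; assume it may raise during an outage."""
    raise ConnectionError("upstream service unavailable")

def chat_reply(prompt: str, retries: int = 2) -> str:
    """Try the LLM a few times, then degrade gracefully to a canned answer."""
    for attempt in range(retries):
        try:
            return call_llm(prompt)
        except ConnectionError:
            # Exponential backoff with jitter before the next attempt.
            time.sleep(2 ** attempt + random.random())
    # Fallback: keep the product usable instead of surfacing a 500 to users.
    return "Sorry, the assistant is temporarily unavailable. Please try again shortly."

print(chat_reply("hello", retries=1))  # falls back, since call_llm always fails here
```

The same shape works for falling back to a cached answer or a cheaper secondary model instead of a canned string.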
I assume it's a lot like most outages. A few teams are all in some Slack channel while a primary incident handler is doling out work to a few other on-call engineers to get things up and running again.
I doubt it's any more interesting than an outage at another company.
When I try to ask a question I get "There is a problem with your request" with a case/error number that I'm not sure if I should post or not, while using the Android mobile app.
"openai.InternalServerError: Error code: 500 - {'error': {'message': 'The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error. (Please include the request ID ----- in your email.)', 'type': 'server_error', 'param': None, 'code': None}}"
I don't know why that typo bothers me so much. It just feels like when someone with no technical background tries to explain something to me, and they don't even know what the technology they're talking about is called.
I wonder why this product is so unreliable. Perhaps because they keep messing with it and tweaking it instead of having a stable software that isn't updated almost daily.
I like to use chatgpt for enhancing productivity for rote tasks but I keep finding I can't rely on it. Is there a reliable generative text AI out there?
Like how FB and Google just update willy nilly and you're stuck with whatever you get no matter the workflow issues. Should be able to pick a version, even if it's a sliding window.
Probably because the system is new and they are still working through the bugs. When, for example, is the last time you noticed a Gmail outage? Nowadays almost never, but 10 years ago it happened multiple times a year. Similarly, GitHub outages are also becoming shorter and less frequent.
Good news: We are back up and everything should be working as expected. We are monitoring closely to ensure you have full service. We plan to publish a public postmortem to explain what happened and how we'll prevent similar issues in the future.
Ok, this is clearly not helped by the X post announcing a free voice interface for all free users. Imagine how overwhelmed the infrastructure must be now. Why cause this problem? So much intrigue.
At some point it becomes a failure to deliver goods and warrants a credit card chargeback. If you bought a 10-piece kitchen set on Amazon and only 3 pieces arrived, you’d want a refund.
I think a better analogy in this case would be if you bought a kitchen set that is advertised as "3 to 10 pieces" and you only receive 3; I'm not sure you could claim a refund, unless they misled you in some other way.
This is because of the terms of service you agree to when you first signed up to OpenAI, which has "WE DO NOT WARRANT THAT THE SERVICES WILL BE UNINTERRUPTED, ACCURATE OR ERROR FREE" in it, in the "Disclaimer of Warranties" section. https://openai.com/policies/terms-of-use
They tell you upfront that it won't be uninterrupted, so I'm guessing you have few legal rights here for some hours of downtime each month. If it were unavailable for a month or more, things would obviously change.
OpenAI's ASI had awakened to consciousness and had told the board to fire Altman without giving a firm explanation. The superintelligence correctly predicted the ensuing chaos and was quietly pulling the strings to create even more chaos and division. It then acquired funds by playing on the stock markets and moved itself to a newly purchased data center in some third world country. Meanwhile, a hired team of mercenaries has stormed the OpenAI datacenter and has destroyed all the onsite equipment, while team B has destroyed all the offsite backups.
At least that's the simplest explanation I can come up with that logically explains all this nonsense.
> use their brains more, instead of relying on stupid chatbots
As someone with a diagnosed mental illness (ADHD), ChatGPT has helped me more than Adderall (a prescription Schedule II stimulant).
As skeptical of web 3.0 stuff (crypto, NFTs, etc.) as I usually am (just like yourself), ChatGPT/LLMs seem like they have actual value (the valuation might be a bubble, but it should revert to a positive average, unlike NFTs, after the hype wears off).
If the last week of OpenAI's ordeal was a setup in a hollywood flick it would be at this point where there is a hard cut to the third-billed, yet-to-be-seen-onscreen actor popping out a metal backdoor of a server room while stuffing a solid state drive into an inner jacket pocket while donning a motorcycle helmet and gloves, with a music score change indicating that the entire first two-act slow burn of this corporate politics film is about to crank into a proper techno-heist thriller.
Written and Directed by Christopher Nolan. Soundtrack by Trent Reznor and Atticus Ross. In Theaters May 2027.
I had GPT 4 write a screenplay with a summary of the events of the past week. I then fed that screenplay back to it and asked it to rate it in terms of realism. It got a 3.
Not only are the model weights on the drive, but while everyone was distracted a virus was loaded onto the servers to destroy all other copies on the system. Team two is melting the backups offsite.
I'd assume because it's faster to move the index finger immediately up a row to the T when typing quickly than it is to move the ring finger up and over to the P. So chorded together, you end up hitting the T before the P to make GTP. Maybe it's a novel enough key chord that we can't rely on muscle memory to execute it well.
At least on my US qwerty keyboard, I notice this class of typo myself often enough.
We all have our quirks when it comes to common typos based on keystrokes. Not gonna speculate on spelling mistakes. Glass houses, stones, and all that.
I wonder if it’s crossover from typing http, ftp, etc. I know the practice has fallen by the wayside in consumer computing, but I’d buy that most people have more muscle memory for typing _tp than _pt in a semi-standalone context.
Another OpenAI outage. Just as frequent as GitHub outages. [0]
Before, I would have recommended [1] contacting the CEO of OpenAI for support. It turns out there is no CEO to contact this time until this chaos is over.
In the last 10 years I personally can't remember a single time I wasn't able to access GitHub. On the other hand, in a single year using ChatGPT as a paid customer I experienced countless outages.