I was annoyed by the default OpenAI ChatGPT UI's slow-typing animation and by it logging me out every day, so I built a new UI client on top of its API.
It's a static web app: you can host it yourself, data is stored locally in your browser, and API requests are made directly to the OpenAI API without any middleman server.
It also has some convenience features that make the experience way better, like chat history search, a prompt library, integrations, etc.
Note for anybody else wondering: it's not open source, and you have to download a binary. More in the FAQ, by clicking the Gift (?) icon at the top and scrolling down.
I'm not defending anyone here, but just strace the program if you're really worried about it. If you don't know what strace is: it logs every syscall the program makes, so you'd see if it's doing anything suspicious.
Well, I don't see why not. I also don't see which issues you mean if you're not using it, since you couldn't encounter any to begin with. Being closed source is not an issue.
No, it's real. According to dang, Show HN is supposed to be a "safe space": you could make absolute crap and everyone is supposed to be positive about it. Dang will flag people who aren't.
There's nothing dang can do, though, if hundreds of people hate it. Things with actual security issues, for example, he'll likely just stay quiet about.
That is unfortunate. You would think the intent behind posting a Show HN is to get constructive feedback, but based on what you described, the expectation is apparently to make the creators feel good instead.
It is open source. Your browser retrieves all the JS and HTML sources. And even if it were somehow closed source, it wouldn't really matter: you can view the web requests and so on, and see for yourself.
All the logic is client-side in the downloaded JS and HTML. After the initial page load it should send nothing to the website's backend, only API calls to the ChatGPT API itself. There is no actual backend with logic; it only serves static HTML, JS, and assets.
Theoretically, you should be able to block any outgoing connections to the website's domain (after it has loaded all the necessary HTML and JS) and it should still work fine. I say "theoretically" because it might lazy-load some static assets, but you get the idea. I have no idea how this specific website behaves, as I haven't looked into it deeply yet, but that's the principle behind "static websites" like this one.
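To make the principle concrete, here is a minimal sketch of what such a client-only call looks like; the endpoint and request shape are the public OpenAI chat completions API, while the model name and the localStorage key handling are just assumptions about how an app like this might work:

```typescript
// Everything below runs in the browser; after page load, the only network
// traffic is to api.openai.com, never to the site that served the HTML/JS.
const OPENAI_KEY = localStorage.getItem("openai_api_key") ?? ""; // assumed storage scheme

async function ask(prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${OPENAI_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo", // assumed model name
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`OpenAI API error: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

If the app really works this way, the hosting domain only ever serves the page itself, which is why blocking it after load shouldn't break the actual chat calls.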
The ability to reverse engineer the source through the browser doesn't make something open source any more than reverse engineering a binary with some other tool does.
I can view web requests, but let's say on the 365th web request it sends my API key to an unknown location. How would I know, if this were closed source?
> I was annoyed by the default OpenAI ChatGPT slow-typing animation
I'd assumed ChatGPT's slow typing was actually the model's real-time output, streamed to the browser token by token. When I've made GPT-3 API calls it's worked like this; with streaming switched off I might need to wait ~20 seconds for a long response to come back.
I just tried your UI, and it’s so much faster! Was my assumption about real-time output wrong?
Great project, looking forward to using it more :)
ChatGPT's paid version is extremely fast, and this is using the paid API. The paid API feels slower than the paid UI, but that may be the difference between streaming results and getting everything in one payload.
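If it is the streaming difference, here's roughly what it looks like against the chat completions endpoint; a minimal sketch, assuming Node 18+ or a browser for the global fetch, and gpt-3.5-turbo as the model. The request is identical to a normal one-payload call except for "stream": true; the reply then arrives as server-sent-event chunks you can render as they come in, instead of one JSON blob at the end:

```typescript
// Streaming sketch: same endpoint as a regular chat completion, but with
// "stream": true the tokens arrive as "data: {...}" SSE lines.
async function streamChat(prompt: string, apiKey: string, onToken: (t: string) => void) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      stream: true, // the only difference from the one-payload request
      messages: [{ role: "user", content: prompt }],
    }),
  });

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any partial line for the next chunk
    for (const line of lines) {
      if (!line.startsWith("data: ") || line.includes("[DONE]")) continue;
      const delta = JSON.parse(line.slice("data: ".length)).choices[0].delta;
      if (delta.content) onToken(delta.content); // usually a token or two at a time
    }
  }
}

// e.g. streamChat("hello", key, t => process.stdout.write(t));
```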
I think the slow responses are the real generation rate when you time-share the GPU. If you're willing to let GPUs occasionally sit idle, though, you can probably get a faster generation rate by dedicating more of the GPU time to that one client.
Yep, I was surprised about the response time too. I'm not sure what's going on under the hood; maybe OpenAI did something with the stream output that causes the slowness.
Perhaps it's simply a way to limit the number of queries? If you force people to wait 10s for an answer, they aren't writing new questions all the time.
For me it's been consistently the case that after 2-3 messages the animation would start to visibly lag, and eventually (on the 3rd or 4th reply) slow to a halt, sometimes mid-sentence, completely bricking the session. It's been like this since ChatGPT was first opened to the public. I initially assumed it was just the usual case of a bloated SPA, but now I think maybe I'm being throttled on purpose?
I know mate, haha. I imagine someone could do much better than I did in a few minutes, but yeah, use the UI that already exists instead of sprinkling Electron on something text-based.
If anyone fancies a tinker, I imagine future editions of a CLI AI having a --context to include former prompts*, a --stfu|--concise to make it respond concisely (as I was originally just going to reply "I know mate, haha", I also need this flag adding), probably a --model= to override that bit too, and maybe even a --preamble=./prompt.txt
... it definitely needs a --json flag to prompt and enforce "I only want json and I'll parse it out of the response myself if I need to"
* Not just keeping a buffer and re-sending it; I believe context asks GPT to summarise the conversation so far and then uses the output of that for context memory. It'd need some tinkering.
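For anyone who does fancy that tinker, here's a rough sketch of how those flags and the summarise-for-context idea could hang together. This is hypothetical: the tool, flag names, history-file location, and summarisation prompt are all placeholders, and it assumes Node 18+ (for the global fetch) with OPENAI_API_KEY set in the environment:

```typescript
#!/usr/bin/env node
// Hypothetical CLI sketch only, not an existing tool.
import { readFileSync, writeFileSync, existsSync } from "node:fs";
import { homedir } from "node:os";

const HISTORY_FILE = `${homedir()}/.gptcli_history.json`; // assumed location

interface Msg { role: "system" | "user" | "assistant"; content: string; }

function parseArgs(argv: string[]) {
  const opts = { context: false, concise: false, json: false, model: "gpt-3.5-turbo", preamble: "" };
  const rest: string[] = [];
  for (const a of argv) {
    if (a === "--context") opts.context = true;
    else if (a === "--concise" || a === "--stfu") opts.concise = true;
    else if (a === "--json") opts.json = true;
    else if (a.startsWith("--model=")) opts.model = a.slice("--model=".length);
    else if (a.startsWith("--preamble=")) opts.preamble = readFileSync(a.slice("--preamble=".length), "utf8");
    else rest.push(a);
  }
  return { opts, prompt: rest.join(" ") };
}

async function chat(model: string, messages: Msg[]): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model, messages }),
  });
  return (await res.json()).choices[0].message.content;
}

async function main() {
  const { opts, prompt } = parseArgs(process.argv.slice(2));
  const history: Msg[] = existsSync(HISTORY_FILE)
    ? JSON.parse(readFileSync(HISTORY_FILE, "utf8"))
    : [];
  const messages: Msg[] = [];

  if (opts.preamble) messages.push({ role: "system", content: opts.preamble });
  if (opts.concise) messages.push({ role: "system", content: "Answer as concisely as possible." });
  if (opts.json) messages.push({ role: "system", content: "Respond with valid JSON only, no prose." });

  // --context: rather than replaying the whole buffer, ask the model to
  // summarise the saved history and feed the summary back in as cheap "memory".
  if (opts.context && history.length > 0) {
    const summary = await chat(opts.model, [
      ...history,
      { role: "user", content: "Summarise this conversation so far in a few sentences." },
    ]);
    messages.push({ role: "system", content: `Context from earlier: ${summary}` });
  }

  messages.push({ role: "user", content: prompt });
  const answer = await chat(opts.model, messages);
  console.log(answer);

  // Append the new exchange so --context has something to summarise next time.
  writeFileSync(HISTORY_FILE, JSON.stringify([
    ...history,
    { role: "user", content: prompt },
    { role: "assistant", content: answer },
  ]));
}

// e.g.: ts-node gptcli.ts --concise "is it defence or defense"
main().catch(console.error);
```

The nice side effect of summarising rather than replaying the buffer is that the per-request token cost stays roughly flat instead of growing with the conversation.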
I'll get round to it myself eventually; for now, being able to ask little things like "is it defence or defense" and "what company owns the coco pops brand" is doing me just fine.
I did realise I can now potentially pipe things into and out of GPT too, but have yet to come up with a use other than outputting through TTS e.g.
I'm also building a personal ChatGPT client for a similar reason to the OP's, which is to address the buggy and slow UX, among other issues. As you mentioned, many people are currently working on projects using the ChatGPT API. In fact, in the past 24 hours, around 12 new apps have been showcased on the OpenAI Discord channel alone. Some of these apps are open source, which makes them a great source of learning and inspiration for me.
It's typically in the form of access to features that are reserved for higher-paying "enterprise" customers. Cloudflare's MVP program does this, for example.
What is the tech stack for this? I really want to build small apps like this and have a diversified portfolio, but I don't know where to start. Are you still earning 3K/month from this, and what kind of ongoing marketing are you doing?
You might want to play around with the encoding of your videos. On a Windows machine with an i7-8770k (and a dedicated GPU) running the latest Firefox, the page was spiking up to 100% GPU usage, causing the browsing experience to slow to a crawl. It may be related to me having a Twitch stream up on a second display.
Probably a Firefox bug, but it's preventing me from looking at this landing page that everyone else seems to like :P
Wow, this is interesting: you write "read like snapper" on the homepage. I made a screenshot tool for iOS that is actually called Snapper. I have been working on/maintaining it since 2014!
Yes, definitely; over the years I've seen a few names with 'snap' as a base. It makes sense. I also tried to get snapper.com a few times (although for my niche audience a website is not required at all). As for the name, I read it as ex-napper.