Hacker News: drag0s's comments

nice! would exposing this as a tool for claude code improve performance when taking a deep breath?


Why connect it to Claude, when it already connects you to the universe through your breath?


Maybe the idea is to let Claude take a breath.


love this as someone who's been fixing the same billing bugs over and over and who sometimes finds stripe more complex than it should be. will make sure to try this on my next adventure.

btw, if you still want to go directly with stripe, here are some recommendations/notes I generally agree with:

https://github.com/t3dotgg/stripe-recommendations


Thank you! We try to take care of most of these bugs and edge cases. I think the ones that have been most useful are:

1. Race conditions. There are some weird conditions to handle around a user making it back to your app post-payment before the webhook arrives, or accidentally clicking twice on a purchase button.

2. Keeping usage reset cycles in sync with billing cycles. We had a bunch of weird cases to solve in February as it's a shorter month.

3. Handling annual plans that have monthly usage billing cycles. Or just handling anything to do with transitioning between monthly and annual billing.

Theo's approach is awesome and a very similar architecture to what we have.
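For the double-click and webhook-vs-redirect races in point 1, here's a minimal sketch of the usual idempotency pattern. The in-memory Sets stand in for a real database, and `grantAccess`, `handleWebhook`, and the event shape are illustrative names, not Autumn's or Stripe's actual API:

```typescript
// Sketch of webhook-vs-redirect race handling with idempotent writes.
type PaymentEvent = { id: string; sessionId: string };

const processed = new Set<string>();    // event ids we've already handled
const paidSessions = new Set<string>(); // sessions known to be paid

function grantAccess(sessionId: string): void {
  paidSessions.add(sessionId); // idempotent: a Set ignores repeated adds
}

// Webhook path: dedupe on the event id so a retried or duplicate
// delivery (e.g. from a double-click creating two events) is a no-op.
function handleWebhook(event: PaymentEvent): boolean {
  if (processed.has(event.id)) return false; // already handled, skip
  processed.add(event.id);
  grantAccess(event.sessionId);
  return true;
}

// Redirect path: the user may land back in the app before the webhook
// arrives, so verify payment status at the source of truth and perform
// the same idempotent write instead of waiting on the webhook.
function handleRedirect(sessionId: string, verifiedPaid: boolean): boolean {
  if (verifiedPaid) grantAccess(sessionId);
  return paidSessions.has(sessionId);
}
```

Because both paths converge on the same idempotent write, it no longer matters which one wins the race.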


one example where non-thinking matters would be latency-sensitive workflows, for example voice AI.


Correct, though pretty much anything end-user-facing is latency-sensitive; voice is a tiny percentage. No one likes waiting, and the involvement of an LLM doesn't change this from a user's PoV.


I wonder if you can hide the latency, especially for voice?

What I have in mind is to start the voice response with a non-thinking model, say a sentence or two in a fraction of a second. That will take the voice model a few seconds to read out. In that time, you use a thinking model to start working on the next part of the response?

In a sense, very similar to how everyone knows to stall in an interview by starting with 'this is a very good question...', and using that time to think some more.
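The two-stage idea above can be sketched as a pair of concurrent calls, with timers simulating the model latencies. All names and timings here are made up for illustration; real model and TTS APIs would look different:

```typescript
// Sketch of latency hiding: speak a fast model's opener immediately
// while a slower "thinking" model produces the substantive reply.
const sleep = (ms: number) => new Promise<void>(r => setTimeout(r, ms));

async function fastOpener(): Promise<string> {
  await sleep(50); // a non-thinking model: near-instant first sentence
  return "That's a great question.";
}

async function thoughtfulAnswer(): Promise<string> {
  await sleep(300); // a thinking model: slower but better
  return "Here is the considered answer.";
}

// Kick off both at once; the opener fills the silence while the
// slow answer is in flight, so the perceived latency shrinks.
async function respond(speak: (s: string) => void): Promise<void> {
  const slow = thoughtfulAnswer(); // start thinking immediately
  speak(await fastOpener());       // stall, interview-style
  speak(await slow);               // most of the wait is now hidden
}
```

The key detail is starting the thinking model *before* awaiting the opener, so the two latencies overlap instead of adding up.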


I'm still using Next.js in my work and projects because I still think it may be the best way to ship React to production, but it used to be fun, enjoyable, and productive. Sometimes I feel a bit sad about the direction it's been going since the move from the pages router to the app router.


The best way to ship React to production is with Vite. It opens up tons of options (Tanstack, RR, Simple SPA, whatever) and you don't even bring the hosting provider into the discussion.


This. I spent quite some time fighting the new Next.js conventions, which didn't work for me when building a legit web app instead of a traditional site. I switched to Vite and was like, yay, things work again, and so fast. Normally I'm all about embracing the framework, but I kept thinking that for what I was doing I could have used PHP instead and hosted anywhere.


Curious, anything specific that you'd highlight compared to a setup like Remix that makes it easier to ship with Next?


RSC


Well, they have consistent naming for a start.


I love this! I think it would be even better to have a React Native SDK available and the ability to lock/unlock screen time via the API.


English sounds really great, congrats! The other languages I've tried don't sound that good; you can hear a strong English accent.


With Italian, it starts reading the text with an absolutely comical American accent, but then about 10-20 words in it gradually snaps into a natural Italian pronunciation and it sounds fantastic from that point on. Not sure what's going on behind the scenes, but it sounds like it starts with an en-us baseline and then somehow zones in on the one you specified. Using Alice.


the Italian example with mixed languages is especially bad: the Italian, German, Japanese, and Arabic all have very, very heavy English accents.

The "dramatic movie scene" ends up being comical.

I tried Greek and it started speaking nonsense in English.

this needs a lot more work before it can be sold


The French one sounded like an Alabaman who took a semester of college French.

But the English sounds really good.


If you're trying to make an audiobook about an Alabaman visiting Paris this might be quite useful... But in seriousness try it with this voice: https://elevenlabs.io/app/voice-library?voiceId=rbFGGoDXFHtV...


I'll give it a check. I was playing the sample on the v3 page.


For Portuguese, interestingly enough one of the voices (Liam) has a Spanish accent. Also, the language flag is from Portugal, but the style is clearly Brazilian Portuguese.


Can you try with a voice that was trained on that language? This research preview is more variable based on the voice chosen


Swedish is just wholly American.


German sounds okay.


There are lots of great German voices here which should be better: https://elevenlabs.io/app/voice-library/collections/SHEPnUB9...

The voice selection matters a lot for this research preview


Not a native speaker by any stretch, but all the voices sounded like 'intercom announcer' or 'phone assistant' to me. Not natural in the slightest.


I tried German in the preview box there, and it had a very strong English accent.


I listened to a story about dragons.

It sounded okay. Only in the middle somewhere, the loudness seemed to change drastically.


nice!

it reminds me of this other similar project showcased here one month ago https://news.ycombinator.com/item?id=43280128 although yours looks better executed overall


one of the things I miss in iOS coming from Android is being able to easily disable NFC or location :/


interesting use-case. if you read the small text carefully, you can still notice some artifacts though


I really like the game!

looks like the keyboard doesn't work that well on mobile though (iOS). You need to press the key below the one you want to type (e.g. if I want to type T, I need to press G for it to work)


Ugh, I've run into this issue as a developer before; IIRC it was something about the height of the bottom browser bar not being accounted for in touch targets. I think I was using the Canvas API when I hit it.
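If the cause really is the browser-bar offset, the usual fix is to correct touch coordinates by the visual viewport's offset before hit-testing against the canvas. This is a sketch under that assumption; the function name and the offset direction are illustrative, not from the game in question:

```typescript
// Sketch: map a touch's clientY into canvas hit-testing space when the
// canvas layout and the visual viewport disagree by the collapsed
// browser-bar height. Pure function so the math is easy to check.
function toCanvasY(touchClientY: number, viewportOffsetTop: number): number {
  // If hit-testing was done against the layout viewport while the touch
  // coordinate is relative to the visual viewport, add the offset back.
  return touchClientY + viewportOffsetTop;
}

// In a browser you'd feed it the live value, guarding for older engines:
//   const offsetTop = window.visualViewport?.offsetTop ?? 0;
//   const canvasY = toCanvasY(event.touches[0].clientY, offsetTop);
```

Whether the offset is added or subtracted depends on which coordinate space the game's hit-testing actually uses, which is exactly the subtlety that makes this bug so easy to ship.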


I did not have this issue on iOS FWIW, so there must be something more subtle going on for this bug.

