
I've had strong tinnitus in one ear for 8 years now, after an ear surgery. I usually don't notice it for months at a time, even though it's there all the time (thanks for reminding me :p), so it's not as bad as it might feel in the beginning. I'm mostly bothered by my hearing being generally impaired by it. The tone sits at ~9kHz, but it somehow still makes it significantly harder to comprehend voices.

How did you come up with the ~9kHz number? I want to know my own lol

I had mine measured a few years ago. I was given headphones in a quiet room, with a low-frequency sine wave playing. Whenever I said I could hear the tone clearly, they increased the frequency. At some point the tone blends with your tinnitus tone; that's the whole magic. They then measure the matching volume in dB, just by increasing/decreasing the tone's volume.

I'm on 12kHz, vacuum cleaner level.


Just search online for tinnitus tone generator tests.
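
If you want to tinker yourself, here is a minimal sketch of that matching procedure using only Python's standard library (the 9kHz starting point, filename, and 3-second duration are just illustrative; this is a toy, not a clinical test). It writes a pure sine tone to a WAV file; re-run it with different frequencies until the tone blends with your tinnitus:

    import math
    import struct
    import wave

    def write_tone(path, freq_hz, seconds=3.0, volume=0.3, rate=44100):
        """Write a mono 16-bit sine tone at freq_hz to a WAV file."""
        with wave.open(path, "w") as w:
            w.setnchannels(1)      # mono
            w.setsampwidth(2)      # 16-bit samples
            w.setframerate(rate)
            for i in range(int(seconds * rate)):
                sample = volume * math.sin(2 * math.pi * freq_hz * i / rate)
                w.writeframes(struct.pack("<h", int(sample * 32767)))

    write_tone("tone_9000hz.wav", 9000)  # adjust 9000 up/down between listens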

I think this is the frontier when it comes to "unstructured":

https://youtu.be/nmEy1_75qHk

They for sure did not anticipate that the user would backflip into their robot and knock it (and himself) out :D


I didn't know Ruby Rhod's great-grandfather was already alive.

How did I know this was gonna be Speed lol

Theoretically, when the market offers me an order book and I take offers on one side or the other, that should be totally fair, right? I think until execution/fill the information should be entirely between me and the exchange and no one else. I get that if I send a limit order that cannot be filled, that affects the market, because new information is introduced (before the trade), but in the previously described case, all the information going out should come after the trade has already happened, right?

Sure, if you want to cross the spread you can usually get a clean fill in exchange for a bit more cost. That said, a "fair" price is roughly synonymous with a midpoint fill, and if you have a proper execution route you can get the ask (smart algo peg orders, for example).

There is a caveat though, which is that top-of-book liquidity gets thinner every year. It doesn't take much size to hit the bid, take out the first thin onion layer of liquidity, and have the spread widen away from you. If you look at live order book depth, you will see that the top of book is often thin and flittering. The deeper liquidity will react to the top levels getting cleared before you can blink. (That's why, if you have a non-small order and want the bid price, you should sweep the bid and go a few cents under; you will get a much more reliable fill and won't be left hanging with the liquidity instantly repositioned a sub-penny below you.)
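
A toy sketch of that sweep idea (the book format, the sell-side framing, and the 3-cent pad are all made up for illustration; real routing is venue- and broker-specific): walk the bid levels until your size is covered, then price the limit a few cents through the last level touched:

    def sweep_limit_price(bids, qty, pad=0.03):
        """bids: [(price, size), ...] sorted best (highest) first.
        Returns a sell limit price a few cents under the last level needed
        to absorb qty, so repositioning liquidity can't strand the order."""
        remaining = qty
        last_price = bids[0][0]
        for price, size in bids:
            last_price = price
            remaining -= size
            if remaining <= 0:
                break
        return round(last_price - pad, 2)

    # Thin top of book: 200 shares at the best bid, more depth below.
    book = [(99.98, 200), (99.95, 500), (99.90, 1500)]
    print(sweep_limit_price(book, 600))  # -> 99.92 (two levels swept, minus pad)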


Setting an alarm is a good example of an "output only" task. The more inputs that need to be processed, the less well a pure chatbot interface works (think lunch bowl menus, shopping in general, etc.).

Did they still not release "Bring your own subscription" / "Login with ChatGPT", letting people apply their subscription/quota to other apps/services? There are so many use cases where someone builds a useful scaffold (e.g. even a static website) that could benefit from integration with LLMs, but rolling your own authentication, API keys, free budgets, etc. is scary. It would be much better if that could be outsourced to the user's LLM provider.

It would create a nice lock-in effect for OAI as well.
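
To make the idea concrete: such a flow would presumably look like a standard OAuth 2.0 authorization-code exchange, except the resulting token draws on the user's own subscription quota. None of these endpoints exist today (that's the point of the complaint); every URL and field below is hypothetical:

    import requests

    # All URLs are made up; no LLM provider offers this flow today.
    TOKEN_URL = "https://llm-provider.example/oauth/token"
    CHAT_URL = "https://llm-provider.example/v1/chat"

    def exchange_code_for_token(code, client_id, client_secret, redirect_uri):
        """Swap the one-time auth code for a token tied to the USER's quota."""
        resp = requests.post(TOKEN_URL, data={
            "grant_type": "authorization_code",
            "code": code,
            "client_id": client_id,
            "client_secret": client_secret,
            "redirect_uri": redirect_uri,
        })
        resp.raise_for_status()
        return resp.json()["access_token"]

    def ask_llm(token, prompt):
        """The request is billed against the user's own subscription, so the
        app never touches API keys, per-request billing, or free budgets."""
        resp = requests.post(CHAT_URL,
                             headers={"Authorization": f"Bearer {token}"},
                             json={"messages": [{"role": "user", "content": prompt}]})
        resp.raise_for_status()
        return resp.json()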


That's like the opposite of what OpenAI would really want though.

They want people walking up and saying "we commit to spending $100k monthly for the next year; let's discuss bulk discounts."


Neither Anthropic nor OpenAI wants to enable their fully self-service, automated, and scalable enterprise feature flags for only, say, 50 seats each burning $3k a month.

It's particularly annoying to see these feature flags exist in the UI; the features are automated, yet they won't let you select a plan above their "Team"-level plan to use them.

(Over on the API side, things are more usage-based, and it's easy to hit the highest tier from zero within about 10 weeks, but certain "talk with support because we probably have to edit a config file we don't let the riffraff see there's a setting for" features, like flipping off a safety flag so the LLM won't refuse to analyze a contract, are similarly gated. That seems more reasonable than gating infosec.)

On the end-user app side, table-stakes data security features, some of which used to be included for lower-level accounts and then just got toggled off, are gated behind somewhere on the order of 150 seats at greater-than-Max usage, instead of 25 or 50 seats.


Would be great if they provided at least some guidance on how to keep this thing on topic. Even the official demo https://chatkit.world/ is not restricted; it happily chats about whatever.

Oh man this thing tells me my iPhone 13 mini isn’t wide enough to use it, that’s rough.

It doesn’t work on my iPhone 17 Pro Max either and when I turn it to landscape it’s buggy and doesn’t work. Great demo…

Doesn't work on my Samsung S25 either.

OpenAI wants all these services to pay per request. That is their entire business model. There is no incentive for them to fold everything into the same $20/mo subscription (which they are most definitely selling at a loss).

It might meaningfully change the business model of LLM businesses. It becomes much harder to universally charge $30/mo subscriptions when the user is bringing their own API key.

It was 4x over the original version IIRC, and the previous one was 2x, so this should be ~2x over the previous.


Imagine regulators doing their job for once and creating a clean regulation that removes the uncertainty about liability for such releases, such that they could just slap Apache or MIT on it, call it a day, and not be required to collect personal data to comply with an "acceptable use policy".


I think modularization of templates is really hard. The best thing I can think of is a cache, e.g. for signatures. But then again, that's basically what name mangling already does anyway, in my understanding.


Not just signatures; you need the whole AST, more or less.

This seems incredibly wasteful, but of course it's still marginally better than just #including code, which is the alternative.


You know what's a glorified Markov chain? The universe.
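
(For anyone who hasn't seen one: a Markov chain in this sense just samples the next state from a distribution conditioned only on the current state. A minimal word-level sketch, with a made-up training string:)

    import random
    from collections import defaultdict

    def build_chain(text):
        """Map each word to the list of words observed right after it."""
        chain = defaultdict(list)
        words = text.split()
        for cur, nxt in zip(words, words[1:]):
            chain[cur].append(nxt)
        return chain

    def generate(chain, start, n=10):
        """Next word depends only on the current word -- that's the whole trick."""
        out = [start]
        for _ in range(n):
            options = chain.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))
        return " ".join(out)

    chain = build_chain("the universe is a state machine and the universe evolves")
    print(generate(chain, "the"))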


For me it all made sense when I heard that IQ/g-factor basically vanishes in the absence of time pressure (I heard it from Richard Haier on Lex).

For a very narrow range of professions, like ATCs, time is absolutely critical, but for most it does not really matter that much, especially in many STEM fields. I think people in a broad IQ range can build abstractions and acquire intuitions about pretty complex matters. From this viewpoint, the ability to concentrate for long stretches, curiosity, etc. seem more important than "raw compute".

"if you value intelligence above all other human qualities, you’re gonna have a bad time" - Ilya

Timeless statement imo, even in the absence of AI


> For me it all made sense when I heard that IQ/g-factor basically vanishes in the absence of time pressure (I heard it from Richard Haier on Lex).

That cannot be true, as there are valid IQ tests that don't have a time component, and people don't all score the same on those. He must have meant something different from what you think.

For example, Raven's matrices were originally an untimed test; how can that be if there is no g-factor in untimed tests?

