Hacker News

I made the “mistake” in an interview of equating two super-quadratic solutions. What I meant was what Dawson meant. It doesn’t matter because they’re both too ridiculous to even discuss.





They’re too ridiculous… unless a more optimal solution does not exist

Absolutely not.

If the cost of doing something goes above quadratic, you shouldn't do it at all. Because essentially every customer interaction costs you more than the one before. You will never be able to come up with ways to cover that cost faster than it ramps. You are digging a hole, filling it with cash and lighting it on fire.

If you can't do something well you should consider not doing it at all. If you can only do it badly with no hope of ever correcting it, you should outsource it.


Chess engines faced worse than quadratic scaling and came out the other side…

Software operates in a crazy number of different domains with wildly different constraints.


I believe hinkley was commenting on things that are quadratic in the number of users. It doesn't sound like a chess engine would have that property.

They did make it sound like almost anything would necessarily have n scale with new users. That assumption is already questionable.

There's a bit of a "What Computational Complexity Taught Me About B2B SaaS" bias going on here.


All of modern Neural Network AI is based on GEMM which are O(n^2) algorithms. There are sub-cubic alternatives, but it's my understanding that the cache behavior of those variants means they aren't practically faster when memory bound.

n is only rarely related to "customers". As long as n doesn't grow, the asymptotic complexity doesn't actually matter.


The GEMM is O(n^3) actually. Transformers are quadratic in the size of their context window.
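For anyone wondering where the cube comes from, here is a textbook sketch (illustrative only, nobody's production kernel): multiplying two n×n matrices the naive way is three nested loops over n, i.e. n^3 multiply-adds.

```python
def gemm_naive(a, b):
    """Textbook n x n matrix multiply: three nested loops over n,
    so n**3 multiply-adds in total. Real BLAS kernels block for
    cache, but the asymptotic operation count is the same."""
    n = len(a)
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]
    return c
```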

I read that as a typo given the next sentence.

I’m on the fence about cubic time. I was mostly thinking of exponential and factorial problems. I think some very clever people can make cubic work despite my warnings. But most of us shouldn’t. General advice is to be ignored by masters when appropriate. That’s also the story arc of about half of kung fu movies.

Did chess solvers really progress much before there was a cubic approximation?


> I read that as a typo given the next sentence.

Thank you for the courtesy.

> I think some very clever people can make cubic work despite my warnings.

I think you're selling yourself short. You don't need to be that clever to make these algorithms work; you have all the tools necessary. Asymptotic analysis is helpful not just because it tells us a growth rate, but also because it limits that growth to being in _n_. If you're doing matmul and n is proportional to the size of the input matrix, then you know that if your matrix is constant then the matmul will always take the same time. It does not matter to you what the asymptotic complexity is, because you have a fixed n. In your program, it's O(1). As long as the runtime is sufficient, you know it will never change for the lifetime of the program.

There's absolutely no reason to be scared of that kind of work, it's not hard.


Right, but back up at the top of the chain, the assertion was that if n grows as your company does, then IME you’re default dead. Because when the VC money runs out you can’t charge your customers enough to keep the lights on and also keep the customers.

I mean, the general advice is that if you actually understand what you're doing, then you don't need general advice.

That only matters when the constants are nontrivial and N has a potential to get big.

Not every app is a B2C product intending to grow to billions of users. If the costs start out as near-zero and are going to grow to still be negligible at 100% market share, who cares that it's _technically_ suboptimal? Sure, you could spend expensive developer-hours trying to find a better way of doing it, but YAGNI.


I just exited a B2B that discovered they had invested in luxury features and the market tightened its belt by going with cheaper and simpler competitors. Their n wasn’t really that high but they sure tried their damnedest to make it cubic complexity. “Power” and “flexibility” outnumbered “straightforward” and even “robust” by at least three to one in conversations. A lot of my favorite people saw there was no winning that conversation and noped out long before I did.

The devs voted with their feet and the customers with their wallets.


There are too many obvious exceptions to even start taking this seriously. If we all followed this advice, we would never even multiply matrices.

> no hope of ever correcting it

That's a pretty bold assumption.

Almost every startup that has succeeded was utterly unscalable at first in tons of technical and business ways. Then they fixed it as they scaled. Over-optimizing early has probably killed far more projects and companies than the opposite.


> That’s a pretty bold assumption.

That’s not a bold assumption it’s the predicate for this entire sidebar. The commenter at the top said some things can’t be done in quadratic time and have to be done anyway, and I took exception.

>> unless a more optimal solution does not exist

Dropping into the middle of a conversation and ignoring the context so you can treat the participants like they are confused or stupid is very bad manners. I’m not grumpy at you I’m grumpy that this is the eleventeenth time this has happened.

> Almost every startup

Almost every startup fails. Do you model your behavior on people who fail >90% of the time? Maybe you, and perhaps by extension we, need to reflect on that.

> Then they fixed it as they scaled

Yes, because you picked a problem that can be architected to run in reasonable time. You elected to do it later. You trusted that you could delay it and turned out to be right.

>> unless a more optimal solution does not exist

When the devs discover the entire premise is unsustainable, or nobody knows how to make it sustainable after banging their heads against it, they quickly find someplace else to be and everyone wonders what went wrong. There was a table of ex-employees who knew exactly what went wrong, but it was impolitic to say. Don’t want the VCs to wake up.


Not all n's grow unbounded with the number of customers. If anything, having a reasonable upper bound for how high an n you have to support is the more common case - and you're going to need that with O(n) as well.

I feel this is too hardline and e.g. eliminates the useful things people do with SAT solvers.

The first SAT solver case that comes to mind is circuit layout, and then you have a k vs n problem. Because you don’t SAT solve per chip, you SAT solve per model and then amortize that cost across the first couple years’ sales. And they’re also “cheating” by copy pasting cores, which means the SAT problem is growing much more slowly than the number of gates per chip. Probably more like n^1/2 these days.
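Back-of-the-envelope version of that amortization (all numbers hypothetical, just to show the shape of it):

```python
# One expensive solve per chip *model*, not per chip sold: the
# super-linear cost is paid once and amortized over the product run.
solve_cost = 5_000_000       # hypothetical one-time layout/verification spend
units_sold = 10_000_000      # hypothetical first-two-years sales volume
cost_per_unit = solve_cost / units_sold
assert cost_per_unit == 0.5  # pennies per chip, whatever the solver's complexity
```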

If SAT solvers suddenly got inordinately more expensive you’d go back to using humans - they used to do this work, until the solver got better and cheaper.

Edit: checking my math, looks like in a 15 year period from around 2005 to 2020, AMD increased the number of cores by about 30x and the transistors per core by about 10x.


That's quite a contortion to avoid losing the argument!

"Oh well my algorithm isn't really O(N^2) because I'm going to print N copies of the answer!"

Absurd!


What I’m saying is that the gate count problem that is profitable is in m³, not n³. And as long as m < n^(2/3), you are still within n² despite applying a cubic-time solution to m.
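The exponent algebra behind that bound, checked numerically (the sample values are arbitrary):

```python
# If m <= n**(2/3), then m**3 <= (n**(2/3))**3 == n**2, so applying a
# cubic algorithm to m keeps total cost within quadratic in n.
for n in [10, 1_000, 1_000_000]:
    m = n ** (2 / 3)
    # equality up to floating-point error at the boundary m == n**(2/3)
    assert abs(m ** 3 - n ** 2) <= 1e-9 * n ** 2
```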

I would argue that this is essentially part of why Intel is flagging now. They had a model of ever increasing design costs that was offset by a steady inflation of sales quarter after quarter offsetting those costs. They introduced the “tick tock” model of biting off a major design every second cycle and small refinements in between, to keep the slope of the cost line below the slope of the sales line. Then they stumbled on that and now it’s tick tick tock and clearly TSM, AMD and possibly Apple (with TSM’s help) can now produce a better product for a lower cost per gate.

Doesn’t TSM’s library of existing circuit layouts constitute a substantial decrease in the complexity of laying out an entire chip? As n grows you introduce more precalculated components that are dropped in, bringing the slope of the line down.

Meanwhile NVIDIA has an even better model where they spam gpu units like mad. What’s the doubling interval for gpu units?


Gaussian elimination (for square matrices) is O(n^3) arithmetic operations and it's one of the most important algorithms in any scientific domain.
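For reference, here is where the n³ lives (a minimal sketch with partial pivoting; real code would call LAPACK, which has the same asymptotic cost):

```python
def gauss_solve(a, b):
    """Solve a x = b by Gaussian elimination with partial pivoting.
    The nested i/j/k loops are the O(n^3) arithmetic."""
    n = len(a)
    a = [row[:] for row in a]  # work on copies
    b = b[:]
    for i in range(n):
        # partial pivoting: bring the largest remaining pivot up
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        a[i], a[p] = a[p], a[i]
        b[i], b[p] = b[p], b[i]
        for j in range(i + 1, n):
            f = a[j][i] / a[i][i]
            for k in range(i, n):
                a[j][k] -= f * a[i][k]
            b[j] -= f * b[i]
    # back substitution is only O(n^2)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(a[i][k] * x[k] for k in range(i + 1, n))) / a[i][i]
    return x
```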

I’ll allow that perhaps I should have said “cubic” instead of “quadratic” - there are much worse orders in the menagerie than n^3. But it’s a constraint we bang into over and over again. We use these systems because they’re cheaper than humans, yes? People are still trying to shave off hundredths of the exponent in matrix multiplication for instance. It makes the front page of HN every time someone makes a “breakthrough”.

So, how would you write a solver for tower of Hanoi then? Are you saying you wouldn't?
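For what it's worth, the solver itself is trivial to write; the catch is that the answer is 2^n − 1 moves, so merely printing the output is exponential:

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Classic recursive Tower of Hanoi solver. Returns the full move
    list, which is necessarily 2**n - 1 entries long."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, dst, aux, moves)  # park n-1 disks on the spare peg
        moves.append((src, dst))            # move the largest disk
        hanoi(n - 1, aux, src, dst, moves)  # stack the n-1 disks on top
    return moves
```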

As a business? Would you try to sell a product that behaved like tower of Hanoi or walk away?


