Thanks Artem! That makes sense to specialize for the biggest customers. Yes, a lot of problems in the world would be improved by better search, including dating.
It must be heartening for a startup trying to build the best general search engine in the world to know that Google has absolutely no interest in competing with you.
Because Google makes money from ads, they're not actually optimized to build the best general search engine in the world; they're optimized to build the search engine that makes the most from ads, which is correlated with being a good search engine but not perfectly aligned. Our business model (paying directly for the search) incentivizes us to return the highest quality results, without any bias toward ad revenue. It also lets us do things like pour a ton of compute/resources into a single query to get the best possible results we can find, because someone would pay us a lot for that, and that's hard to do under an ads-based model.
Can you provide more information (or links) about that billing model you describe?
The incentive structure behind paying by the search has diminishing returns, as I see it. You need the results to be of a high enough quality to drive the user to want to run another search with you. Beyond that point, though, in the absence of a direct competitor, where is the incentive for you to continue improving search result quality?
Yeah, we'd love to make the product as accessible and cheap as possible, but given the state of AI costs in 2025, it's a very expensive product to run, so we have it login-gated. If you're willing to log in, though, you'll find a lot of the features you're mentioning :)
Websets are cool - I remember that two decades ago there was a project in Google Labs that tried to return Google search results as 'objects' x 'properties', but it never left their research sandbox (I can't remember the project's name, unfortunately).
Searches that return tabular results can be cheap if you already have structured datasets (extracted from crawled data): an LLM can simply convert the user's natural-language query into a SQL (or SQL-like) query, which can then be executed cost-efficiently, say with DuckDB - a rough sketch is below. This approach can also give more correct results, since values in these structured datasets can be validated in the background rather than as part of an individual 'deep research' task.
I understand that this is another kind of search service; still, it could be a way to offer free/cheap searches for users who don't need expensive individual research tasks.
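To make that concrete, here's a minimal sketch of the idea, assuming a pre-extracted products table and treating the natural-language-to-SQL step as a placeholder (the table, its columns, and the nl_to_sql function are made up for illustration):

```python
# Sketch: answer a natural-language query over a pre-extracted structured
# dataset with DuckDB. The schema and nl_to_sql() are hypothetical; in a
# real system an LLM would generate the SQL from the user's question.
import duckdb

con = duckdb.connect()  # in-memory database

# Pretend this table was built offline from crawled/extracted data.
con.execute("""
    CREATE TABLE products AS
    SELECT * FROM (VALUES
        ('Acme Laptop 14', 'laptop',  899.00, 4.3),
        ('Acme Laptop 16', 'laptop', 1199.00, 4.5),
        ('Beta Tablet 10', 'tablet',  349.00, 4.1)
    ) AS t(name, category, price_usd, rating)
""")

def nl_to_sql(question: str) -> str:
    """Stand-in for the LLM step that maps a question to SQL
    against the known schema."""
    # For "laptops under $1000, best rated first" an LLM might emit:
    return """
        SELECT name, price_usd, rating
        FROM products
        WHERE category = 'laptop' AND price_usd < 1000
        ORDER BY rating DESC
    """

rows = con.execute(nl_to_sql("laptops under $1000, best rated first")).fetchall()
for name, price, rating in rows:
    print(f"{name}: ${price} (rating {rating})")
```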
Hm! I'll try this out. Sometimes info like price is hard to parse out because the data may be on ecommerce-style websites that have a lot of crawling protections.