Hacker News

What pricing would feel right to you?


I'm far from an expert on such things, but I like the adjustable fixed-price model (like DigitalOcean).

Whenever I see a price like $0.0002 per hour, I feel like they're trying to mess with my intuition.

That's especially applicable to data science, because you're not so worried about automatic scaling; you just don't want to be surprised at the end of the month.

I don't know if you can match DigitalOcean's prices (they offer managed Postgres, for a fair comparison), but if you can get close then you have a chance.


I was thinking something based on size or performance.

1 req per second: free.

50 req per second: 5€.

Unmetered: 30€.

Maybe metering not per second but per hour or per day, so as to accommodate bursts.
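Just as a sketch of how that burst accommodation could work (hypothetical numbers, not anyone's actual implementation), a token bucket enforces the tier's average rate over a longer window while letting short bursts through:

```python
import time

class TokenBucket:
    """Hypothetical limiter: enforces an average rate but allows bursts."""
    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec       # tier's average allowed req/s
        self.capacity = burst          # max burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens at the average rate since the last request.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# e.g. the 50 req/s tier, allowing bursts of up to 500 requests
limiter = TokenBucket(rate_per_sec=50, burst=500)
allowed = sum(limiter.allow() for _ in range(600))
print(allowed)  # roughly the first 500 pass; the rest are throttled
```

With a big enough bucket this behaves like per-hour or per-day metering: a notebook user can fire off a burst of queries and only the long-run average is capped.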

Another option would be pricing on features. But that will require time.


If you have full support for SQL, it means some queries can run for a few seconds, possibly even minutes (even SQLite supports some form of BFS). So will you just time out those requests? That would make a lot of possible uses suddenly impossible.
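To make the point concrete (a minimal sketch): SQLite's recursive CTEs can express BFS-style graph traversals, and on a large table such a query can legitimately run for seconds or minutes, so a hard per-request timeout would kill it.

```python
import sqlite3

# Tiny graph stored as an edge table; on millions of edges the
# recursive query below can run far longer than a typical timeout.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edges (src INTEGER, dst INTEGER)")
conn.executemany("INSERT INTO edges VALUES (?, ?)",
                 [(1, 2), (2, 3), (3, 4), (1, 4)])

# BFS-style reachability from node 1 via a recursive CTE.
rows = conn.execute("""
    WITH RECURSIVE reachable(node) AS (
        SELECT 1
        UNION
        SELECT e.dst FROM edges e JOIN reachable r ON e.src = r.node
    )
    SELECT node FROM reachable ORDER BY node
""").fetchall()
print([n for (n,) in rows])  # -> [1, 2, 3, 4]
```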

I might consider the unmetered option, if the performance gain justifies paying 6x the cost of other providers.

For web and games, maybe your existing model could work, but then always having a recent backup also becomes more important (and for most sites, the lack of ACID is a deal-breaker).

The nice thing about data science is that you interact with the workspace yourself, so you know exactly when you want to save a snapshot.


Hmmm, what do you mean by data science?

Like running analysis in a Jupyter notebook?

Indeed I am not sure this case is a good fit...


Yes, that's what I meant. And possibly for ML preprocessing.

Just out of curiosity, who do you imagine your users will be?


The problem with data science is that usually you have relatively big datasets, you care more about throughput than latency, and you work in a secure environment where you definitely have access to the database credentials.

Streaming the result of a big SELECT over the network is not ideal; moreover, I believe data scientists prefer to work with common technologies. I mean that there are already adapters for SQLite, PG, or MySQL, while for RediSQL it won't be as straightforward.

I am thinking that developers on the JAM stack would be interested in this sort of API. Or people who want a database without having to think too much about it.


Well, many choose to do their data science on AWS and similar, so I'm not sure there's a big difference. I see your point about throughput and network load, but part of DS is data analysis, where the work is mostly exploratory: finding connections in existing data and working heavily with aggregated data and previews, rather than just using it as a pipeline for other systems.

I think for most JAM sites the network is a bigger hindrance than the query time. So, I hope you know what you're doing.

Anyway, cool project. I'll make sure to check back in a while and see where it went. I'm working on an "adapter" (so-to-speak) that queries SQL, so maybe I'll add yours too when the time is right.


The advantage of using an API like this on the JAM stack would be that very sophisticated applications could be written completely client-side. Which is quite interesting, IMHO.

What is your project?


My project is an interpreted query language that compiles to SQL (with support for several backends).

It is more capable than ORMs, and provides a layer of abstraction that SQL direly needs but lacks, as well as a shorter syntax that is in line with other popular languages.
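To give a rough idea of the shape of such a layer, here is a toy sketch of a query object compiling down to SQL. This is NOT Preql's actual syntax or implementation, just an illustration of the compile-to-SQL idea:

```python
# Toy illustration only: a chainable query object that emits SQL.
# Names and behavior are hypothetical, not Preql's real API.

class Table:
    def __init__(self, name):
        self.name = name
        self.filters = []

    def filter(self, cond):
        self.filters.append(cond)
        return self  # chainable, like many query builders

    def to_sql(self):
        sql = f"SELECT * FROM {self.name}"
        if self.filters:
            sql += " WHERE " + " AND ".join(self.filters)
        return sql

q = Table("users").filter("age > 21").filter("country = 'IT'")
print(q.to_sql())
# SELECT * FROM users WHERE age > 21 AND country = 'IT'
```

A real language in this space would of course parse its own syntax and target several SQL dialects rather than concatenating strings.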

Here is a very early version of it: https://github.com/erezsh/preql

I've kept working on it, but privately, and I'm trying to make it into a product.

I will probably release it as open-source when it's ready. I still need to figure out the right license, financial model, etc.



