Hacker News | tmlee's comments

Heroku works best for a company of one that wants to offload most of the devops.

Otherwise, https://dokku.com has been a really stable way to run your own Heroku on cloud services like DigitalOcean or AWS.

It gives the same experience as Heroku but with more flexibility.


Dokku is GREAT. I've been using it for about 4 years now and it's rock solid. I love it.


That would be the Kindle Unlimited model; as far as I know, they pay authors by pages read.

But I'm unsure how much of an impact it can make on an author's revenue, or how sustainable it is long term.

Most of the books I see on Unlimited are either older books (there are some good gems), books that are already freely available on the internet, or books that are scrappily put together.


“...by pages read.” That actually sounds great. I don’t want to read a book, I want access to knowledge. I want a great search engine that will surface the pages containing the info I need so I can read them and move on and that is usually a handful of pages from a book.


This could create a terrible incentive for authors to spread information thin across a ton of pages to artificially increase their earnings, so we'd end up with books that are the equivalent of online recipes today, where there are three pages of SEO garbage before the actual recipe.


That both has and hasn't happened with Spotify.

On the one hand, there are no 10-hour songs with the good parts scattered randomly throughout that listeners have to hunt for (which is obvious, and I think such a book wouldn't be successful either).

On the other hand, there is a ton of generated crap on Spotify that sometimes sneaks into your playlists (some of it even containing text-to-speech advertisements).

I think all such platforms would need some kind of spam filter functionality and a good rating / statistics system so that quality content could be surfaced.


Not raising money, or deferring raising it, can help establish the foundations of a viable long-term business.

To start with, it does not feel right to have to raise money or give away equity in order to build the first MVP to look for product market fit.

The founding team should have all the skillsets required to launch the first version of the product (tech + marketing at a minimum): a tech founder who can code the product and has the know-how to keep it running as cheaply as possible, and another founder who knows how to bring the product to market.

Iterate from there, and with some luck you'll eventually, slowly, have a sustainable business. Along the journey, not having VC money trains the founders to make difficult choices and learn things the hard way, i.e. the founding team will do as much as they can before hiring someone else, optimize the cost of operations rather than throwing money at problems, etc.

Founders who are in it for the long game and want to build a sustainable business will find not raising money the most attractive option.


While many people here are sharing their bad experiences with MongoDB, I'm just curious whether you all find the experience with DynamoDB similar?

Since they are both in the NoSQL family.


DynamoDB is a lot more explicit about its tradeoffs. Much of the backlash against Mongo was because it basically claimed to be well-suited to any use case, when its sweet spot was actually far narrower. To be successful with Mongo, you need to design the entire app around its limitations, but those limitations were initially downplayed and obscured.

People were convinced that Mongo was a good choice as a default, general purpose db, when it clearly wasn’t for about a million reasons.

I don’t think DynamoDB is marketed or viewed in the same way. The docs are pretty clear about needing to design your data model to specifically work well with Dynamo. People using it seem to generally be aware of its limitations, and deliberately choose to accept them for the sake of performance and scalability. At least that’s my perception.
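To make the "design your data model to work with Dynamo" point concrete, here is a rough boto3 sketch (the table and attribute names are made up, not from any real schema): you lay out the partition/sort keys around the queries you know you will run, and anything outside that key schema falls back to a Scan or a secondary index.

    import boto3
    from boto3.dynamodb.conditions import Key

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("orders")  # hypothetical: PK=customer_id, SK=order_date

    # Efficient: this access pattern was designed into the key schema up front.
    recent = table.query(
        KeyConditionExpression=Key("customer_id").eq("c#123")
        & Key("order_date").begins_with("2020-")
    )

    # Anything the keys don't cover (e.g. "all orders over $100 across all
    # customers") means a full Scan or a pre-built global secondary index.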


> I don’t think DynamoDB is marketed or viewed in the same way. The docs are pretty clear about needing to design your data model to specifically work well with Dynamo.

More importantly, AWS is very explicit in making newcomers aware of what DynamoDB's use cases are, that they are very specific niche use cases, and that if users require schemas and joins they should just stick with either relational databases or graph databases.

Those who overpromise will always disappoint.


What are those good use-cases for mongo?


Basically when speed and horizontal scalability are very important, and consistency/durability are less important. It’s also pretty good for unstructured or irregularly structured data that’s hard to write a schema for.

Web scraping or data ingestion from APIs might be a reasonable use case. Or maybe consumer apps/games where occasional data loss or inconsistency isn't a big deal.

It can also be used effectively as a kind of durable cache (with a nice query language) in place of redis/memcached if you give it plenty of ram. While its guarantees or lack thereof aren’t great for a database, they’re pretty good for a cache.
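As a rough illustration of the durable-cache idea (collection names and TTL here are made up, just a sketch with pymongo): a TTL index lets Mongo expire entries on its own, memcached-style, while the data still survives a restart.

    from datetime import datetime, timezone
    from pymongo import MongoClient

    cache = MongoClient("mongodb://localhost:27017").app.cache

    # Documents are removed roughly 60 seconds after their created_at timestamp.
    cache.create_index("created_at", expireAfterSeconds=60)

    cache.replace_one(
        {"_id": "user:42:profile"},
        {"_id": "user:42:profile",
         "value": {"name": "Ada"},
         "created_at": datetime.now(timezone.utc)},
        upsert=True,
    )

    hit = cache.find_one({"_id": "user:42:profile"})  # None once it has expired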


I worked for a big news website; we used MongoDB for pixel-tracking data.

I went on to work at MongoDB.


DynamoDB is a completely different beast, and I would use no other data store unless I had to. It can pretty much handle any transactional workload I need.

It's cheap, it's fast, and it scales super high. I don't need much more.


DynamoDB is consistent and scalable, not fast and cheap.

DDB is great for storing data that is required to be scalable and never needs to be joined. Add in DAX, developer time necessary to orchestrate transactions, calculate the scaling costs and...that's how AWS gets you.

Plus, local development requires half-complete emulators or a hosted database you're charged for.

No, maybe people should think twice about DynamoDB.


This came up in a thread a few days back, but people considering it should note that Dynamo’s transactions are limited to 25 rows. That may be enough for most operations, but I definitely wouldn’t say it can handle “any transactional workload”. I ran into this limit pretty quickly when trying it out.
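For anyone curious what that limit looks like in practice, here's a hedged boto3 sketch (table and key names are invented): a single transact_write_items call with more than 25 items is rejected outright, and chunking it means each chunk is atomic but the batch as a whole is not.

    import boto3
    from botocore.exceptions import ClientError

    client = boto3.client("dynamodb")

    items = [
        {"Put": {"TableName": "orders",  # hypothetical table/keys
                 "Item": {"pk": {"S": f"order#{i}"}}}}
        for i in range(30)
    ]

    try:
        client.transact_write_items(TransactItems=items)  # 30 items -> rejected
    except ClientError:
        # Fall back to chunks of 25: each chunk commits atomically,
        # but the overall batch no longer does.
        for start in range(0, len(items), 25):
            client.transact_write_items(TransactItems=items[start:start + 25])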


I'm curious as well, since my company is moving from Postgres to DynamoDB.


Could you share your use case?


We are trying out https://www.hcaptcha.com/ in our application.

It's not FOSS, but it seems to be a viable alternative to give a go. So far it does the job, though the images load a little slower than reCAPTCHA's.
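For anyone wondering about the integration effort: the server-side check is just one POST. A rough sketch in Python (endpoint and field names are from memory, so double-check the hCaptcha docs):

    import requests

    def verify_hcaptcha(token: str, secret: str) -> bool:
        # `token` is the h-captcha-response value the widget adds to your form.
        resp = requests.post(
            "https://hcaptcha.com/siteverify",
            data={"secret": secret, "response": token},
            timeout=5,
        )
        return resp.json().get("success", False)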


I'm not a big fan of hCAPTCHA in its current form. The challenges seem so much harder than reCAPTCHA's and I keep failing them. The images are just extremely low quality. Maybe I'm a bot.


I get easy challenges. Perhaps even a bit easier than the reCAPTCHA ones, and less of them for sure.


It's way worse than Google's for me. I am using Firefox and I don't even try anymore whenever I get exposed to it on any Cloudflare website.


Founder of HCaptcha here. Have you tried our privacy pass or accessibility features?


The answer to "your basic service doesn't work well" should not be "well have you tried our additional services?"


I am scrolling through HN on my phone and saw privacy pass isn't supported for android. Is porting privacy pass to the android version of Firefox on your roadmap at all? If not, would you consider adding it to your roadmap? Thanks.


No, I haven't but your service is making it literally impossible to browse with Firefox and Tor (understandably because you're here to make money from your customers labeling people like me as a threat in their dashboards not to enable people to actually browse easily). Even reCAPTCHA doesn't do that.


I suspect you will see a considerable bounce rate once you switch. Pretty much the day Cloudflare flipped the switch, over half the web became 'captcha every 5 seconds' garbage.

Some sites that are the only source of what I'm looking for will be fine, but most I just bounce from now.


Every time I asked for my copied ID to be watermarked when checking in at hotels, they gave me a strange look, as if I did not trust their information security.


Would you elaborate how to do this?


Yeah, I would like some more details. Not sure I understood what exactly it means to "watermark" the ID. Is the goal to change it subtly to find out if it was leaked? Or is the goal to redact parts of it?


Smart idea, thanks.


No hardware can protect itself from absolute physical compromise; perhaps a self-destruct fuse that burns when somebody tries to open it.


Banks store private keys for their ATMs in hardware security modules (HSM) and there are lots of crypto exchanges that started doing that. One of the features is private keys self destruct when tampering is detected. If you have a backup you’ll be able to recover the private key. While I agree that Trezor wasn’t designed with this in mind, I think it’s a good idea to include this feature. Not sure about the size requirements for that though, it might make the device significantly bigger.


A true HSM with active self-destruct needs to be constantly powered. On the other hand, for many if not most applications, a typical secure smart card is completely sufficient (and in fact a typical POS card terminal stores most of its long-term secrets on a SIM-like smart card).


Somewhere in my junk parts bin is such a PCI card I bought out of a junk bin in Akihabara: it has a Mitsubishi logo clearly printed on it, archaic construction overall, and was apparently marketed by NEC somehow; its product brochure page disappeared after I mentioned it on Twitter.

It had a pair of blown AA batteries for self-destruction. I never bothered to get it working, but IIRC it was supposed to detect removal from the PCI slot proper and self-erase. So it's not rare or difficult.


At this year's RWC someone fuzzed the software on the HSM. Keys came out.


Thanks for sharing this, I had to google RWC. For others that don’t know the acronym: https://rwc.iacr.org


Size requirements shouldn't be intensive, assuming it's a single-shot system. All you need is 128-256 bits worth of secret key data that is physically-destructible (e.g. with a high voltage spike). You then encrypt/decrypt the rest of the secrets stored in the device with this destructible key.
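As a toy illustration of that scheme (this is not how Trezor actually works, just a sketch of the idea using AES-GCM from Python's cryptography library): a single 256-bit master key wraps everything else, so physically destroying those 32 bytes makes the remaining ciphertext worthless.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    master_key = AESGCM.generate_key(bit_length=256)  # the bits you would burn

    def wrap(secret: bytes) -> bytes:
        nonce = os.urandom(12)
        return nonce + AESGCM(master_key).encrypt(nonce, secret, None)

    def unwrap(blob: bytes) -> bytes:
        return AESGCM(master_key).decrypt(blob[:12], blob[12:], None)

    wallet_seed = wrap(b"seed words go here")
    # Once master_key is destroyed, wallet_seed can never be unwrapped again.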


Bigger may be better.

After all, these devices are hard to use in part because of the tiny screens.

Since most of the time you don't carry them in your pocket, it does not appear to be a problem if they are bigger.


Doesn't the iPhone claim to be able to protect from physical access?


Let's agree that protection against physical access is extremely difficult.


That’s what the US Department of Justice claims, at least.


With the right systems in place, you can be protected from physical compromise. For example, if my credit card is stolen, I call visa and I'm fine.


And who do you think foots the bill? You might not pay it in one lump sum, but I’m pretty sure you still pay it.


People who lost 100% of their coins are probably wishing they had the option to buy some kind of insurance. But no, be your own bank. (Wait, don't real banks also have insurance?)


Crypto is digital cash, not digital credit. If someone steals your physical wallet, you generally aren’t getting that cash back. Can we please dispense with this kind of hyperbolic nonsense?


You wouldn't carry large sums of cash on your person, so why are people carrying around large piles of cryptocurrency?


People actually carry large sums in many places where credit cards aren't prevalent. Like Japan.


The analogy isn't credit. It's a bank. I doubt people carry much of a percent of their total Yen wealth in their pockets.


The merchants who accepted the fraudulent credit card transactions don't get their money from Visa. So the merchants pay.


... and factor this into their pricing, completing the circle.



If you are into smart contract development on Ethereum: a new language called Vyper has launched as an alternative to Solidity, and it is said to solve some of Solidity's shortcomings.

https://vyper.readthedocs.io/en/latest/

But it's so new that there aren't many resources or much of a community around it yet.


When traffic in a city is inherently bad, it sets the stage for difficulty in solving the problem with buses (increased frequency, priority express buses).

In a high-traffic situation, ETAs get adjusted from time to time, buses struggle to get in and out of stops, and delays loom when an accident occurs on the road.

Poor road planning and building, alongside the massive number of cars on the road, is one of the major issues.

Maybe workplace hours should be adjusted across the board, or tolls varied based on high-demand hours, to avoid everyone competing for road space.

But I generally agree with the article that buses get you really far. For a long time Singapore relied on bus routes to move people around the city-state, and progressively built subway lines as demand and city planning caught up.


Interestingly, Toronto has pretty terrible congestion. I haven't lived there in a while, but IIRC "rush hour" is more like 3 hours (morning and evening). And (apparently) the bus network still does a decent job. Not sure how true that is, but I hope they at least added dedicated bus lanes wherever possible.

