Hacker News | virtue3's comments

Thanks for the Immich suggestion.

How have you liked Obsidian? I was going to use it but realized it paywalled sharing notes between devices. Looking into this again - are you self-hosting Obsidian via the LiveSync plugin?

Thanks!


I'm self-hosting Obsidian sync. I mostly followed the tutorial here: https://www.reddit.com/r/selfhosted/comments/1eo7knj/guide_o...

Except I wanted more security and multiple users. Instead of using the default admin user, I created one user for each person (done in the "_users" database), then created one database for each person and assigned each user as a "Member" of their respective database, not an admin. Now each person has their own credentials and can access only their own database.
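For reference, the per-user setup described above can be done against CouchDB's HTTP API. This is a sketch, assuming CouchDB at localhost:5984 with admin credentials; "alice" and "obsidian_alice" are placeholder names:

```shell
# 1. Create a user document in the _users database
curl -X PUT http://admin:adminpass@localhost:5984/_users/org.couchdb.user:alice \
  -H "Content-Type: application/json" \
  -d '{"name": "alice", "password": "s3cret", "roles": [], "type": "user"}'

# 2. Create that person's database
curl -X PUT http://admin:adminpass@localhost:5984/obsidian_alice

# 3. Make alice a "Member" (not admin) of her database only
curl -X PUT http://admin:adminpass@localhost:5984/obsidian_alice/_security \
  -H "Content-Type: application/json" \
  -d '{"admins": {"names": [], "roles": []}, "members": {"names": ["alice"], "roles": []}}'
```

Then point the LiveSync plugin at obsidian_alice using alice's credentials.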


Not OP but I'm a very happy longtime Obsidian user. Maybe a little unfair to characterize as "paywalled" given their sync service is optional (and works very well), and a directory of markdown files is about as portable and flexible as it gets for self-hosting.

You can simply put your vault in a cloud folder if you don’t want to pay for Livesync

The USA was providing Ukrainian operatives with the locations of Russian officers, derived from soldiers using their cellphones.

https://oe.tradoc.army.mil/product/smart-phones-playing-prom...


Advertising isn’t “individual liberty” — it’s paid psychological manipulation.

Banning gambling ads isn’t banning gambling. It’s just stopping corporations from pushing addictive behavior on people who didn’t consent to see it.

We banned cigarette ads for the same reason — harm and addiction.

Limiting corporate ad power protects individual liberty. I can choose to gamble if I want, but I shouldn’t have to fight off brainwashing every time I watch a game.


It's both liberty and manipulation, simultaneously.

I don't think we can build anything good in the long run if we imagine there's a clear and obvious line between them.


I worked at crunchyroll.

Keeping the "hard subs" content means keeping a lot of videos, as the subtitles were encoded directly into the video stream.

This makes CDNs and other systems more difficult to utilize because we have a ton of video streams with just caption changes as opposed to just the Japanese audio source + caption files.

It's one of those things that doesn't seem that problematic till you include all the video qualities needed to support streaming bandwidth. So you end up with #hardSubLanguages * #videoQualities encodes.
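To make that multiplication concrete, here's a toy count (the language and quality lists are made up, not Crunchyroll's actual catalog):

```python
# Each hardsub language needs a full re-encode at every quality level,
# whereas softsubs need one clean video set plus tiny caption files.
hardsub_languages = ["en", "es", "pt", "fr", "de", "ar", "it", "ru"]  # made-up list
video_qualities = ["240p", "360p", "480p", "720p", "1080p"]

hardsub_streams = len(hardsub_languages) * len(video_qualities)
softsub_streams = len(video_qualities)

print(hardsub_streams)  # 40 full video streams per episode
print(softsub_streams)  # 5 streams, plus 8 small caption files
```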


Obviously you probably thought about it, but what about rendering the subtitles on top of the video stream? Was there a reason it was not possible (e.g. DRM)?

This kind of softsubbing is what Crunchyroll primarily does, but it has hardsubbed encodes for devices that cannot do softsubbed rendering of the ASS subtitles that Crunchyroll uses. In the article I go over some ways they could do away with these hardsubbed variants without any notable loss in primary experience quality.

They could borrow a trick from Netflix mentioned elsewhere in this thread: https://netflixsubs.app/docs/netflix/features/imgsub

I’m pretty sure it’s not too hard to implement an ASS → PNG renderer (especially considering vibe coding is now a thing). Then, just need to split out subs that can be actual text somehow from the ones that have to be overlays.

Apart from that... surely they could at least keep ASS subs for the players that support it, and serve “fallback” subs for low-end devices?


ASS can have frame-by-frame animation IIRC, so a stream of PNGs could end up being quite high bitrate with high complexity files

It can, but that doesn't mean they use that functionality.

It is harder than you think and will break on many more devices than you think.

So you make the business decision to stop supporting weird devices that can't do the job right? Why on earth does a cartoon streaming site need provably-correct subtitle support for devices that clearly suck?

Because the owners of those devices are paying them.

I’ve mentioned it elsewhere, but... why not keep proper ASS subs and fallback subs for those devices?

If you hardsub the video, then you need to have a full copy of the video for every language. That's the opposite of what people want. They want a single textless video source that can then accommodate any internationalization.

The article claims that you can slice up the video and only use language-specific hardsubs for parts that need it. I'd be interested if there are technical reasons that can't be done.

To be more specific, basically all online streaming today is based around the concept of segmented video (where the video is already split into regular X-second chunks). If you only hardsubbed the typesetting while keeping the dialogue softsubbed (which could then be offered in a simpler subtitle format where necessary), you would only need to have multiple copies of the segments that actually feature typesetting. Then you would just construct multiple playlists that use the correct segment variants, and you could make this work basically everywhere.

You can also use the same kind of segment-based playlist approach on Blu-ray if you wanted to, though theoretically you should be able to use the Blu-ray Picture-in-Picture feature to store the typesetting in a separate partially transparent video stream entirely that is then overlaid on top of the clean video during playback.
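A toy sketch of that segment-variant playlist construction, with invented segment names and per-language hardsubbed typesetting only where needed:

```python
# Toy model: a 10-segment episode where only segments 3 and 7 contain
# on-screen typesetting. Those get per-language hardsubbed variants;
# every other segment is a single shared "clean" file.
NUM_SEGMENTS = 10
TYPESET_SEGMENTS = {3, 7}

def build_playlist(language: str) -> list[str]:
    """Segment list for one language's playlist."""
    return [
        f"seg{i:03d}_{language}.ts" if i in TYPESET_SEGMENTS else f"seg{i:03d}_clean.ts"
        for i in range(NUM_SEGMENTS)
    ]

languages = ["en", "es", "de"]
stored = {seg for lang in languages for seg in build_playlist(lang)}

print(len(stored))                    # 8 shared + 2 * 3 variants = 14 segments stored
print(len(languages) * NUM_SEGMENTS)  # 30 if each language got a full hardsubbed copy
```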


Technically it's possible.

We did do inlaid server-side ads that way for a while.

It just takes an excessive amount of work.

The real solution is just full support of ASS/TTML/VTT subtitles on all platforms. Usually smart devices only partially support them.

For instance, casting to a Chromecast falls back to SRT.


It's incredibly fragile at the CDN level if deployed at scale for a start.

You'd see playback issues go up by 1000%.

In the nicest possible way, it is pretty clear that this article was written by somebody who has only ever looked at video distribution as a hobbyist and not deploying it at scale to paying customers who quite reasonably get very upset at things not working reliably.


What would be the problems? When I’ve looked into streaming video before (for normal, non ripping reasons), I’ve noticed that most are already playlists of segments. You’d just need to store the segments that are different between versions, which should be better than keeping full separate versions which is what they apparently do currently.

This is just an excuse. There needs to be a hard English sub, and the other languages can share a single video with different text files. Deleting 80% of the good things just to keep the other 20% happy shouldn't be an excuse.

English is by far the most popular, so just keep that one. Most of the good hard subs are made for English, and that is what people want.


That is exactly what I thought and I am not even a native English speaker. My English is infinitely better than my Japanese though, so if I cared about anime I’d much rather watch a good English version rather than a bad German one

Guess how well supported soft subs are on smartTVs etc? :)

It's really tough when you need to scale these things across 20 platforms.


I think about this a lot. MVC was the gold standard of laying out frontends at some point.

I don't think it really bought us much.

I think the selling point on react was the composability of models and views and controllers.

If you can make the code and structure simple and cohesive enough that they can all flow together it works well.

In general if you have something like

DataFetchComponent (Ideally View-Model data from graphql etc - not a pure model) -> ViewControllerComponent (ViewController) -> can trigger dataFetch

you end up with a really really elegant solution.

Of course deadlines and getting shit done ASAP tends to mess this up.


I still have nightmares about nokogiri gem installs from back in the day :/

Shudder. I'm guessing it was the always breaking libxml2 compilation step right?

Having worked a long time with client teams as a lead - this is always the biggest pain in the ass.

At one of my last startups I started shifting all our business logic into our graphql server and treated it like it was part of the client teams. (We had ios/android/web as independent full apps with a very small team of devs.)

Business logic is the real killer. Have one person suck it up and do it in typescript (sorry y'all) on the GQL/apollo server and all the clients can ingest it easy.

Send down viewmodels to the clients not data models. etc etc.

This helped DRAMATICALLY with complexity in the clients.
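A toy illustration of sending viewmodels instead of data models (the field names and formatting rules are invented):

```python
from datetime import datetime

# Data model: what the database/API actually stores.
user_record = {"id": 42, "name": "Ada", "created_at": "2020-03-01T12:00:00+00:00", "plan": "pro"}

def to_viewmodel(record: dict) -> dict:
    """Make the display/business decisions once, server-side (e.g. in a GQL
    resolver), so iOS/Android/web don't each reimplement them."""
    created = datetime.fromisoformat(record["created_at"])
    return {
        "displayName": record["name"],
        "memberSince": created.strftime("%B %Y"),  # pre-formatted for display
        "badge": "Pro" if record["plan"] == "pro" else None,
    }

vm = to_viewmodel(user_record)
print(vm["memberSince"])  # March 2020
```

The clients just render the strings they receive; none of them need to know what "pro" means or how to format a date.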


So your takeaway is that business logic should be done on the server. Hasn't it always been like this?


Has been the true core vision of the web if not mobile. If I remember correctly, that is how the original Basecamp was implemented, or Craigslist (still is?)... or this very website


Also I just realized the irony of them using GraphQL. They've really come full circle


OP is likely talking about local business logic, i.e. a password field being min 3 chars long. You validate that in the FE before sending it up, to give instant feedback to the user.


Your API should be fast enough and hosted on the edge so that the server side validation is instant feedback


There are external factors apart from your own API that can impact latency, for example a user could be in an area of poor internet connection or have a slow connection. Users do not live in our perfect development environment bubble where everything just works, it’s important not to assume that.


If it takes 1 second for a small percent of users to get form validation back it won't impact the business


That's how we got to "download 50 GB before playing a game on a console is fine"; it feels like we just stopped caring. Sending the form to the BE just to do the same basic validation adds so much latency to the UI that it feels unusable for many/most users.

Related: A few Sundays ago I wanted to play Anno again. Sadly it was not installed on the laptop I used, so I started downloading it (because you won't get it on DVD or as an ISO file today). Now it's a few Sundays later and I still haven't played, because the download took 7 hours.


That's such a ridiculous logical fallacy. You already have to send the form input to submit the form in the first place, and you already need server-side validation.

I just checked one of my app's register page (which makes > $2M ARR). If you submit a short password it returns an error from the backend that says "Password should be at least 6 characters.". (It uses Supabase). But yeah, that is so unusable it is basically the same as taking 7 hours to download a game onto your Playstation. Great logic!


What if you want your app to work offline?


And not just offline: as we learned last week, if us-east-1 is down you can have spotty connectivity rather than a hard outage, and your device needs to not cook your users; literally, in the case of Sleep8.


We've really hit a strange level of dystopia when your bed doesn't work because a server is down


It was a near real-time messaging application. So not really applicable (other than seeing messages you already received - which could be cached from previous sessions).


I guess I'm not clear on what you mean about putting business logic in the client. It can't only be on the client side. If you do so, then obviously you have to replicate it on the server to check that the client was sending the right results, no? Not to mention avoiding thread races and double inserts and whatever else may have gone stale on the server before you allow a client to validate something?

Even if your code isn't public-facing, the server still needs to check everything. As a solo dev it seems insane to me to ever put business logic in the client, unless the client and server literally share the same typescript codebase for crosschecking ops, and even then the server needs the same code plus additional safeguards.

It baffles me that anyone would write a platform from the ground up with primary business logic on the client side, if the server isn't written in the same language. Maybe some simple initial validations and checks to avoid bombarding the server, but the server has to be the central source of truth.


That’s the exact opposite of what the GP is suggesting. Read this again:

> Business logic is the real killer. Have one person suck it up and do it in typescript (sorry y'all) on the GQL/apollo server and all the clients can ingest it easy.

Move the logic to the GQL retriever so that clients don’t have to implement business logic.


Yeah, I understood what they said. I'm wondering why the previous owners of the code decided to put business logic in the client.


we had a lot of very talented iOS devs that started running away with feature development when the server team couldn't keep up.

This really hurt the android + web client teams.

Eventually our backend started changing (mono-rail -> microservices) and it turned into an absolute cluster of trying to massage/cram new data models into the existing ones the iOS team created.

Late-stage startup problems, and then the problems that come after finding product-market fit.


OP is likely talking about local business logic, i.e. a password field being min 3 chars long. You validate that in the FE before sending it up, to give instant feedback to the user (yes, you also have it on the server).


Apple doesn't require you to pay a significant portion of your paycheck to them either.


Unless you're a developer


I’ve used Ethernet over coax in my current apartment.

It’s worked well!

You do need to be a bit careful as coax signal can be shared with neighbors and others sometimes.


You can isolate your ethernet over coax from your neighbor with a MoCA POE "point of entry" filter which blocks the frequencies used by MoCA.

You can buy them online for around $10, and they install without tools.

Besides neighbors, you may also need a POE filter if you have certain types of cable modem.


Cable companies require POE filters. If they find "noise" leaking from your house, they may put a big filter of their own outside, which can degrade the modem's speed.


I used digital ocean for hosting a wordpress blog.

It got attacked pretty regularly.

I would never host an open server from my own home network for sure.

This is the main value add I see in cloud deployments: OS patching, security, trivial stuff I don't want to have to deal with on the regular, but which is super important.


Wordpress is just low-hanging fruit for attackers. Ideally the default behavior should be to expose /wp-admin on a completely separate network, behind a VPN, but no one does that, so you have to run fail2ban or similar to stop the flood of /wp-admin/admin.php requests in your logs, and deal with Wordpress CVEs and updates.

More ideal: don't run Wordpress. A static site doesn't execute code on your server and can't be used as an attack vector. They are also perfectly cacheable via your CDN of choice (Cloudflare, whatever).
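A hedged sketch of the fail2ban setup mentioned above (the jail name, log path, and thresholds are illustrative; adjust to your distro and web server, and note that the usual brute-force targets are wp-login.php and xmlrpc.php):

```ini
# /etc/fail2ban/filter.d/wordpress-auth.conf
[Definition]
failregex = ^<HOST> .* "(GET|POST) /wp-login\.php
            ^<HOST> .* "POST /xmlrpc\.php

# /etc/fail2ban/jail.local
[wordpress-auth]
enabled  = true
port     = http,https
filter   = wordpress-auth
logpath  = /var/log/nginx/access.log
maxretry = 5
bantime  = 3600
```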


A static site does run on a web server.


Yes, but the web server is just reading files from disk and not invoking an application server. So if you keep your web server up to date, you are at a much lesser risk than if you would also have to keep your application + programming environment secure.


That really depends on the web server, and the web app you'd otherwise be writing. If it's a shitty static web server, then a JVM- or BEAM-based web app might actually be safer.


Uh, yeah, I was thinking of Nginx or Apache, and I would expect them to be more secure than your average self-written application.


A static site is served by a webserver, but the software to generate it runs elsewhere.


Yes. And a web server has an attack surface, no?


I think it’s reasonable to understand that nginx/caddy serving static files (or better yet a public s3 bucket doing so) is way, way less of a risk than a dynamic application.


Of course, that’s true for those web servers. If kept up to date. If not, the attack surface is actually huge because exploits are well known.


What are these huge attack surfaces that you are talking about? Any links?


The thing with WordPress is that it increases the attack surface through shitty plugins. If I have a WP site, I change wp-config.php with this line:

    define( 'DISALLOW_FILE_EDIT', true );
This one config will save you lot of headaches. It will disable any theme/plugin changes from the admin dashboard and ensures that no one can write to the codebase directly unless you have access to the actual server.

