jdsleppy's comments

Do what the sibling comment says or set DOCKER_HOST environment variable. Watch out, your local environment will be used in compose file interpolation!
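A concrete sketch (host, user, and tag names here are placeholders):

```shell
# Run compose against a remote Docker daemon over SSH
# (deploy@example.com is a hypothetical host):
DOCKER_HOST=ssh://deploy@example.com docker compose up -d

# The gotcha: ${VARS} in the compose file are interpolated from your
# *local* shell, not on the remote host. If docker-compose.yml contains
# `image: myapp:${TAG}`, this deploys whatever TAG is set locally:
TAG=v1.2.3 DOCKER_HOST=ssh://deploy@example.com docker compose up -d
```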


I really like `DOCKER_HOST=ssh://... docker compose up -d`, what do you miss about Deployments?


HTMX has a very nice drag and drop extension I just found, though. And old-school forms can include image files. The little image preview can be a tiny "island of JS" if you have to have it.


> The little image preview can be a tiny "island of JS" if you have to have it.

I would consider that the bare acceptable minimum, along with an upload progress indicator.

But it can get a lot more complicated. What if you need to upload multiple images? What if you need to sort the images, add tags, etc? See for example the image uploading experience of sites like Unsplash or Flickr.

HTMX just isn't the right tool to solve this unless you're ready to accept a very rudimentary UX.


None of what you described requires anything more than an isolated island with some extra JS. No need for complex client-side state, no need for a SPA framework, no bundling required, not even TypeScript. If you relied on DOM and a couple of hidden fields, 90% of this would be a few dozen lines of code plus some JSDoc for type safety.
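A rough sketch of that island (element ids and markup are hypothetical; assumes an `<input type="file" multiple>` and a `<ul>` for previews):

```javascript
// Pure helper: human-readable file sizes for the preview labels.
function formatBytes(n) {
  const units = ["B", "KB", "MB"];
  let i = 0;
  while (n >= 1024 && i < units.length - 1) { n /= 1024; i++; }
  return `${n.toFixed(i ? 1 : 0)} ${units[i]}`;
}

// Browser-only wiring; guarded so the sketch also loads outside a DOM.
if (typeof document !== "undefined") {
  const input = document.querySelector("#photos");       // <input type=file multiple>
  const list = document.querySelector("#preview-list");  // <ul>
  input.addEventListener("change", () => {
    list.replaceChildren();
    for (const file of input.files) {
      const li = document.createElement("li");
      const img = document.createElement("img");
      img.src = URL.createObjectURL(file);  // local preview, no upload yet
      img.width = 96;
      li.append(img, ` ${file.name} (${formatBytes(file.size)})`);
      list.append(li);
    }
  });
}
```

Sorting and tags can live in the DOM the same way (the order of the `<li>`s, one hidden input per item) and get posted with the plain form.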


> No need for complex client-side state

Please implement a multi-image upload widget and then come back to argue about this.


> along an upload progress indicator

I could be misremembering, but didn't browsers used to have this built in? Like there used to be a status bar that showed things like network activity (before we moved to a world where there is always network activity from all of the spying), upload progress, etc.

I don't remember if it was in Firefox, but SeaMonkey even has a "pull the plug" button to quickly go offline/online in the status bar.

Bizarre that "progress" is removing basic functionality and then paying legions of developers to re-add it in inconsistent ways everywhere.


Where do you suggest we sanitize values? Only in the client, when rendering them?


Depends on what you mean by sanitising.

If you mean filtering out undesirable parts of a document (e.g. disallowing the <script> element or the onclick attribute), that should normally be done on the server, before storage.

If instead you mean serialising, writing a value into a serialised document: then this should be done at the point you’re creating the serialised document. (That is, where you’re emitting the HTML.)

But the gold standard is not to generate serialised HTML manually, but to generate a DOM tree and serialise that (though sadly it’s still a tad fraught because HTML syntax is such a mess; it works better in XML syntax).

This final point may be easier to describe by comparison to JSON: do you emit a JSON response by writing `{`, then writing `"some_key":`, then writing `[`, then writing `"\"hello\""` after carefully escaping the quotation marks, and so on? You can, but in practice it’s very rarely done. Rather, you create a JSON document, and then serialise it, e.g. with JSON.stringify inside a browser. In like manner, if you construct a proper DOM tree, you don’t need to worry about things like escaping.
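The JSON half of that comparison, as a sketch:

```javascript
const name = 'say "hello"';

// Error-prone: hand-writing the serialised form, escaping as you go.
const manual = '{"greeting":"' + name.replace(/"/g, '\\"') + '"}';

// Robust: build the value, then serialise it in one step.
const built = JSON.stringify({ greeting: name });

// Both happen to produce the same bytes here, but only the second
// stays correct once inputs grow backslashes, newlines, nesting...

// The DOM analogue: assigning textContent stores text, not markup,
// so nothing needs escaping by hand (guarded for non-browser runs).
if (typeof document !== "undefined") {
  const p = document.createElement("p");
  p.textContent = "<script>alert(1)<\/script>";
  // p.outerHTML serialises it as &lt;script&gt;... -- inert text.
}
```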


What's wrong with filtering before saving is that if you forget about one rule, you have to go back and re-filter already-saved data in the db (with some one-off script).

I think "normally" we should instead filter for XSS injections when we generate the DOM tree, or just before (such as passing backend data to the frontend, if that makes more sense).


Don't forget that different clients or view formats (apps, export to CSV, etc) all have their own sanitization requirements.

Sanitize at your boundaries. Data going to SQL? Apply SQL specific sanitization. Data going to Mongo? Same. HTML, JSON, markdown, CSV? Apply the view specific sanitizing on the way.

The key difference is that, if you deploy a view-agnostic JSON API, the client now needs to apply the sanitization. That's a requirement of an agnostic API.
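Sketched below (the HTML and CSV encoders are illustrative; for SQL the right "encoding" is a placeholder, not string munging, and `db.query` is a hypothetical driver API):

```javascript
// The same raw value, encoded differently at each output boundary.
const raw = 'Robert"; DROP TABLE users; <b>hi</b>';

// HTML boundary: escape the markup-significant characters.
const html = raw.replace(/&/g, "&amp;").replace(/</g, "&lt;")
                .replace(/>/g, "&gt;").replace(/"/g, "&quot;");

// JSON boundary: the serialiser escapes for you.
const json = JSON.stringify(raw);

// CSV boundary: quote the field, double any inner quotes.
const csv = `"${raw.replace(/"/g, '""')}"`;

// SQL boundary: don't encode by hand at all; use placeholders, e.g.
//   db.query("INSERT INTO users (name) VALUES (?)", [raw]);
```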


Please don’t use the word sanitising for what you seem to be describing: it’s a term more commonly used to mean filtering out undesirable parts. Encoding for a particular serialised format is a completely different, and lossless, thing. You can call it escaping or encoding.


Sanitizing is just a form of encoding that prevents data from becoming executable unintentionally.


I don’t like how you’re categorising things. Sanitising is absolutely nothing to do with encoding. You can sanitise without encoding, you can encode without sanitising, or you can do both in sequence; and all of these combinations are reasonable and common, in different situations. And sanitising may operate on serialised HTML (risky), or on an HTML tree (both easier and safer).

Saying sanitising is a form of encoding is even less accurate than saying that a paint-mixing stick is a type of paint brush. You can mix paint without painting it, and you can paint without mixing it first.
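A toy illustration of the difference, lossless encoding versus lossy sanitising (a real sanitiser such as DOMPurify works on a parsed tree, not on the serialised string):

```javascript
// Encoding: reversible, every input survives, just represented safely.
function encodeHtml(s) {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

// Sanitising: destructive by design -- parts of the input are dropped.
// Toy regex only, for illustration; don't sanitise real HTML this way.
function sanitizeCrudely(s) {
  return s.replace(/<script[\s\S]*?<\/script>/gi, "");
}

const input = "hi <script>alert(1)<\/script> <b>there</b>";
// encodeHtml(input) keeps everything, inert: "hi &lt;script&gt;..."
// sanitizeCrudely(input) drops the script, keeps the rest.
```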


They could always fall back to storing a value in a hidden element in the worst case. All/some/none selected is often done with an indeterminate state checkbox https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/... that can represent all three states.

Maybe I don't understand the problem you are talking about.
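For reference, the tri-state wiring is small (element ids are hypothetical):

```javascript
// Pure helper: derive the header checkbox state from the row states.
function triState(flags) {
  const n = flags.filter(Boolean).length;
  if (n === 0) return "none";
  if (n === flags.length) return "all";
  return "some";
}

// Browser-only wiring; guarded so the sketch also loads outside a DOM.
if (typeof document !== "undefined") {
  const header = document.querySelector("#select-all");
  const rows = [...document.querySelectorAll(".row-check")];
  const state = triState(rows.map(r => r.checked));
  header.checked = state === "all";
  header.indeterminate = state === "some"; // JS-only property, not an attribute
}
```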


As soon as you need to store some state elsewhere, you can keep it in another suitable form (there's often some state not visually represented). I seem to recall jQuery stored element state in a POJO (plain old JavaScript object) held in a JavaScript variable.
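The modern analogue of that jQuery trick is a WeakMap keyed by the element; a minimal sketch:

```javascript
// Per-element state without hidden inputs: a WeakMap keyed by the
// element object itself.
const state = new WeakMap();

function setState(el, patch) {
  state.set(el, { ...(state.get(el) || {}), ...patch });
}

function getState(el) {
  return state.get(el) || {};
}

// Entries are garbage-collected along with their keys, so removing a
// node from the DOM eventually drops its associated state too.
```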


> They could always fall back to storing a value in a hidden element in the worst case.

This approach sounds like it's desperately trying to shove a square peg through a round hole. Why would anyone choose to use an element, hidden or not, to store a value as an alternative to using a very pedestrian JavaScript variable?


I have not started to use LLMs, so yes I still use search engines.


Yes, in that I have the time but need something meaningful to build with it. Let me know if you might need a partner.


Apparently the 150-year-old Social Security recipients were due to a COBOL quirk where the zero datetime is 1875. Interesting, but not fraud.


That doesn't hold up, because there are 200-year-old people as well, and apparently millions of 100-year-olds. I'm not saying all of it is fraud, but incompetence leading to waste would not be surprising.


It sounds like there are several million in the db with those ages and without death records, but it also sounds like the vast majority are NOT collecting paychecks either. Also unclear how many of those who are collecting involve money going to living spouses or whatever other rules exist.

https://xcancel.com/ThatsMauvelous/status/189135619250239902...


Too late to edit, but I've learned this is not necessarily true (but could be a default date used by the SS code in particular). Sorry for spreading rumors.


> COBOL quirk where the zero datetime is 1875

This is a piece of misinformation coming from a Twitter/X post.


20 May 1875 is the reference date of ISO 8601:2004.

You don't need to trust me on that, you can just go check the standard.

Or you can claim that it's misinformation too. Up to you.


Wouldn't you pause for a moment to consider how a 2004 standard is relevant to a COBOL codebase that is probably more than 50 years old at this point?


I'm not sure everything in there is 50 years old. Not even that everything is actually in COBOL. Those gigantic beasts tend to eventually be quarantined into some VM, never touched again, and then somebody puts some modern-ish wrappers around. For example some HTTP JSON API endpoints to query things. And what do they do when a date is missing? Not returning one would surely make sense. But I'd also expect layers and layers of abstractions in between, maybe some libs to transform some data type in one representation into another. Somewhere on the way, this date as a default value could easily slip in. It's not entirely made up, it's in an ISO standard. Maybe the lib was strictly following that standard.

It's not hard to imagine that something like this actually happened. Dismissing it outright just because COBOL does not have a datetime data type and the standard is only 21 years old (that far pre-dates node.js btw) could be playing into the hands of the Muskians who surely love any possibility to get out of this BS in case they made a mistake. Would not be the first they made nor the first they handled that way.


There is no cluster at 150 in the underlying data though, there's even distribution among unrealistically high age ranges. This is yet another case of people taking partisan telephone game conjecture literally.


> There is no cluster at 150 in the underlying data though, there's even distribution among unrealistically high age ranges.

Is that so? How do you know? Afaik that data is confidential and access is highly restricted. Or are you saying that that's what Elon said and we should take his word for it?

> This is yet another case of people taking partisan telephone game conjecture literally.

I don't follow. Could you please explain?


Working the backhoe requires more skill than shoveling and can command a higher wage. You want to more carefully vet and care for the person driving your expensive equipment. Also you had an engineer to design the backhoe and factory workers to assemble it who are getting paid during this process. It's possible that the net wage per hole dug goes up as a result.



...but with manually running autoinstrumentation in the post-fork hook.

I guess there is a lot of undocumented magic in OTel...

