Hacker News | morningsam's comments

>Typeless loosy goosy code that passes dictionaries all over the place is just not fun.

mypy --strict in CI & don't let dict[str, Any] pass review; if the keys are constants, insist on a dataclass or at least a TypedDict.
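
Something like the following is the kind of replacement being suggested; just a sketch, and the payload name and fields are made up for illustration:

  from dataclasses import dataclass
  from typing import Any, TypedDict

  # The loosey-goosey shape: nothing stops a typo'd key or a wrong value type.
  def handle_loose(payload: dict[str, Any]) -> None:
      print(payload["user_id"], payload["email"])

  # Keys made explicit: mypy --strict flags missing or misspelled keys and
  # wrong value types at review/CI time. A dataclass works the same way if
  # you'd rather have attribute access.
  class UserPayload(TypedDict):
      user_id: int
      email: str

  @dataclass
  class User:
      user_id: int
      email: str

  def handle_typed(payload: UserPayload) -> None:
      print(payload["user_id"], payload["email"])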


> & don't let dict[str, Any] pass review

good luck justifying that to your manager (who's reacting to complaints from within your team that you're being a bottleneck).


So just make it a dict[str, str] and json encode the payload.


Looking through the options listed under "Non-Durable Settings", [1] I guess synchronous_commit = off fits the bill?

[1]: https://www.postgresql.org/docs/current/non-durability.html


Nope, another commenter noted it:

https://www.postgresql.org/docs/current/runtime-config-wal.h...

Don't use synchronous_commit = off; that's durability ~= 0 (i.e. "I hope the write made it to disk").


In German, we use "aufrufen", which means "to call up" if you translate it fragment-by-fragment, and in pre-computer times would (as far as I know) only be understood as "to call somebody up by their name or number" (like a teacher asking a student to speak or get up) when used with a direct object (as it is for functions).

It's also separate from the verb for making a phone call, which would be "anrufen".


Interesting! Across the lake in Sweden we do use "anropa" for calling subprograms. I've never heard anyone in that context use "uppropa" which would be the direct translation of aufrufen.


Same in Dutch. “Oproepen” means “to summon”. We would use “aanroepen”.


That's assuming the AI owners would tolerate the subsistence farmers on their lands (it's obvious that in this scenario, all the land would be bought up by the AI owners eventually).


I don't believe any sort of economy or governmental system would actually survive any of this. Ford was right in that sense: without people with well-paying jobs, no one will buy the services of robots and AIs. The only thing that would help would be massive redistribution of wealth through inheritance taxation and taxation on ownership itself. Plus UBI, though I'm fairly sceptical of what that would do to a society without purpose.


Seems like this was requested in 2021 and is currently in beta testing for select ecosystems only: https://github.com/dependabot/dependabot-core/issues/3651


>Though I have wondered about the idea of programming something like Dependabot, but telling it, hey, tell me about known CVEs and security releases, but otherwise, let things cook for 6 months before automatically building a PR for me to update.

Renovate can do both of these things already:

https://docs.renovatebot.com/configuration-options/#vulnerab...

https://docs.renovatebot.com/configuration-options/#minimumr...


There's a huge difference between believing someone will fail and hoping that they will.


>Spawning a PYTHON interpreter process might take 30 ms to 300 ms

Which is why, at least on Linux, Python's multiprocessing doesn't do that but fork()s the interpreter instead, which takes only low single-digit milliseconds.
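
For illustration, a small sketch of choosing the start method explicitly and timing it; exact numbers will vary by machine, and "fork" is only available on POSIX systems:

  import multiprocessing as mp
  import time

  def child() -> None:
      pass  # trivial work; we only care about startup cost

  if __name__ == "__main__":
      for method in ("fork", "spawn"):  # "fork" is POSIX-only
          ctx = mp.get_context(method)
          start = time.perf_counter()
          p = ctx.Process(target=child)
          p.start()
          p.join()
          print(f"{method}: {(time.perf_counter() - start) * 1000:.1f} ms")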


Even when the 'spawn' strategy is used (default on Windows, and can be chosen explicitly on Linux), the overhead can largely be avoided. (Why choose it on Linux? Apparently forking can cause problems if you also use threads.) Python imports can be deferred (`import` is a statement, not a compiler or pre-processor directive), and child processes (regardless of the creation strategy) name the main module as `__mp_main__` rather than `__main__`, allowing the programmer to distinguish. (Being able to distinguish is of course necessary here, to avoid making a fork bomb - since the top-level code runs automatically and `if __name__ == '__main__':` is normally top-level code.)
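
As a sketch of that pattern (the "expensive" import here is just a stand-in), with the heavy import deferred into the worker so a 'spawn' child re-importing the main module doesn't pay for it:

  import multiprocessing as mp

  def worker(x: int) -> int:
      # Deferred import: only processes that actually run the worker pay the
      # import cost, not every child that merely re-imports the main module.
      import json  # stand-in for an expensive import
      return len(json.dumps({"value": x}))

  if __name__ == "__main__":  # False in children, whose main module is __mp_main__
      ctx = mp.get_context("spawn")
      with ctx.Pool(2) as pool:
          print(pool.map(worker, range(4)))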

But also keep in mind that cleanup for a Python process also takes time, which is harder to trace.

Refs:

https://docs.python.org/3/library/multiprocessing.html#conte...

https://stackoverflow.com/questions/72497140


I really wish Python had a way to annotate things you don't care about cleaning up. I don't know what the API would look like, but I imagine something like:

  l = list(cleanup=False)
  for i in range(1_000_000_000): l.append(i)
telling the runtime that we don't need to individually GC each of those tiny objects and just let the OS's process model free the whole thing at once.

Sure, close TCP connections before you kill the whole thing. I couldn't care less about most objects, though.


Tbh if you're optimizing python code you've already lost


On a 64-core machine, Python code that uses all the cores will be modestly faster than single-threaded C, even if all the inner loops are in Python. If you can move the inner loops to C, for example with Numpy, you can do much better still. (Python is still harder to get right than something like C or OCaml, of course, especially for larger programs, but often the smaller amount of code and quicker feedback loop can compensate for that.)
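
As a rough illustration of the "move the inner loops to C" point (a sketch; timings will vary widely by machine and workload):

  import time
  import numpy as np

  def py_sum_squares(n: int) -> int:
      total = 0
      for i in range(n):          # inner loop runs in the interpreter
          total += i * i
      return total

  def np_sum_squares(n: int) -> int:
      a = np.arange(n, dtype=np.int64)
      return int(np.dot(a, a))    # inner loop runs in compiled code

  n = 1_000_000
  for fn in (py_sum_squares, np_sum_squares):
      t0 = time.perf_counter()
      fn(n)
      print(f"{fn.__name__}: {time.perf_counter() - t0:.4f} s")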


I strongly doubt this claim. Python is more than 64x slower than C without synchronization overhead in most numeric tasks, with synchronization overhead on those processes it should be much worse.

Python is so much slower than any native or JIT compiled language that it begets things like numpy in the first place.


My typical experience is about 40×.


Run along.


You'd presumably need to do something involving weakrefs, since it would be really bad if you told Python that the elements can be GCd at all (never mind whether it can be done all at once) but someone else had a reference.

Or completely rearchitect the language to have a model of automatic (in the C sense) allocation. I can't see that ever happening.


I don't think either of those are true. I'm not arguing against cleaning up objects during the normal runtime. What I'd like is something that would avoid GC'ing objects one-at-a-time at program shutdown.

I've had cases where it took Python like 30 seconds to exit after I'd slurped a large CSV with a zillion rows into RAM. At that time, I'd dreamed of a way to tell Python not to bother free()ing any of that, just exit() and let Linux unmap RAM all at once. If you think about it, there probably aren't that many resources you actually care about individually freeing on exit. I'm certain someone will prove me wrong, but at a first pass, objects that don't define __del__ or __exit__ probably don't care how you destroy them.


Ah.

I imagine the problem is that `__del__` could be monkeypatched, so Python doesn't strictly know what needs custom finalization until that moment.

But if you have a concrete proposal, it's likely worth shopping around at https://discuss.python.org/c/ideas/6 or https://github.com/python/cpython/issues/ .


I might do that. It’s nothing I’ve thought about in depth, just an occasionally recurring idea that bugs me every now and then.


Never experienced this. If this is truly a problem, here's a sledgehammer; just beware it will not close your TCP connections gracefully: os.kill(os.getpid(), signal.SIGKILL).
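
Spelled out as a runnable sketch (POSIX-only, since Windows has no SIGKILL); os._exit(0) is a slightly tamer variant that also skips interpreter teardown, and neither runs atexit handlers or finalizers:

  import os
  import signal

  data = [{"i": i} for i in range(5_000_000)]  # objects we don't care about freeing

  # ... do the real work ...

  # The sledgehammer: the kernel reclaims all memory at once, no per-object teardown.
  os.kill(os.getpid(), signal.SIGKILL)

  # Alternative (unreachable here): exit immediately without cleanup,
  # but with an exit status of your choosing.
  # os._exit(0)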


There's already a global:

  import gc
  gc.disable()
So I imagine putting more in there to remove objects from the tracking.
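
One existing knob along those lines is gc.freeze() (CPython 3.7+), which moves everything currently tracked into a permanent generation that later collections skip. A sketch:

  import gc

  gc.disable()                     # stop automatic cycle collection
  long_lived = [{"i": i} for i in range(1_000_000)]  # stand-in for startup data
  gc.freeze()                      # everything tracked so far is now ignored by
                                   # future collections
  # ... run the workload; gc.collect() can still be called manually if needed ...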


That can go a long way, so long as you remember to manually GC the handful of things you do care about.


Is there a good way to add __del__() methods or to wrap Context Manager __enter__()/__exit__() methods around objects that never needed them because of the gc?

Hadn't seen this:

  import gc
  gc.disable()
Cython has __dealloc__() instead of __del__()?
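
For the first part of the question, weakref.finalize attaches cleanup to an object without giving its class a __del__, and contextlib.closing wraps anything with a close() method as a context manager; a sketch (the Resource class is made up for illustration):

  import contextlib
  import weakref

  class Resource:
      """An object that never defined __del__ or __enter__/__exit__."""
      def close(self) -> None:
          print("closed")

  r = Resource()
  # Attach a finalizer without touching the class; it runs when r is collected
  # or at interpreter exit, whichever comes first.
  weakref.finalize(r, print, "finalizer ran")

  # Wrap an instance as a context manager after the fact.
  with contextlib.closing(Resource()) as res:
      pass  # use res; res.close() is called on exit from the with-block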


Also, there's a recent proposal to add explicit resource management to JS: "JavaScript's New Superpower: Explicit Resource Management" https://news.ycombinator.com/item?id=44012227


And then we're back to manual memory management.

At least the objects get instantiated automatically, and you don't need to malloc() them into existence yourself; I guess that's still something.


> Which is why, at least on Linux, Python's multiprocessing doesn't do that but fork()s the interpreter

…which can also be a great source of subtle bugs if you're writing a cross-platform application.


>I grumpily ordered a replacement key for 15 euros.

A single key for 15€?! I remember ordering one from an online shop specialized in replacement laptop keys at some point in the 2010s and it was like 2€ total. Browsing through similar shops now, it seems like the minimum is 5€ per key nowadays, but still a far cry from 15€.

>After spending more than 100 euros on plastic keys, which would soon break again, I calculated that my keyboard had 90 keys and that replacing them all just once would cost me 1,350 euros.

Someone who breaks keys this often could just buy the whole keyboard assembly FRU for ~30-50€ and take spare keys out of that, assuming it's not always the same ones that break.


Even if it is, most of the keys are interchangeable. Your fingers don’t care that the M key has an N on it.


With my mechanical keyboard, I have had a tendency to break a few (specific) keys. I got around it by 3D printing a few blank keys.

So long as I know where the N and M keys are, that's all that matters :)


How are you people breaking so many keys?!


Maybe they learned to type on a mechanical typewriter...


For me, you’re not far off… it was an electric typewriter. So, the force I applied wasn’t directly linked to the force of hitting the paper, but it was … ahem … robust. Between that, the IBM Model M clone we had on the PC, or the membrane keyboard in our Atari 400, my early muscle memory might be skewed.

Now though, we’re talking mainly laptop keyboards. My desk keyboard is a low-profile keyboard with pretty thin keys (Keychron K2). If you hit them at the right angle, there’s not much plastic there to absorb the shock.


As a lifelong Dvorak user, for whom the keys I press have never produced the letters printed on them, it amuses me that the "M" key is one of only two exceptions (the other is "A").


I used to be a Thinkpad die-hard (including both IBM and Lenovo), and as you say the full replacement keyboard was like $30US. And after replacing the keyboard, which I seemed to need to do every 3-ish years, it felt like a new laptop! Plus, the replacement would only take ~5-15 minutes.

Unlike my daughter's friend's Dell, where basically everything had to come out of the laptop to get at the keyboard (battery, speakers, motherboard, etc), AND it was plastic-riveted down. I must have spent 2-4 hours replacing it, because I had to do it twice (for reasons I don't remember).


I love Thinkpads and have a few, but this depends on the model.

Some Thinkpads, more specifically the x2xx series (i.e. x250–x290), require removing all internals to get to the keyboard to replace it (batteries, storage, wifi, motherboard, speakers, CMOS battery, and a few others). The Dell Latitude E5470, on the other hand, allows replacing its keyboard by pulling out a small plastic panel and removing one screw.


I was wondering what the catch was :-)

GPLv3 reimplementation linked in that thread: https://github.com/joeedh/pigment-painter


There is also spectral.js which is MIT: https://github.com/rvanwijnen/spectral.js

