Nobody here has addressed the actual issue with blockchain in a cashless society. You need internet access to use crypto, and your internet company accepts only legal currency (card, cash, etc). Since there's no cash and you've been cut off from using cards, you can't pay your internet bill, which means your connection will eventually get cut and you won't be able to use crypto. Crypto needs cash to work, because people need cash or cards for utilities; those companies won't accept your make-believe money the way the little shop on the corner does.
I've worked for close to a decade with Django's ORM and more recently with TypeORM; both have been simple to use and generally (minus some weird things I tried to do) a pleasure. I recently started working on a not-so-important project at my current job with SQLAlchemy because I had to use Flask, and I cannot describe the pain I've felt working with it; I dread having to write yet another query with SQLAlchemy. The docs are terrible, some of the worst I've ever seen: an obscure and undocumented government library I'm also using at work has been easier to learn by reading its code than SQLAlchemy. Someone above mentioned that the new 2.0 docs are better; I seriously hope they are, as they will make my suffering more tolerable. However, even if the docs are good, the API is still the worst and least intuitive I've ever seen; it honestly feels like I'm writing raw SQL shaped like Python code, but weird, non-standard, and difficult to follow, unlike the official Python SQL interface. I recently had to write an upsert and it felt like I was trying to summon a forgotten demon. It would honestly have been much easier to just write it in raw SQL.
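For reference, the kind of upsert I'm describing ends up looking something like this (just a sketch with made-up table and column names, using the PostgreSQL dialect):

```python
# Sketch of a PostgreSQL upsert in SQLAlchemy; table/column names are invented.
from sqlalchemy import Column, Integer, MetaData, String, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import insert

metadata = MetaData()
users = Table(
    "users",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
)

# INSERT ... ON CONFLICT (id) DO UPDATE SET name = excluded.name
stmt = insert(users).values(id=1, name="alice")
stmt = stmt.on_conflict_do_update(
    index_elements=[users.c.id],
    set_={"name": stmt.excluded.name},
)

# Render the statement without touching a database.
compiled = str(stmt.compile(dialect=postgresql.dialect()))
print(compiled)
```

Compare that with the one-line `INSERT ... ON CONFLICT` you'd write in raw SQL.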
I'm sorry for the wall of text, I wanted to let you know that I'm very grateful for you suggesting that library. I hope I can migrate this (still) relatively small codebase away from sqlalchemy. I'm going to give pugsql and peewee a try, both have been mentioned in this thread as good alternatives.
I can summarize my basic complaint here: the abstraction layer that SQLAlchemy provides is more complex than SQL itself. It's almost not worth it; add to that the docs being a huge mess, and you get something only diehards and people who have been using it for decades want to use.
> Someone above mentioned that the new 2.0 docs are better; I seriously hope they are, as they will make my suffering more tolerable. However, even if the docs are good, the API is still the worst and least intuitive I've ever seen
That's exactly my experience using SQLAlchemy on a previous project, which I chose to do with the Starlite framework. The pain is horrible. I don't even want to touch writing another query, even if it's simple CRUD.
> However, even if the docs are good, the API is still the worst and least intuitive I've ever seen; it honestly feels like I'm writing raw SQL shaped like Python code, but weird, non-standard, and difficult to follow, unlike the official Python SQL interface. I recently had to write an upsert and it felt like I was trying to summon a forgotten demon. It would honestly have been much easier to just write it in raw SQL.
At this point, only the Django ORM is fun, and even raw SQL is a lot more fun than SQLAlchemy. I looked into the docs and it seems not much has improved. The API still has many ways to do a single thing, each returning slightly different results.
I don't understand why the Python community embraces SQLAlchemy and touts it as the best ORM in the world while it breaks the entire Zen of Python. Even plain old SQL statements are closer to the philosophy of Python.
pugsql looks like a lot of fun, but when I checked the last commit, it was almost a year ago..
Try not to recommend Tampermonkey: it is closed source, slow, bloated, and a bit sketchy; it uses Google Analytics, and right now the privacy policy link on their website isn't working for me (it redirects to the home page). There are better, faster, less bloated, open source alternatives, like Violentmonkey, which now has a reasonable privacy policy (https://violentmonkey.github.io/privacy/).
Btw, thanks for the script. The alternating colours between comments and replies should be part of the standard Hacker News experience; I'm glad you added that.
Completely agreed on avoiding Tampermonkey, but Violentmonkey very often fails to detect the page on one of my scripts. If I refresh, it works. This is a match with no wildcards, just a simple path after the URL.
Perhaps it's a simple bug, but it doesn't give me much faith in the project.
That's a shame; it has never happened to me. Have you tried changing it to an include or adding a wildcard at the end? The project has been active for a very long time (I've been using it since it was only an Opera 12 extension) and the developer seems very responsive, so try reporting a bug.
I haven't tried that yet but I'll give both a try, thank you. I'm not giving up on it yet. Good to know they're responsive, I might look at fixing it myself.
I've been running AdGuard Home for years, almost since it came out. I prefer keeping it up to date manually after reading the release notes and known issues (by visiting the config page and clicking update when it notifies me that there is a new version), but if you want to automate it you can install the Snap or Docker version or use the API[1] to trigger an auto-upgrade. Installing it is easy; there's even an automated installation script now[2], but I prefer to do it manually and run it as a regular user without root permissions. It can run from pretty much anywhere in your filesystem and as any user, as long as it has the correct permissions. That's what made me choose it over Pi-hole at first, along with the more advanced regex blocking rules.
The easiest way to keep it up to date is probably a cron job that runs curl to trigger the upgrade API endpoint.
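Something roughly like this (a sketch; the endpoint path, port, and credentials here are assumptions, so check the API docs of your own instance):

```shell
#!/bin/sh
# Sketch of an auto-update script for AdGuard Home.
# The /control/update path, port 3000, and the basic-auth credentials are
# assumptions; adjust host, port, and auth to match your own instance.
curl -fsS -X POST \
  -u "admin:changeme" \
  "http://127.0.0.1:3000/control/update"

# Example crontab entry (runs every Sunday at 04:00):
# 0 4 * * 0 /usr/local/bin/adguard-update.sh
```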
What DNS adblockers like Pi-hole see is only a request for the domain (example.com); they can't see whether it is HTTP or HTTPS, nor can they see /ads.js or the rest of the path, port, query parameters, etc. They may not even see a second attempt at loading the domain (for the /ads.js request) because the browser and the OS have probably cached the first lookup.
uBlock Origin and other in-browser adblockers can see the whole request and modify it[1]. They can see whether it is HTTP or HTTPS (that's how HTTPS Everywhere knows what to redirect and where), and they can see whether you are loading /video or /ads.js; if they see /ads.js, they can tell the browser not to load it.
[1] Google is going to remove the "modify" part of this functionality in Manifest V3, citing privacy and security concerns (while ironically keeping the "see" part), in an attempt to kill or limit adblockers. Since YouTube uses an ever-changing list of domains for serving videos and ads, this change will effectively unblock ads on YouTube: adblockers will only be able to keep a static list of what to block and when, instead of deciding dynamically, and that static list can only be updated (as far as I know) by pushing a new version of the extension to the store, severely limiting the frequency of updates and making it impossible to keep up with YouTube's frequent changes.
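The "DNS level vs in-browser" difference above can be sketched with the URL API (a hypothetical URL, just for illustration):

```javascript
// What each layer can see for the same request.
const url = new URL("https://example.com/ads.js?campaign=42");

// A DNS-level blocker (Pi-hole, AdGuard Home) only ever receives the hostname:
const dnsView = url.hostname;

// An in-browser blocker (uBlock Origin) sees the full request:
const browserView = {
  scheme: url.protocol,   // "https:"
  host: url.hostname,     // "example.com"
  path: url.pathname,     // "/ads.js"  <- enough to block just the ad script
  query: url.search,      // "?campaign=42"
};

console.log(dnsView);          // "example.com"
console.log(browserView.path); // "/ads.js"
```

With only `dnsView` to go on, blocking /ads.js means blocking the whole domain, including /video.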
Yes, Mozilla is keeping the whole functionality. You can MITM yourself, but then you'll be lowering your own security: your proxy will have to do its own certificate validation because the browser won't be able to do it anymore. You are also restricted to things you can modify with a simple regex (unless you add HTML parsing to your proxy, but then you'll be parsing twice, once in the proxy and again in the browser). It's still probably going to break on websites in the HSTS preload list, and content generated by JavaScript won't be blocked easily. It's also going to be very inefficient; don't underestimate the years of performance improvements behind adblocking extensions. Adblockers like uBlock Origin also do much more than just block requests. For example, they can inject small snippets of JavaScript to neutralize tracking scripts without breaking websites that depend on them, by introducing dummy functions with the same API as the tracking script, or to counter anti-adblocking scripts. They can also inject CSS snippets to fix website breakage, block requests based on which website they originate from, and probably much more than you can easily do with a simple proxy.
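The "dummy functions" trick can be sketched like this (a made-up analytics API for illustration, not any real scriptlet code):

```javascript
// Neutering stub: replace a tracking script's API with no-ops so page code
// that calls it keeps working, but nothing gets sent anywhere.
const noop = () => {};

const fakeAnalytics = {
  track: noop,
  identify: noop,
  page: noop,
  // Many trackers queue events via push(); accept and silently discard them.
  push(...events) {
    return events.length; // mimic Array.prototype.push's return style
  },
};

// Page code like this now runs without errors and without tracking:
fakeAnalytics.track("click", { button: "signup" });
fakeAnalytics.push({ event: "pageview" });
```

A proxy would have to rewrite the page's HTML to achieve the same effect; an extension just injects the stub before the real tracker loads.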
I agree with this; I've been using Django for more than a couple of years. The built-in admin interface is a blessing, and the ORM is good enough: unless you are handling very heavy loads, it doesn't matter that it doesn't always generate the most efficient queries (in more than 5 years I've never run into any issues with that). The almost-but-not-exactly-MVC pattern that lets you mix both class-based and function-based views, coupled with a very limited template system, is a curse. I would have preferred one way to do things(TM), but the Django approach lets you do things badly in a hundred different ways if you aren't very experienced with it (and even if you are, you need a lot of discipline to do things properly, and even if you have discipline, sometimes Django doesn't have a feature you need, so you implement your own, and then a major release comes with its own incompatible way of doing the same thing). I still wouldn't trade Django for anything else; I'm sure I wouldn't have a job anymore if it weren't for how fast and easily I can implement things with it (management is an absolute hell here and changes how everything is done every couple of months, until management itself gets fired and replaced every year or two).
Any plans on porting it to more mainstream phones? It seems to be aiming for "first-world phones" at the moment, which is nice since they are more open and you don't have to worry (too much) about proprietary blobs and firmware, but it leaves a big chunk of the world out.
PostmarketOS runs on more devices (and is a great distribution!), but a lot of those devices are supported through Halium (a compatibility layer).
Halium is interesting because it comes with the potential of building a pmOS Generic System Image (GSI). While this would rely on vendor-shipped proprietary drivers and kernels, literally everything else could be free and run with a full feature set on almost any GSI-supported device.
Yep! My personal issue with Halium is that you now have devices out there running something like Linux 3.10, which I shudder at because of security concerns.
Whoops, my mistake! Unfortunately I cannot edit my comment to reflect that.
So, out of curiosity, how are you all able to port to so many devices? Do you have a bunch of volunteers, or are most of the devices not too difficult to port to pmOS?
A few of the pmOS folks were kind enough to discuss the pmOS structure with me, so I will try to summarize here:
pmOS has really good tooling called pmbootstrap. They are able to use the downstream kernels directly (or a mainline kernel if the device supports it), and it generally isn't too difficult to get pmOS to boot with the downstream kernel. However, with a downstream kernel you may not get a lot of features and will have to work to port them to pmOS.
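The typical pmbootstrap flow looks roughly like this (a sketch from memory; the exact flashing steps depend heavily on the device, so check the pmOS wiki page for yours):

```shell
# Rough pmbootstrap workflow (device-dependent; see the pmOS wiki for specifics).
pip install --user pmbootstrap   # or install it from your distro's packages

pmbootstrap init                 # pick release channel, device, UI, etc.
pmbootstrap install              # build the rootfs for the chosen device

# Flashing varies per device; for many fastboot devices it's roughly:
pmbootstrap flasher flash_kernel
pmbootstrap flasher flash_rootfs
```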
They also have device categories, so if someone wants to see the status of a device, they can look here:
https://wiki.postmarketos.org/wiki/Devices which bins the devices so a user can quickly understand how well a device runs pmOS.
We welcome users, testers, and new developers! I actually started development with Mobian earlier this year. I personally would recommend getting a Pinephone for it.
The driver situation is super tough. Modern phones are full of proprietary shit and nobody is willing/able to reverse engineer all of this, and things like documentation and data sheets aren't really available.
Saying that JavaScript is single-threaded isn't accurate anymore. Even the article we just read mentions that you can run multiple threads with Web Workers. The problem with JavaScript is that the main thread may* block the UI.
* Why "may"? The older APIs such as document.write and synchronous XHR do block, but modern browsers already warn against those, and the modern APIs don't block the UI because they are asynchronous and work with callbacks or promises. Bad JavaScript code can still make the UI sluggish, though: people should perform complex tasks in a Web Worker, but that's not as easy or as obvious as the default of performing them on the UI thread. The same problems happen in native Windows programming for the same reason, the UI thread being the main thread. This is a bad design decision from decades ago that will likely haunt us for many more decades, on the desktop and on the web.
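A minimal Node sketch of that event-loop behaviour (browsers work the same way on the main thread):

```javascript
// A callback scheduled for "now" cannot fire while synchronous code hogs the
// one main thread; this is exactly why heavy work makes UIs sluggish.
let fired = false;
setTimeout(() => { fired = true; }, 0);

const start = Date.now();
while (Date.now() - start < 50) {
  // Busy-wait: stands in for heavy synchronous work (parsing, layout, etc.).
}

// 50 ms have passed, yet the 0 ms timer still hasn't run:
console.log(fired); // false

setTimeout(() => {
  console.log(fired); // true, once the event loop finally gets control back
}, 0);
```

In a browser, that 50 ms busy loop is 50 ms during which clicks, scrolling, and rendering are all frozen.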
> Saying that JavaScript is single-threaded isn't accurate anymore. Even the article we just read mentions that you can run multiple threads with Web Workers
This doesn't make Javascript multithreaded. It means you run single-threaded programs in separate containers and pass messages between them.
If you mean multithreaded as in running separate OS threads, I've got to agree with you, but the definition of a thread isn't limited to just that. I don't know the internals of web browsers, or whether they use OS threads, green threads, or a combination of both for web workers, but they are threads, and that's what MDN calls them too: https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers...
I'm not sure I get your point. It's more or less what I said: it isn't using real OS threads, but it is doing something similar to green threads. It isn't true multithreading in the sense that it doesn't spawn an OS thread, but it spawns lightweight (or green, or simulated) threads with their own VM.
> It's more or less what I said, it isn't using real OS threads but it is doing something similar to green threads.
It's not even green threads.
Also, literally on the page you linked:
=== start quote ===
The Worker interface spawns real OS-level threads,
=== end quote ===
[1]
> it doesn't spawn an OS thread but it spawns lightweight (or green or simulated) threads with their own VM.
It spawns an isolated process. Javascript as a language and its runtime cannot support threads. To do "threads" they basically initialise a new instance of JS runtime.
This is not "threading" by any definition. MDN page may call that for the sake of people who end up using it, but these are not:
It's an outside implementation: processes running inside the host, communicating via memory-mapped values. 20 years ago no one in their right mind would have called this "multithreading in language X". It was "app 1, written in any language, communicating with app 2, written in any language, via memory-mapped files". Now people who've never seen anything outside web development call it multithreading.
[1] Fun trivia: original implementation literally used a runtime per worker: https://blog.mozilla.org/luke/2012/01/24/jsruntime-is-now-of... It still uses a CycleCollectedJSRuntime per worker, but I'm too lazy to dig through source code for further details.
> It spawns an isolated process. Javascript as a language and its runtime cannot support threads. To do "threads" they basically initialise a new instance of JS runtime.
Sometimes I need an answer as blunt and direct as that one to understand something, thanks.
Unfortunately, there's also a dearth of tech articles discussing how the many parts of web APIs work, and how they are implemented. High-level articles help with using them, but will always simplify things :)
=== start quote ===
To share memory using SharedArrayBuffer objects from one agent in the cluster to another (an agent is either the web page’s main program or one of its web workers), postMessage and structured cloning is used.
=== end quote ===
And, of course, they are almost exclusively used with Web Workers because it makes zero sense to use them in the context of a single page. They are basically memory-mapped files, but for the browser, and their presence doesn't make Javascript and its runtime multithreaded.
IIRC Javascript can't even be made multithreaded because there are places in the spec that can't work in a multithreaded environment, but don't quote me on that.
> However, the shared data block referenced by the two SharedArrayBuffer objects is the same data block, and a side effect to the block in one agent will eventually become visible in the other agent.
This gives you shared memory between two threads. Sure, the entire address space isn't shared, but I find it hard to deny that this is threading.
> This gives you shared memory between two threads.
This is the key point you're missing. It's not "two threads". It's two different isolated tasks/processes.
Shared array buffers are quite literally what memory-mapped files have been providing for over 40 years [1]
=== start quote ===
Another common use for memory-mapped files is to share memory between multiple processes. In modern protected mode operating systems, processes are generally not permitted to access memory space that is allocated for use by another process... There are a number of techniques available to safely share memory, and memory-mapped file I/O is one of the most popular. Two or more applications can simultaneously map a single physical file into memory and access this memory.
=== end quote ===
That's all there is: two separate, isolated processes accessing the same memory. This doesn't make JavaScript multithreaded in any way, shape, or form. Browsers give you a rather awkward way to run separate tasks in JavaScript, plus message passing and shared memory as ways to communicate between them.
Had browsers been able to run languages other than JavaScript, you would be able to run one worker in JavaScript and another in SadlyNonExistentScript, and literally nothing would change: you would still have the same postMessage and SharedArrayBuffer APIs provided by the host.
Worker threads use a special mechanism to run workers isolated from each other, since JavaScript doesn't support multithreading.
We all know Node runs on top of Chrome's V8 engine. V8 supports the creation of isolated V8 runtimes. These isolated instances, known as V8 Isolates, have their own JavaScript heaps and microtask queues.
Worker threads run on these isolated V8 engines, each worker having its own V8 engine and event queue. In other words, when workers are active, a Node application has multiple Node instances running in the same process.
This particular macroassembler is for .NET CIL (https://en.wikipedia.org/wiki/Common_Intermediate_Language); normal CPU assembly seems to be out of scope for this project.