>So what should I expect about the integrity of justice here?
You could maybe ask the other 26 nations how they might see things?
Remember the EU is not just 1 lady who had a bad daddy. And her tenure is not forever.
The same is also true of the US, every 4 years the whole country gets to vote for either the worst of their instincts, or for the person that embodies a higher ideal. So a lot can change over the next few decades.
I'm with C. Hitchens, a short term pessimist, but a long term optimist :)
Coming out of the 1st and then 2nd World wars the US seemed to be the voice of reason and alongside Britain (and possibly France) had a strong desire to make the world a better place and not repeat the mistakes of the past - which were all around to see.
But I feel that after the 1960s the US slowly lost that ideal, and the visionary leaders (Woodrow Wilson, Roosevelt, or Eisenhower) are no longer anywhere to be found (whether the electorate doesn't tolerate or doesn't desire them, I'm not sure).
I think you've got an overly rosy view of pre-1960s history. Post-WWI saw all three of those powers continue their colonialist land grabs, and post-WWII saw all three commit many atrocities in the effort to check Russian advances.
Both the Japanese and Korean antidemocratic moves took place in that pre-1960 era, for example.
Oh I used to love leaky HANDLE hunts. These things could bring Windows 3.x (and maybe NT 4.x too?) to its knees pretty quickly, so I was always really careful about diligently freeing them.
And, like many commenters below, I'm constantly surprised that so many apps still ship with these problems, since it's so easy to spot them. Windows perfmon also gives you a nice graph, so you can correlate e.g. GUI behaviour with a jump in handles being created that then never get freed.
Whilst it's easy to spot the leak, the fact that you can leak handles by entirely forgetting they exist can make it harder for programmers to find where their code leaked the handle, and so fix it.
If you leak 6GB of RAM, there's 6GB of evidence about what was leaked exactly. If it's full of terrible love poetry you can rule out "FootBallScores" and focus on the "TeenagePoems" data structure and related code. But if you leak 50 000 HANDLEs then er... oops?
The post shows you can narrow it down to, say, Event HANDLEs, but after that it gets increasingly sticky. Hopefully somewhere there's a C++ object that owns the handle and that has itself leaked, which you can trace; but as I understand it, the handles themselves might be all that leaked, leaving you to instrument the software to find out which code created all the handles, and then trace back those that seem leaked.
One good rule I used to follow, and made everyone on my team follow, was: when you get a handle, write the 'anti' call at the same time. A good portion of the older Windows API comes in allocate/destroy function pairs. If you write both at the same time your mental overhead is less.
Many start using 'auto managed' style languages (C++, Java, etc.) where the life cycle is not as clear. The life cycle is the same though; you just own it in an indirect way: who makes it, who uses it, who destroys it. In some languages that is easier to do; in others you own it front to back. When doing this I try to start with create/destroy, then usage. It is a style that helps remove leaks before they happen. They can still slip in there...
I have used the common-string trick a few times to help narrow down leaks (it does not work in all cases :( ). You can also use tools like Valgrind, BoundsChecker, Purify, etc. I think MS has a couple as well that I can't remember off the top of my head.
Historically there were two kinds of leak. Your program might grow more than intended, but eventually give everything back when it finished - or it might seize some resources permanently by mistake, so that you need to restart the computer to "fix" it.
Modern operating systems mostly rule out the latter type of leak. You could leak files I guess, and of course cloud users could leak things like S3 objects, even whole instances, but many resources are now automatically cleaned up when you exit.
As a result, though, for long-lived processes such as Chrome, but also most background tasks and server software, just "I definitely clean up the mess eventually" doesn't get the job done; the OS was going to do that anyway. The user doesn't care whether the resources would have been returned half a second after they closed your program, when "clean_up_everything()" is called by the main thread, or a second after that, when the OS cleans up everything left behind.
So this is a real problem, about the actual meaning of our programs, and (though they are still a good idea) can't be helped by good programming techniques, garbage collection, Rust's Drop trait, the C++ RAII way of thinking, deferred clean-up in languages like Zig or Python, or anything else I'm aware of.
We need to actually express in our programs the intent to hang on to only what's actually needed and clean everything else up as we go. And it can be sorely tempting to consider that "It gets cleaned up eventually" is good enough, that's where leaks get in.
Spot on. But as for 'right now', I have to be a bit more practical and work with what I've got.
These days most machines have a decent amount of memory, so leaks are not as noticeable unless you look. In the early days, if I leaked 50MB of memory and my machine had 16MB, I had a real issue and the machine would be borked. If I did the same today you would not notice it.
It is why I stressed watching the life cycle of an object. You made this thing: who is cleaning this mess up, and when? 'When' could be anywhere from never ('I need this all the time'), to batching it up when idle and reusing (garbage collection style), to 'I need this memory back right now'. There are trade-offs and you need to watch for those too. The 'write it down while you are thinking of it' rule has served me very well over the years. I personally got bitten by not following my own rule a few weeks ago: I allocated something and had not cleaned it up correctly. I got 'lucky' and that was actually the right thing to do, but in the code review I rightfully got dinged for it.
One thing I wish many more docs would do is say 'this creates object xyz; use remove_xyz to clean it up'. Or 'this looks like it is creating an object but it is not; this returns some global'. Right there in the doc. It would help so much.
That temptation of 'eventually' is one that some languages push hard. But I find many leaks that I have chased over the years were just a poor understanding of the calls being used (bad docs, not reading them, or a combo). You may have had a different experience.
> can't be helped by good programming techniques, garbage collection, Rust's Drop trait, the C++ RAII way of thinking, deferred clean-up in languages like Zig or Python, or anything else I'm aware of.
Arena allocation. Every allocation must be attributed to some arena (eg current tab, current network request, current frame being rendered, etc); when the arena goes away, so do all its allocations.
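A minimal sketch of the idea in C++ (names and sizing are illustrative; power-of-two alignment is assumed, and a real arena would also deal with non-trivial destructors):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A minimal bump arena: every allocation is attributed to the arena
// (one per tab, request, frame, ...), and dropping the arena drops all
// of its allocations at once -- there is no per-object free to forget.
class Arena {
public:
    explicit Arena(std::size_t capacity) : buf_(capacity), used_(0) {}

    // Hand out `n` bytes from the buffer; align must be a power of two.
    void* alloc(std::size_t n, std::size_t align = alignof(std::max_align_t)) {
        std::size_t p = (used_ + align - 1) & ~(align - 1); // round up
        if (p + n > buf_.size()) return nullptr;            // arena full
        used_ = p + n;
        return buf_.data() + p;
    }

    std::size_t used() const { return used_; }

    // "The arena goes away": every allocation is reclaimed in one step.
    void reset() { used_ = 0; }

private:
    std::vector<unsigned char> buf_;
    std::size_t used_;
};
```

The trade-off is that no destructors run for the individual objects, which is why arenas fit plain data and per-request scratch memory best; objects owning handles or other external resources still need their own cleanup.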
Managed languages create additional hurdles for handle leaks. My prior experience with the .NET GC is that it responds to memory pressure, and not necessarily to open handles. So when people write code that relies on the GC for cleanup, it has a bit of a blind spot for handles -- they don't take much memory in your process, so it won't know to invoke a collection.
I always place the taskbar at the top of the screen, and make sure that Outlook is the first icon in the taskbar, on the left (dragging & dropping it if I have to), so it's always in the same place.
I would not be happy if that were no longer possible.
>people were struggling just to get Smalltalk to perform at reasonable speeds on microcomputer class hardware
I've heard this before, but it confuses me, as apparently the majority of the systems at Xerox PARC (window manager, word processor, etc.) were written in Smalltalk. So how could these sophisticated GUI apps perform there but not on other platforms? Was the hardware really so different?
Yes, they were drastically different. Xerox built their own systems. Look up the Dolphin and Alto, they were basically minicomputer-class systems with custom CPUs built out of 74 series and bitslice components, no single-chip microprocessor. And very expensive.
It wasn't until the mid-to-late 80s that microprocessor systems built around the 68k series were able to run Smalltalk-80 well. And that was with Tektronix and others spending thousands of man-hours developing techniques to implement the VM well. And many of those systems were still basically dedicated to Smalltalk and cost a fortune.
The simple answer to this question is that not all software at Xerox PARC was written in Smalltalk. In fact, Smalltalk was just one out of a few different languages and programming environments that Xerox PARC used. For example, the Bravo word processor was written in BCPL (https://en.wikipedia.org/wiki/Bravo_(software)). Mesa was also developed at Xerox PARC and was commonly used as an implementation language. Interlisp was also largely developed at Xerox PARC.
This is the first I've ever heard of this. I normally give my kids Weleda cough mixture, and I pretty much saw the whole movement as a slightly better commercialized version of homeopathy.
Not going to mention any of this to my wife though - it'll just cause fights :)