You should be able to make it think you have another card:
export HSA_OVERRIDE_GFX_VERSION=10.3.0
The possible values are said to be:
# gfx1030 = "10.3.0"
# gfx900 = "9.0.0"
# gfx906 = "9.0.6"
# gfx908 = "9.0.8"
# gfx90a = "9.0.a"
Telling ROCm to pretend that your RDNA 3 GPU (gfx1102) is an RDNA 2 GPU (gfx1030) is not going to work. The ISAs are not backwards-compatible like that. You might get away with pretending your gfx1102 GPU is a gfx1100 GPU, but even that depends on the code that you're loading not using any gfx1100-specific features. I would generally recommend against using this override at all for RDNA 3 as those ISAs are all slightly different.
In any case, the possible values can be found in the LLVM documentation [1]. I would recommend looking closely at the notes for the generic ISAs, as they highlight the differences between the ISAs (which is important when you're loading code built for one ISA onto a GPU that implements a different ISA).
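Since the ROCm runtime reads this variable when it initializes, it can also be set from inside a script, as long as that happens before importing any library that loads ROCm (a ROCm build of PyTorch, for instance). A minimal sketch; the "11.0.0" value (gfx1100) is only even plausible for a very close RDNA 3 sibling such as gfx1102, and at your own risk:

```python
import os

# Must be in the environment before the ROCm runtime starts, i.e. before
# importing torch or any other ROCm-backed framework in this process.
# setdefault() lets a value exported in the shell take precedence.
os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "11.0.0")

print(os.environ["HSA_OVERRIDE_GFX_VERSION"])
```

Setting it after the framework has already initialized the GPU has no effect, which is a common source of confusion.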
Recompressing an already lossily compressed file is almost guaranteed to lose more information, while storage media keeps getting cheaper over time. An 18TB hard disk is now within the budget of many people, and they're likely to get cheaper still.
So if your purpose is to archive these files because they're worth keeping, buying a bigger disk may make even more sense.
I'm not considering hard disks now, although I have tons of them. I keep multiple copies of those files, but it's a pain in the ass to distribute the same backup to different disks simply because their read/write rates are too slow. After transferring the files, I run a validation program to make sure they're all intact. These processes take me a week or so, and I have to repeat them regularly to ensure errors don't accumulate over time. So now I want SSDs, but the price per TB is still 4x that of HDDs.
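That validation step can be sketched in a few lines, assuming plain SHA-256 manifests (the actual validation program may work differently):

```python
import hashlib
import os

def sha256_of(path, chunk=1 << 20):
    """Stream a file through SHA-256 so huge video files never sit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def build_manifest(root):
    """Map relative path -> digest for every file under root."""
    manifest = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            manifest[os.path.relpath(full, root)] = sha256_of(full)
    return manifest

def verify(root, manifest):
    """Return the files whose current digest no longer matches the manifest."""
    return [p for p, digest in manifest.items()
            if sha256_of(os.path.join(root, p)) != digest]
```

Build the manifest once on the master copy, store it alongside each backup, and re-run `verify` on every disk; any returned path is a copy that needs to be refreshed from a good one.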
Slight degradation in quality is not my concern, since ultimately I watch them through real-time upscaling tools. But I don't know exactly how H.265 affects the quality of a video.
By making the file smaller, I can
1) distribute it to other disks faster, 2) validate correctness faster, and 3) set a higher redundancy rate, because now I have more free space.
But the problem is whether H.265 will become obsolete before it becomes infrastructure. You know AV1 is a better algorithm, and companies are pushing it.
Or will H.265 be unavailable in the future due to, I don't know, royalty issues or something like that?
The "Open Location Code" is often mentioned on Hacker News, but is sadly neither open, nor a location code.
To pick one example, if you go to 0°06'40.6"S 28°56'27.0"E
(-0.111271, 28.940829) in Google Maps, it'll give the Open Location Code "VWQR+F8W Maipi, Democratic Republic of the Congo", or some variation thereof, depending on your local language.
The most significant bytes, "Maipi, Democratic Republic of the Congo", are obviously not a location code, but a place name, and thus cannot be decoded at all.
Moreover, if you go to OpenStreetMap and look up "Maipi", it returns three places in Indonesia, and none in DR Congo. So even using a location service plus the algorithm could land you on the wrong continent.
The "Open Location Code" is essentially only usable as a search key for Google Maps. "Go look it up on Google" isn't a location code, it's advertising.
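To make the complaint concrete: the digit part of the encoding is publicly specified and small enough to sketch. This simplified full-code encoder (not the reference implementation; it skips input normalization and shorter code lengths) reconstructs what the place name is standing in for:

```python
# Simplified Open Location Code encoder: full 11-character codes only.
ALPHABET = "23456789CFGHJMPQRVWX"  # the base-20 digit set

def encode(lat, lng):
    lat += 90.0   # shift latitude into [0, 180)
    lng += 180.0  # shift longitude into [0, 360)
    code = ""
    res = 20.0
    for _ in range(5):           # five lat/lng digit pairs, each 20x finer
        code += ALPHABET[int(lat // res)] + ALPHABET[int(lng // res)]
        lat %= res
        lng %= res
        res /= 20.0
        if len(code) == 8:       # the '+' goes after the eighth digit
            code += "+"
    # The 11th digit refines the final cell on a 5-row (lat) x 4-col (lng) grid.
    row, col = int(lat // (res * 4)), int(lng // (res * 5))
    return code + ALPHABET[row * 4 + col]

print(encode(-0.111271, 28.940829))  # 6GFCVWQR+F8W
```

For the coordinates above, this yields the full code "6GFCVWQR+F8W" -- the leading "6GFC" is exactly what Google replaces with "Maipi, Democratic Republic of the Congo", and without those four digits (or a correct reference location) the short form cannot be decoded.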
Binary search and similar forms of successive approximation. They can be used to solve such a wide array of problems given just a minimal amount of information.
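As a toy illustration of how little information the technique needs: the same bisection loop finds a root of any monotone function given nothing but a sign test at the midpoint, one bit per step. Here it computes a square root without any root-finding machinery:

```python
def bisect_solve(f, lo, hi, eps=1e-12):
    """Find x in [lo, hi] with f(x) ~= 0, assuming f is increasing there.
    Each iteration extracts a single bit ("too low or too high?") and
    halves the interval, so precision improves exponentially."""
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Square root of 2, using only "is mid*mid still too small?" as information:
sqrt2 = bisect_solve(lambda x: x * x - 2.0, 0.0, 2.0)
print(round(sqrt2, 6))  # 1.414214
```

The same skeleton covers guessing games, "first bad commit" bisection, and tuning a numeric parameter until a test passes.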
That sounds more like what a proprietary licence would be used for.
You could license both the binaries and the source code under this proprietary licence and provide them to users.
In some specializations of programming, you're going to need a lot of those things. For instance, working with game engines, scientific simulations, image or signal processing, finance, or simply making the base software and libraries that other people use, can involve a lot of CS.
In larger corporations, the programming is often much higher level, and consists more of stringing together libraries and frameworks and entire systems so that they fulfill a business purpose. Even simple programs can take hundreds of megabytes of memory and have tens or hundreds of dependencies beyond anyone's control.
If you want to keep practicing your algorithm skills, you might try something like https://projecteuler.net/ , which is very mathy, or https://checkio.org/ , which is a bit more user-friendly, and get some practice there. As for OS theory, there are always open-source operating systems one can contribute to, though I suspect many of them would consume a lot of a person's time.
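For a flavor of the Project Euler style: its first (public and widely discussed) problem asks for the sum of all multiples of 3 or 5 below 1000, and the fun is that a closed-form inclusion-exclusion answer beats the brute-force loop:

```python
def sum_multiples_below(k, n):
    """Sum of k, 2k, 3k, ... below n, in O(1) via the arithmetic-series formula."""
    m = (n - 1) // k               # how many multiples of k lie below n
    return k * m * (m + 1) // 2    # k * (1 + 2 + ... + m)

# Inclusion-exclusion: multiples of 3, plus multiples of 5,
# minus the multiples of 15 that were counted twice.
total = (sum_multiples_below(3, 1000)
         + sum_multiples_below(5, 1000)
         - sum_multiples_below(15, 1000))
print(total)  # 233168
```

Most Project Euler problems have this shape: a one-line brute force that works at small scale, and a bit of math that makes the large-scale version instant.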
I was surprised to see database normalization in OP's list. Working at a modest-size company, I dealt with database normalization all the time. Table design was up to the devs, and if you didn't normalize you'd end up with a mess.
I've never seen anyone even mention the word "normalization" when working with databases, but the knowledge is useful for intuitively designing more sensible tables. Perhaps many people don't realize they're doing it?
I know I didn't until I took a database class in grad school. (I'd worked with databases before and was taught them as a kid by my dad, but I needed some kind of 'proof' for CV purposes). I didn't call it 'normalization', I just considered it to be part of future-proofing/not wanting to be bothered if new uses showed up.
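A toy sketch of what normalization buys you, with hypothetical table shapes and Python dicts standing in for rows. The denormalized version repeats the customer's city on every order, so one change means many updates and a chance to miss one:

```python
# Denormalized: the customer's city is repeated on every order row.
orders_flat = [
    {"order_id": 1, "customer": "alice", "city": "Oslo",   "item": "disk"},
    {"order_id": 2, "customer": "alice", "city": "Oslo",   "item": "cable"},
    {"order_id": 3, "customer": "bob",   "city": "Bergen", "item": "fan"},
]

# Normalized: each fact lives in one place; orders reference customers by key.
customers = {"alice": {"city": "Oslo"}, "bob": {"city": "Bergen"}}
orders = [
    {"order_id": 1, "customer": "alice", "item": "disk"},
    {"order_id": 2, "customer": "alice", "item": "cable"},
    {"order_id": 3, "customer": "bob",   "item": "fan"},
]

# Alice moves: one update in the normalized form...
customers["alice"]["city"] = "Tromso"

# ...versus touching every matching row in the flat form. Miss one row and
# the data is inconsistent -- the classic "update anomaly".
for row in orders_flat:
    if row["customer"] == "alice":
        row["city"] = "Tromso"
```

This is the intuition behind the formal normal forms: if updating one real-world fact requires touching more than one row, the schema is telling you something.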
If you use browser plugins, it's still quite a nice experience.
* uBlock Origin (kills the ads)
* SponsorBlock (uses crowdsourcing to skip sponsored segments, intros, self-promotion, credits, etc. This cuts 10% to 20% from most videos, saving a lot of time)
* Channel Blocker (gives you a button to permanently block all videos from a channel)
* BlockTube (lets you block videos based on keywords in video titles)
* Unhook (removes recommended and suggested videos)
* Better Subscriptions for YouTube (lets you hide videos you've already watched from your subscriptions page and other pages)
Another popular YouTube plugin is 'Enhancer for YouTube', but personally I don't find this one very useful.