I'm running k3s at home on a single node with local storage: a few blogs, a forum, MinIO.
Very easy, reliable.
Without k3s I would have used Docker, but k3s really adds important features: easier network management, more declarative configuration, bundled Traefik...
So, I'm convinced that quite a few people can happily and efficiently use k8s.
In the past I used another k8s distro (Harvester), which was much more complicated to use and fragile to maintain.
3. Diode ring, which provides variable gain, used in analog compressors like the Neve 33609 (I have a clone of the 33609, and I’m very fond of it)
Think about this: if you have a nonlinear device like a diode, then the dynamic resistance changes depending on the operating point. If you modulate the operating point, you’re modulating the dynamic resistance.
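To make that concrete, a quick sketch with the ideal-diode model (I_S is the saturation current, n the ideality factor, V_T ≈ 26 mV the thermal voltage):

    I = I_S * (exp(V / (n*V_T)) - 1)   =>   r_d = dV/dI ≈ n*V_T / I

So the small-signal resistance r_d is inversely proportional to the bias current: modulate the current through the diode with your control signal and you've modulated the resistance, i.e. the gain.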
I'm rooting against Kotlin since it appears to be only usable with the JetBrains IDE. I'm totally blind, and in my experience JetBrains tools are not nearly as accessible or easy to use as VS Code with all the Java extensions. At all the jobs I've had, no one cared if I didn't use IDEA, but since there appears to be no good VS Code tooling for Kotlin, if I ever have to use Kotlin professionally it's going to be painful.
One of the best books I’ve ever read is The Making of the Atomic Bomb by Richard Rhodes. If you want an extremely in-depth history of the science and people behind the Manhattan Project, I would highly recommend reading it.
Why don't you look at Topton's N100 boards with 6x SATA, 2.5Gb LAN, and a PCIe slot for extra SATA ports, paired with a Jonsbo N3 NAS case? For $300 you'd have a way better NAS than anything Synology offers.
Kafka isn't a queue, it's a distributed log. A partitioned topic can take very large volumes of message writes, persist them indefinitely, deliver them to any subscriber in order (within a partition) and at least once (even to subscribers added after the message was published), and do all of that distributed and HA.
If you need all those things, there just are not a lot of options.
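As a rough illustration of those semantics with the kafka-python client (broker address, topic name, and group id are all placeholders):

    from kafka import KafkaConsumer, KafkaProducer

    # Messages with the same key land on the same partition, which is
    # the unit of ordering.
    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    producer.send("events", key=b"customer-42", value=b"order-created")
    producer.flush()

    # A subscriber added long after publication can still replay the
    # whole log, because the broker retains messages instead of
    # deleting them on delivery.
    consumer = KafkaConsumer(
        "events",
        bootstrap_servers="localhost:9092",
        group_id="late-subscriber",
        auto_offset_reset="earliest",  # start from the oldest retained message
        enable_auto_commit=False,      # manual commits give at-least-once
    )
    for msg in consumer:
        print(msg.partition, msg.offset, msg.value)
        consumer.commit()  # a crash before this line means redelivery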
I implemented those recommendations in Caddy to enable a "trusted proxies" system, which tells the proxy, logging, request matching, etc. when it's safe to use the client IP taken from proxy headers.
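If memory serves, the Caddyfile side of it looks roughly like this (the ranges and site are illustrative, and the exact syntax may have shifted between releases):

    {
        servers {
            # Only hops in these ranges may set the client IP via
            # X-Forwarded-For; anything else falls back to the socket peer.
            trusted_proxies static 10.0.0.0/8 192.168.0.0/16
        }
    }

    example.com {
        reverse_proxy localhost:8080
        log  # access logs then record the trusted client IP
    }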
There's a long and winding thread called "Best of Ali-Xpress" [sic] on WatchUSeek that has a bunch of ideas. Outside of that I'd recommend searching either by movement name ("NH35", "PT5000") or popular watch model diameter ("40mm" for a Sub, "41mm" for an SMP) and sorting by best selling. From there you can go to the storefronts and see all of their models.
I'm laughing because I clicked your link thinking I agreed and had posted similar things and it's my comment.
Still on k3s, still love it.
My cluster is currently hosting 94 pods across 55 deployments, using 500m CPU (half a core) on average, spiking to 3 cores under moderate load, and 25GB of RAM. The biggest RAM hog is Jellyfin (which appears to have a slow leak and gets restarted when it hits 16GB, although it's currently streaming to 5 family members).
The cluster is exclusively recycled old hardware (4 machines), mostly old gaming machines. The most recent is 5 years old, the oldest is nearing 15 years old.
The nodes are bare Arch Linux installs - which are wonderfully slim, easy to configure, and light on resources.
It burns 450 watts on average, which is higher than I'd like, but mostly because I have Jellyfin and whisper/willow (self-hosted home automation via voice control) as GPU-accelerated loads - so I'm running an old NVIDIA 1060 and a 2080.
Everything is plain old yaml; I explicitly avoid absolutely anything more complicated (including things like Helm and Kustomize - with very few exceptions) and it's... wonderful.
It's by far the least amount of "dev-ops" I've had to do for self hosting. Things work, it's simple, and spinning up a new service is a new folder and 3 new yaml files (0-namespace.yaml, 1-deployment.yaml, 2-ingress.yaml) which are just copied and edited each time.
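A minimal sketch of what those three files can look like (names, image, and host are placeholders; a Service is tucked into the deployment file so the Ingress has a backend, and k3s's bundled Traefik serves the Ingress):

    # 0-namespace.yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      name: myapp

    # 1-deployment.yaml (Deployment plus a Service, in one file)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
      namespace: myapp
    spec:
      replicas: 1
      selector:
        matchLabels: {app: myapp}
      template:
        metadata:
          labels: {app: myapp}
        spec:
          containers:
            - name: myapp
              image: nginx:stable
              ports: [{containerPort: 80}]
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
      namespace: myapp
    spec:
      selector: {app: myapp}
      ports: [{port: 80, targetPort: 80}]

    # 2-ingress.yaml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp
      namespace: myapp
    spec:
      rules:
        - host: myapp.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service: {name: myapp, port: {number: 80}}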
Any three machines can go down and the cluster stays up (MetalLB is really, really cool - ARP/NDP announcements mean any machine can announce as the primary load balancer and take over the configured IP). Sometimes services take a minute to reallocate (Jellyfin gets priority over willow if I lose a GPU, and can also deploy with CPU-only transcoding as a fallback), and I haven't tried to be clever about getting 100% uptime because I mostly don't care. If I'm down for 3 minutes, it's not the end of the world. I have a couple of commercial services in there, but it's free hosting for family businesses; they can also afford to be down an hour or two a year.
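For reference, the MetalLB layer-2 setup behind that is tiny - a sketch with the modern MetalLB CRDs (the pool range is illustrative):

    # Whichever node wins the leader election answers ARP/NDP
    # for a service's external IP; if it dies, another node takes over.
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: home-pool
      namespace: metallb-system
    spec:
      addresses:
        - 192.168.1.240-192.168.1.250
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: home-l2
      namespace: metallb-system
    spec:
      ipAddressPools:
        - home-pool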
Overall - I'm not going back. It's great. Strongly, STRONGLY recommend k3s over microk8s. Definitely don't want to go back to single machine wrangling. The learning curve is steeper for this... but man do I spend very little time thinking about it at this point.
I've streamed video from it from as far away as literally the other side of the world (GA, USA -> Taiwan). Amazon/Google/Microsoft have everyone convinced you can't host things yourself. Even for tiny projects people default to VPSes in the cloud. It's a ripoff. Put an old laptop in your basement - a faster machine, for free. At GCP prices... I have $30k/year worth of cloud compute in my basement, because GCP is a god damned rip off. My costs are $32/month in power, plus a network connection I already have to have, and it's replaced hundreds of dollars a month in subscription costs.
For personal use-cases... basement cloud is where it's at.
I saw this put really, really well not too long ago:
> A lot of us got the message earlier in life that we had to wait for others' permission or encouragement to do things, when in fact all you need is the ability to understand the situation and deal with the consequences.
He has pulled this move before, with Tesla buying SolarCity. When you do a deal with yourself you can assign any value you want to the assets; it isn't a competitive process. In the previous case SolarCity was dying, but its acquisition by Tesla was pitched as a great synergy.
Back in 2010, when we were building Amazon Route 53, we had a really big problem to solve: DDoS attacks. DNS is critical, and it uses UDP, a protocol that allows attackers to spoof their source IP address. We knew that DNS services are a common target for attacks from botnets, and our research at the time showed that our established competitors used large and expensive "packet scrubbers" to handle this.
We budgeted out what we thought it would cost to handle our scale, and the price tag came to tens of millions of dollars. You might think that would be no problem for a big company like Amazon, but our total infrastructure budget for Route 53 was something like tens of thousands of dollars. At the edge, we were re-using CloudFront servers that had failed hard drives for our name servers, since we wouldn't need much storage, and our API servers were pretty modest. We had a team of about 6 people. That's what "scrappy" looks like at AWS: spend nothing, minimize downside risk, get things done quickly. There was no way I was going to ask for tens of millions of dollars for packet scrubbers. Besides, they would take too long to arrive, and would make us too reliant on a vendor.
Early on we had decided to run Route 53's name servers on their own dedicated IP range to give some measure of isolation, and we could use dedicated network links to make sure that Amazon's other infrastructure wouldn't be impacted. But that wouldn't keep Route 53's customers from sharing fate with each other. We didn't have a real plan beyond "when it happens, get really good filtering in place using our existing network and system tools".
Early that summer, I was reading one of Knuth's recent fascicles for Volume 4A and was swimming in combinatorial algorithms. One night it just "clicked": by creating many virtual name servers, we could easily assign every customer a unique combination of four of those virtual name servers. We could even control the amount of overlap; some quick math showed that with about two thousand virtual name servers, we could guarantee that no two customers would share more than two. That number is important because our experiments showed that domains resolve just fine even when two name servers are unreachable, but beyond that it starts to be a problem.
The recursive search algorithm to assign the IPs was inspired directly by the algorithms in 4A; it gives customer domains two more independent dimensions of isolation. Each customer's 4 name servers come from 4 independent "stripes", which correspond to the different TLDs we use for the name server names (co.uk, com, net, org); this guarantees that if one of those TLDs has an issue (like a DNSSEC mistake), only one of the name servers is impacted. They also come from 4 independent "braids", which can be used to ensure that no two name servers share certain network paths or physical hardware. I just wouldn't have known how to do any of this without reading 4A - and I even have a background in combinatorics, from statistics and cryptography.
I've never been more excited by a solution; this approach gave us provable network IP level isolation between customer domains while costing basically nothing in real infrastructure. It's math. It wasn't completely free; we had to use 2,000 anycast IP addresses, and it turns out that we also had to register 512 domains for them because of how many TLDs require name servers to be registered and to have glue records; so that was a fun process working with our registrar. But we got it done.
I named the approach "Shuffle Sharding", and it's more discovery than invention. Many multi-tenant systems that use some kind of random placement get a kind of shuffle sharding, and network filtering techniques like Stochastic Fair Blue use time-seeded hashing to similar effect. But I've never seen anything quite the same, or with the level of control that we could apply; I could even extend it to a kind of recursive nested shuffle sharding that isolates at even more levels - for example, if you want to isolate not just a caller but a caller's callers, when they are in some kind of "on behalf of" call pattern.
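A toy sketch of the core idea in Python (a greedy stand-in for the real recursive search, with none of the stripe/braid structure; the parameters are illustrative):

    from itertools import combinations

    VIRTUAL_SERVERS = 2000   # virtual name servers to draw from
    PER_CUSTOMER = 4         # each customer gets a combination of 4
    MAX_OVERLAP = 2          # no two customers share more than 2

    def assign_shards(num_customers):
        assigned = []
        for combo in combinations(range(VIRTUAL_SERVERS), PER_CUSTOMER):
            # Accept a combination only if it overlaps every existing
            # customer's shard in at most MAX_OVERLAP servers.
            if all(len(set(combo) & set(prev)) <= MAX_OVERLAP for prev in assigned):
                assigned.append(combo)
                if len(assigned) == num_customers:
                    break
        return assigned

    # A flood aimed at one customer's 4 servers leaves every other
    # customer with at least 2 of their 4 still reachable.
    for i, shard in enumerate(assign_shards(5)):
        print(f"customer {i}: name servers {shard}")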
Years later, I made a personal pilgrimage of gratitude to see a Knuth Christmas lecture in person, and sat in the front row. I still read every scrap of material that Knuth puts out (including the organ pieces!) because I never know what it might inspire. All of this to say ... I do think his volumes are surprisingly practical for programmers; they broaden your mind as well as deepen your understanding. What more could you want?
It has optional cryptographic signatures for the navigation message, i.e. the data indicating the positions of the satellites.
Spoofing generally works not by altering the navigation message, but by altering the timing of arriving signals. I'd recommend this video for a publicly-available overview of the techniques: https://www.youtube.com/watch?v=sAjWJbZOq6I
There’s a great scene in the movie World’s Greatest Dad where Robin Williams plays a frustrated writer. Another teacher at the school where he teaches gets a story in The New Yorker¹ and Robin Williams’s character tells him something along the lines of “how nice, I hope your next one gets published somewhere that isn’t regional.”²
⸻
1. This is generally considered the pinnacle of literary short fiction publishing.
2. He was, of course, being ironic (and bitterly jealous). As an aside, the movie is a brilliant dark comedy, written and directed by Bobcat Goldthwait who’s come a long way from his Police Academy days.
One travel technique that has worked very well for me takes place the day before my trip: using a pre-travel prep-and-packing checklist. I created this checklist about 15 years ago and still refine it occasionally. This list has three sections:
A) Preparation tasks: Like printing essential travel documents, saving a backup to my mobile phone, buying foreign currency, activating data roaming, etc.
B) Packing list: Mine currently has about 30 or so items, covering everything from the very basics, like toothbrush and toothpaste, to the often-overlooked, like reusable ziplock bags, microfibre cloths, etc.
C) Last minute checks: These are final tasks to complete just before leaving home. This includes double-checking that passports are packed, non-essential electrical appliances and lamps are switched off, balcony doors are locked, wet waste has been properly disposed of, etc.
Every item gets checked off before I leave home, so by the time I step into a taxi or train to the airport I can relax and focus on the journey ahead rather than worry about forgotten items. The checklist has served me well for 15 years.
For anyone interested in how microphones can sound different, check out Jim Lill's video [1] where he A/B tests a bunch of mics against one another and industry standards.
He has a whole series of videos where he explores what contributes the most to "guitar tone", all the way from the strings to your ears and in between. It's a bit of an eye opener to say the least. Highly, highly recommended.
A command along the lines of the sketch below will strip ALL exif metadata, change the quality, shave 10 pixels off each edge just because, resize to xx%, attenuate, and add noise of type "Uniform".
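A sketch of such an invocation with ImageMagick (the quality, shave, resize, and attenuate values are illustrative, not a recommendation):

    # drop EXIF and other profiles, re-encode, shave 10px per edge,
    # resize, then add attenuated uniform noise
    magick input.jpg -strip -quality 85 -shave 10x10 \
        -resize 92.5% -attenuate 0.3 +noise Uniform output.jpg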
Some additional notes:
- attenuate needs to come before the +noise switch in the command line
- the worse the jpeg quality figure, the harder it is to detect image modifications[1]
- resize percentage can be a real number - so 91.5% or 92.1% ...
So, AI image detection notwithstanding, you can not only remove metadata but also make each image you publish different from one another - and certainly very different from the original picture you took.
Just finished my own overthinking of recipe structures.
I figure that a recipe is more or less an upside-down tree! You start with a list of leaf nodes (the ingredients), which feed through n:1 relationships into the next series of nodes (the steps), until you finish at a single root node (the dish you're trying to make).
So instead of having a separate chunk of "here's my ingredients" and "let me repeat the ingredients in one-by-one instructions until the end", I figure you can display the upside-down tree to convey more information with fewer words.
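A toy sketch of that structure (the recipe itself is made up):

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        inputs: list["Node"] = field(default_factory=list)  # empty => ingredient

    dough = Node("knead into dough", [Node("flour"), Node("water"), Node("yeast")])
    sauce = Node("simmer the sauce", [Node("tomatoes"), Node("garlic")])
    pizza = Node("bake the pizza", [dough, sauce, Node("cheese")])

    def render(node, depth=0):
        # Root first; the ingredients fall out as the deepest leaves, so
        # one structure doubles as both ingredient list and instructions.
        print("  " * depth + node.name)
        for child in node.inputs:
            render(child, depth + 1)

    render(pizza)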
I recently finished a large World War II project that covered the full timeline of the war, and Google Maps was a valuable tool to follow what was happening in any given battle. The problem is Google Maps has more detail than you need, so trying to follow something like Operation Market Garden is much more difficult than just looking at this beautiful battle map: https://www.alamy.com/a-bridge-too-far-image68088140.html. "The West Point Atlas of War" is another great resource.
Maps cover the spatial side of the war, but the timeline is also difficult to follow. My project stitched popular World War II movies together into a chronological series, making it easier to see what was happening across the world at any given time. You can view the episodes and the full blog post here: https://open.substack.com/pub/ww2supercut/p/combining-143-wo.... And "The Second World War" by Churchill's biographer Martin Gilbert is a chronological, 750-page book that I couldn't put down.
I know of an even more impressive website that will transfer playlists from Spotify (or 20 other platforms, including text files) to 20 other platforms or a text file. I will share the link, but don't hug it to death y'all. :)
A decent, short book on the historical story behind H4 (and the rest of Mr. Harrison’s timekeepers) is “Longitude: The True Story of a Lone Genius Who Solved the Greatest Scientific Problem of His Time” by Dava Sobel. It goes into the longitude competition, the people involved, and how Harrison was able to (eventually) win with his timekeepers.
Practically all hobbyists and phone repairers use the same style of generic stereo zoom microscope, loosely based on the Meiji EMZ-5. AmScope will sell you one starting at around $400, or you can buy direct from China if you want to save a few bucks. Less expensive models with fixed magnification are available, but I can't recommend them.
With 10x eyepieces and a 0.5x auxiliary objective, these scopes provide a very useful magnification range of 3.5x-22.5x (the zoom body itself covers 0.7x-4.5x; multiply by the 10x eyepieces and the 0.5x objective) and a comfortable working distance. At the minimum 3.5x magnification, the standard widefield 10x eyepieces give a field of view of about 50mm.
They are available in various bundles with a wide variety of stands and accessories; the essential accessories are a ring light and a 0.5x Barlow lens. I would recommend the biggest, heaviest boom stand you can reasonably fit on your desk, because any instability in the stand will be greatly magnified in your vision.
The key to using these microscopes successfully is to set the parfocal adjustment, which lets you change the zoom without having to refocus.
The preferred industrial option is the Vision Engineering Mantis, which uses very clever projection technology to provide a stereoscopic image without eyepieces. The ergonomics are dramatically better than a conventional stereo microscope, but you'll be lucky to find a used model on eBay for less than $1000. A big investment for a hobbyist, but worth every penny if you've got back or neck problems.
I love my reMarkable 2. I bought it before "Connect" was a thing, so I don't have a subscription. But I cannot recommend it to anyone: there are better alternatives out there, and MyDeepGuide (YouTube) has reviewed them all better than I ever could.
The software is moving too slowly and often in the wrong direction. Especially since they released the keyboard folio, most updates have been around typing (which is subpar on any e-ink device)... and they've generally made my experience as a pen user worse.
I don't care if the new hardware is awesome, whenever mine breaks I will switch to a competitor.