Thank you for your reply. I wasn't looking for an opsec answer but raising more of a technical concern around the design of the product: why didn't they think of it, or if they did, what made them not solve the problem with other technical solutions (dummy prints, zeroing out, randomized patterns, or even a thermal or physical recycling program)? At a minimum they could communicate that this is possible, the way they addressed the digital safety side of it.
I don't believe it would take too much lift to automate this with a scanner plus some software.
Thank you. I wonder if they considered a dummy cycle on a relevant media type after the print to zero out what's left over. Especially if randomized patterns could be introduced to scramble whatever remains.
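To make the dummy-cycle idea concrete, here's a hypothetical sketch (the dimensions and the raw-raster assumption are mine, nothing the vendor actually supports): generate a full-coverage random-noise page and queue it after a sensitive job, so the leftover media carries noise rather than the last document.

    import secrets

    # Hypothetical: write a random-noise bitmap (plain PBM) sized to the
    # printhead, to be sent as a dummy job after a sensitive print.
    WIDTH, HEIGHT = 384, 640  # dots; match your printhead's resolution

    with open("dummy_page.pbm", "w") as f:
        f.write(f"P1\n{WIDTH} {HEIGHT}\n")
        for _ in range(HEIGHT):
            row = format(secrets.randbits(WIDTH), f"0{WIDTH}b")
            # keep plain-PBM lines under the 70-character limit
            for i in range(0, WIDTH, 32):
                f.write(" ".join(row[i:i + 32]) + "\n")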
Ken is a good friend in the industry and always has the best interest of email security at heart. This may have been an architectural oversight, but they are not wrong that SPF is surely a cause for concern, as are misconfigured DNS-based trust and re-signing via ARC (which was supposed to solve a problem for forwarding scenarios).
The centralization of email services into a handful of providers has effectively multihomed millions of domains, opening SPF authorization to that same handful. Any integration by those providers, or change to their existing stack, can cause issues to pop up, because delegation of sending rights isn't strictly access-controlled. The same happens with DKIM delegation to SaaS providers who share backend keys across their other customers; if their API is open to experimentation (or an account gets popped), the customer domains are at risk.
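You can see the blast radius yourself; a quick sketch with dnspython (the domains are placeholders): any domain whose v=spf1 record includes the same provider has delegated sending rights to everyone that provider lets through.

    import dns.resolver  # pip install dnspython

    def spf_record(domain):
        # Return the domain's v=spf1 TXT record, if any
        for rr in dns.resolver.resolve(domain, "TXT"):
            txt = b"".join(rr.strings).decode()
            if txt.startswith("v=spf1"):
                return txt
        return None

    # Domains sharing the same include: have effectively multihomed
    # their sending auth to that provider's entire customer base.
    for d in ("example.com", "example.org"):
        print(d, "->", spf_record(d))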
Email is hard to do right. "No auth, no entry" should be the default. But the majority of domain owners aren't very good at figuring out how to secure things, or have business/product interests that take priority, especially when sending is delegated and authorized to third parties on their behalf.
I had the privilege of working on some of this during the UUnet days. The pride I felt in "keeping the internet up" as a backbone, while no one in public recognized where I worked, was admittedly amusing. Even in career interviews later, including today, I rarely meet anyone who knows or appreciates that exposure. Those formative years being exposed to networking and services in general by such amazing peers are extremely fond and foundational memories of mine.
FWIW, although this article says many had OC-48s by the early 90s, I can tell you that wasn't a thing till much later. The fact that I have an FTTH node at home today that's faster than that, at a tiny fraction of the cost, with consumer 2.5GbE switches below $200, just blows my mind. Back then, the planning required during migrations alone meant calculating every failover; today I can unplug a Cat6 cable without sweating.
Author here. Agreed; based on other sources, my dating of OC-48 deployment seems to be off by about a decade (it would be more like late 1990s/early 2000s; e.g. [0] says 1999 for a very early use). I sourced this from Fred Goldstein, The Great Telecom Meltdown, but I don't have the book handy to double-check whether I misread something. It may be confusion over just where in the network OC-48 was being used.
FWIW, OC-48 was commonplace in telco rings when I got into the business in 1996; we installed a crapload of Fujitsu FLM-2400 hardware, but mostly as transport for DS3s and the occasional STS-3 or STS-12. We certainly didn't have customers ordering whole OC-48s for themselves!
But perhaps the article is blurring the notion and counting it as an OC-48 if that's what was entering the building, even if a given customer only leased an OC-3's worth of it and the rest just passed through. That was certainly common, and some customers were themselves confused about the distinction, so I could see the details being handwaved here as well.
As another data point, I worked on what I'm told was the first OC192 ring in Michigan, in what I believe was 1999, which included a brand new building that MCI had just constructed behind the old train station. (Fiber often follows railroads since they're straight-line easements connecting population centers.) That was the first time I encountered the notion of polarization mode dispersion compensation, which had been implemented as textbook-sized modules that slotted into the node. I was told that each module contained a bunch of fiber and some micro-actuators to stress it and try to warp the fiber into applying a complementary amount of PMD to what had happened in the OSP, restoring the pulse shape.
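For a rough sense of why compensation mattered at OC-192: mean DGD grows with the square root of span length, and at 10 Gb/s the bit slot is only ~100 ps. A back-of-the-envelope sketch (the coefficients and span length are illustrative textbook values, not numbers from that build):

    import math

    def mean_dgd_ps(pmd_coeff_ps_sqrt_km, length_km):
        # Mean differential group delay accumulates as sqrt(length)
        return pmd_coeff_ps_sqrt_km * math.sqrt(length_km)

    bit_period_ps = 1e6 / 9953.28  # OC-192 at ~9.953 Gb/s => ~100 ps/bit
    for coeff in (0.1, 0.5, 2.0):  # ps/sqrt(km); older fiber is the high end
        dgd = mean_dgd_ps(coeff, 400)
        print(f"{coeff} ps/sqrt(km) over 400 km: ~{dgd:.1f} ps DGD "
              f"({100 * dgd / bit_period_ps:.0f}% of a bit period)")

The same DGD on an OC-48, with its roughly 400 ps bit slot, would have been a quarter of the relative penalty, which is why nobody needed those modules before.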
The Nortel TransportNode OC-192 hardware itself occupied a full rack just to break the OC192 down into OC48s, and then there were two more racks of OC48 hardware (each a half-rack) to handle customer-rate circuits. (A year or two later, when the Cerent 454 condensed a full OC48 terminal into an 8U or 6U chassis, everything changed. Cisco rapidly gobbled them up and stuck "15454" labels on them, and the rest is history.)
It likewise blows my mind that what was a whole rack of equipment then today fits in an SFP+ I can conceal in my palm, driven by a fraction of an ASIC.
Thanks. It's refreshing to see FreeBSD content on YouTube from someone like this YouTuber. I started off on FreeBSD many moons ago (I think FreeBSD 3.0 was the big upgrade with SMP, on dual Celeron machines with some Taiwanese motherboard) and have always had a soft spot for it. Spent sooooo many hours rebuilding and recompiling library dependencies and tweaking kernels that it was almost an expectation. This was all alongside some Slackware tinkering.
Then used OpenBSD at work. And Solaris/Red Hat etc.
Until Ubuntu.
It's just so much easier for most things. And it just works. Debian++.
Still like installing FreeBSD once in a while on a VM to tinker with, and some services are probably still best run on it compared to others.
But using FreeBSD as a desktop? .. not sure I'd go that far.
Thinking back to the UUCP days, when certain out-of-band nodes came back up after outages: this is precisely why email as a standard can handle delayed delivery, and why MX records have preference values pointing at relays that can queue for later delivery.
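The preference mechanics are still visible today; a quick sketch with dnspython (the domain is a placeholder): lower preference values are tried first, and the higher-numbered exchanges are traditionally the store-and-forward backups.

    import dns.resolver  # pip install dnspython

    # Lower preference = tried first; higher-numbered exchanges are the
    # backup relays that queue mail until the primary comes back.
    for rr in sorted(dns.resolver.resolve("example.com", "MX"),
                     key=lambda r: r.preference):
        print(rr.preference, rr.exchange)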
You can always sign based on specific rules via your MTA, signing some mail and not signing the rest. You can also send from many senders (say, allocated by SPF allow rules) that aren't signed at all but match the allowed IP ranges.
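For example, with OpenDKIM as the milter (a sketch; the domain, selector, and paths are placeholders, and other signers have equivalents), the SigningTable decides which senders get signed, and everything else falls through unsigned:

    # /etc/opendkim.conf
    KeyTable      /etc/opendkim/KeyTable
    SigningTable  refile:/etc/opendkim/SigningTable

    # /etc/opendkim/SigningTable -- only newsletter mail gets signed
    *@news.example.com    newskey

    # /etc/opendkim/KeyTable -- keyname  domain:selector:keyfile
    newskey    example.com:news:/etc/opendkim/keys/news.private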
Have been using this for a few years, and the most amazing use case is being able to switch between multiple Chrome windows, just like Windows used to let me do ("same app, different window instances"), with a preview!
The other thing that works well with this for a quality-of-life upgrade on a Mac is Finicky (github.com/johnste/finicky).
NewPipe can also download. No need for the bloated YouTube app, a Premium subscription, or hassling with a command-line tool (in case CLIs are not your thing).
As others have also mentioned, youtube-dl seems kinda dead. However, there is a good fork/successor called "yt-dlp" which, in addition to other nice improvements, also somehow manages to work around the heavy bandwidth throttling enforced by YT.
It's just a command-line tool to download audio/video from YouTube (and many other sites). You'd need to set up the iPad thing yourself, maybe using something like Plex?
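It's also importable as a Python library if you want to automate it; a minimal sketch (the options are just common ones, and the URL is a placeholder):

    from yt_dlp import YoutubeDL  # pip install yt-dlp

    opts = {
        "format": "bestvideo+bestaudio/best",  # needs ffmpeg to merge
        "outtmpl": "%(title)s.%(ext)s",        # name files after the video
    }
    with YoutubeDL(opts) as ydl:
        ydl.download(["https://www.youtube.com/watch?v=..."])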
Plex has shifted over time to present its own content more prominently. It got to the point that your own media isn't even displayed on the default landing screen. I had to re-teach the kids how to find their movies and whatnot.
I haven't had it running in about a year though, couldn't be bothered after a move.
Jellyfin looks nice; I'll have to give it a go. I am dreading having to set everything up again if I move off of Plex, rather than just grabbing my docker-compose file and getting going.
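FWIW, the Jellyfin side can be just as portable; a minimal docker-compose sketch (the paths are placeholders):

    # docker-compose.yml
    services:
      jellyfin:
        image: jellyfin/jellyfin
        ports:
          - "8096:8096"            # web UI
        volumes:
          - ./config:/config       # server state; back this up
          - ./cache:/cache
          - /path/to/media:/media:ro
        restart: unless-stopped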
In your comment, is the allowlist a list of videos that are the only ones they can watch? (Not a list of "age ranges" or "channels", but actually being able to select individual videos.) I was never able to get something like that out of YouTube Kids when I tried it in the past.
All these services curate for kids, and I want to choose what my kids watch, so they are all failing me. (I'm with the others who go with YT Premium and youtube-dl; the kids watch with VLC or something on a tablet.)