Hacker News | Nathan2055's comments

That's really surprising to me.

iOS has had pretty decent audio format support for a few years now: even though you can't directly import FLAC files to iTunes/Music, they have been supported by the OS itself since 2017 and play fine in both Files and Safari. The other big mainstream formats (WAV, AIFF, MP3, AAC, and ALAC) have been supported for years, and even Opus finally got picked up in 2021.

About the only non-niche audio format that isn't supported natively on Apple platforms at this point is Vorbis, which was fully superseded by Opus well over a decade ago. Even then, I believe it's possible to get Vorbis support in iOS apps using various media libraries, although I'm sure Apple frowns upon it.
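Container support and codec support are separate questions here: Opus is supported, but only in certain containers, while Vorbis (which also usually ships in Ogg) is not. As a minimal, hypothetical sketch of telling the common containers apart by their magic bytes (the helper name and format list are my own, not any Apple API):

```python
# Hypothetical helper: identify common audio containers by magic bytes.
# This only identifies the container, not whether a given OS can decode
# the codec inside it (e.g. Opus can sit in either Ogg or CAF).
def sniff_audio_container(header: bytes) -> str:
    if header.startswith(b"OggS"):
        return "ogg"       # Ogg container (typically Vorbis or Opus payload)
    if header.startswith(b"fLaC"):
        return "flac"      # native FLAC stream
    if header.startswith(b"caff"):
        return "caf"       # Apple Core Audio Format
    if header.startswith(b"RIFF") and header[8:12] == b"WAVE":
        return "wav"
    if header.startswith(b"FORM") and header[8:12] == b"AIFF":
        return "aiff"
    if header[4:8] == b"ftyp":
        return "mp4"       # MP4/M4A (AAC or ALAC payload)
    return "unknown"
```

Reading the first 16 bytes of a file and passing them to this function is enough to distinguish all of the formats mentioned above.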

I'd really love to know what's causing that incompatibility.


For me, the issue is Opus in an Ogg container, which WhatsApp uses but which isn't natively supported on iOS.

https://github.com/signalapp/Signal-iOS/issues/4539


This has been the advantage, and the drawback, of Signal's security model from the start.

Everything on Signal (at least the "original" design from a few years ago, this has started to be adjusted with the introduction of usernames and now backups and eventually syncing) is end-to-end encrypted between users, with your original phone acting as the primary communication node doing the encryption. Any other devices like desktops and tablets that get added are replicating from the original node rather than receiving new messages straight from the network.

This offers substantial privacy and security guarantees, at the cost of convenience and portability. It can be contrasted with something like iMessage, before Messages in iCloud was implemented, where every registered device is a full node that receives every new message directly, as long as they're connected at the time that it's sent.

Today's addition brings Signal to where iMessage was originally: each device backs up its own messages, but those backups don't sync with one another. Based on the blog post, the goal is eventually to get Signal to where iMessage is today with Messages in iCloud: every device syncs its message database with a copy in the cloud, which is itself end-to-end encrypted with the same guarantees as the messages, so that every device ends up with the same history regardless of whether it was connected when each message came in. Then they seem to intend to take it one step farther and allow arbitrary sync locations for that "primary replica" outside their own cloud storage, which goes even further than Apple's implementation does.
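The two topologies described above can be contrasted with a toy model (not Signal's or iMessage's actual code, just an illustration of why a newly linked device in the primary-replication model receives history, while an offline full node misses messages for good):

```python
class PrimaryReplicationAccount:
    """Old Signal model: only the phone receives from the network;
    linked devices replicate from the phone's store."""
    def __init__(self):
        self.phone = []       # primary device's message store
        self.linked = {}      # device name -> that device's replica

    def link(self, name):
        # a newly linked device starts from whatever the phone has *now*
        self.linked[name] = list(self.phone)

    def receive(self, msg):
        self.phone.append(msg)
        for replica in self.linked.values():
            replica.append(msg)   # fan out from the phone, not the network


class FullNodeAccount:
    """iMessage-style model: every registered device gets each message
    directly, but only if it is connected at send time."""
    def __init__(self):
        self.devices = {}     # device name -> [online flag, message store]

    def register(self, name, online=True):
        self.devices[name] = [online, []]

    def receive(self, msg):
        for state in self.devices.values():
            if state[0]:      # offline devices simply miss the message
                state[1].append(msg)
```

In the first model a device linked after the fact still inherits the phone's full history; in the second, a device that was offline has a permanent gap, which is exactly the gap that a synced, end-to-end-encrypted cloud replica closes.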

If done well, I actually quite like the vision they're going for here. I'm still frustrated that they wouldn't just port the simple file backup feature from Android to the other platforms, even as just a stopgap until this is finished, but I think that the eventual completion of this feature as described will solve all of my major concerns with Signal's current storage implementation.


Okay so here's the argument I've heard: if arbitrary replacements of the lid sensor were possible, it would be feasible to create a tampered sensor that failed to detect the MacBook closing, thus preventing it from entering sleep mode.

This could then be combined with some software on the machine to turn a MacBook into a difficult to detect recording device, bypassing protections such as the microphone and camera privacy alerts, since the MacBook would be closed but not sleeping.

Additionally, since auto-locking is also tied to triggering sleep mode, it would be possible to gain access to a powered-off device, swap the sensor, wait for the user to close the lid expecting the device to sleep, and then steal it, now completely unlocked with full access to the drive.

Are these absolutely ridiculous, James Bond-tier threat assessments? Yes, absolutely. But they're both totally feasible (and not too far off from exploits I've heard about in real life), and both are completely negated by simply serializing the lid sensor.

Should Apple include an option, buried in recoveryOS behind authentication and disk unlock steps like the option to allow downgrades and allow kernel extensions, that enables arbitrary and "unauthorized" hardware replacements like this? Yes, they really should. If implemented correctly, it would not harm the security profile of the system while still preventing the aforementioned exploits.

There are good security reasons for a lot of what Apple does. They just tend to push a little too far beyond mitigating those security issues into doing things which start to qualify as vendor lock-in.

I really wish people would start to recognize where the line should be drawn, rather than organizing into "security of the walled garden" versus "freedom of choice" groups whenever these things get brought up. You can have both! The dichotomy itself is a fiction perpetuated to defend the status quo.


The line should be drawn by the owner of the device.

As the user and owner of the product, I should be the sole decider about my own security posture, not some company who doesn’t know my use case or needs.

It’s crazy how we’ve managed to normalize the manufacturer making these kinds of blanket decisions on our behalf.


Yes it’s wild. Imagine if we decided that people can’t be relied on to install good locks for their doors, so we gave the government responsibility for locking and unlocking your door every time you wanted to leave your house.

A lid sensor is just so peripheral. Where do the vendor lock-ins end?


Apple is a vendor, not a government.

A more accurate analogy is a lock installed on your door by a locksmith that uses proprietary parts available only through locksmiths. Which is exactly how a lot of locks work.

Proprietary technology exists in a lot of places, Apple didn't invent this.


> Apple is a vendor, not a government.

Apple is worse than a government. They have more money and reach than many governments, and unlike many government officials, the public doesn't have the power to vote the heads of Apple out of office or vote for who they want as a replacement.

Apple didn't invent proprietary technology, but they leverage the shit out of it in consumer hostile ways just to take even more money from people.


Governments have a monopoly on the use of force, and they exercise it to compel their citizens to do things whether or not they want to. For example, I have to pay taxes, and if I don't, they will use force against me.

Your relationship with Apple is very different. If you don't like Apple, you can simply not buy or use their products. You have a choice, and they have no way of compelling you otherwise.


The inability to use force doesn't make corporate power any less potent; it only makes it a different kind of power. Yes, Big Tech cannot arrest me or throw me in jail, but that doesn't mean they don't wield other kinds of enormous power over my day-to-day life.

And unlike my (technically) democratically elected government, corporations do not have to answer to the people they exert their power over.


I'm not trying to say that big tech doesn't have any significant power; of course they do. They certainly have a lot of control over information and how it's shared. But I think that is unequivocally a lesser power than being able to imprison someone or put them to death. The fact that some small number of government officials are elected might be a rationale for that power, but it doesn't decrease it in any way.


The problem is that, with enough money, you can buy the people who have the power to imprison someone or worse.


Yea, that's a much better analogy. We don't want the lock vendor to decide how and when we lock our doors and how we fix them when they break. We don't want our stove vendor to decide what food we're allowed to cook, how many burners can be running at once, and what parts we use to repair it. We don't want our car manufacturer to decide where we can drive our car and who repairs it.

Yet, somehow, when it comes to technology products, we accept the manufacturer butting in to tell us how not to use them, and how not to repair them.


My stove, my car, and my locks are all opinionated in their design and use proprietary parts. None of them were designed to my personal requirements. Many of the products that I buy do in fact, not work exactly how I want them to, nor do they facilitate my desire to change them.

I can't name a single product in my house that uses any sort of open hardware design, except for the things I've 3D printed or built myself.


A better analogue then would be that the developer who built your house insists on a specific type of lock.

There’s a whole repairability movement going on to maintain access to third party replacement parts for cars and appliances. This is a recent design choice that is being enforced by manufacturers. Historically, people have been able to repair everything they owned. Locking everything down is bad for consumers.


Developers normally do pick the parts that come on a house when they build it.

I understand arguments for repairability, and in most cases, I agree with them. But these things aren't boolean situations where things are either repairable or they are not. There's a lot of nuance in how things are designed and how repairable they are as an inherent part of that design. Ultimately, I agree that artificial lock-in for no reason other than that lock-in is a bad thing for consumers. But not everything is really that simple.

> Historically, people have been able to repair everything they owned.

It all depends on how you define "able". Most people have lacked the technical ability to repair most things for thousands of years. And most things that you own today, you are permitted to repair to the best of your ability.


I quite like this analogy, I hope I can remember it for the appropriate moment.


I dislike Apple's lock-in tactics, but I dislike gross fear-mongering exaggerations even more.

How'd we get to tyrannical government oversight from shitty corporate control? Sorry, I think I slipped on that slippery slope.

The better analogy would be "door lock vendor requires you to buy their door frame to make their door lock work with the security guarantees you chose to buy into."

Government should stay out of our private lives, but this kind of jumpy fear-mongering is what makes people lose focus, and when people are run by fear that's when the real psychopaths start taking advantage. Your fear mongering is creating the very government tyranny you're mongering about.


You mean like a prison?


> As the user and owner of the product, I should be the sole decider about my own security posture, not some company who doesn’t know my use case or needs.

It's not so cut and dried, though. The "user" and the "owner" of a product are not always the same person, but hardware security impacts the "user" more than the "owner".


How does Apple know the owner of the product has authorized the HW change?

There's a secondary argument you could make here: because replacements must be genuine Apple parts with uniform behavior and tolerances, the secondary market is stronger and Apple products hold their resale value better, because you're not going to encounter a MacBook with an arbitrary part replaced that you, as the second-hand buyer, know nothing about. (This is why the secondary market for cars doesn't work without the ability to look up a car's history by VIN.)


Apple doesn't need to know. Once it's sold Apple is no longer the owner.


And when Apple designs their products, they get to decide how to design them.

You can do whatever you want with your computer. But nobody has to design it the way you like it.


What happens when you indirectly cause the machine to fail by installing some shoddy third-party part? Are you still going to claim warranty? Walk into an Apple Store to ask for help?



Huh? Explain more.


Generally speaking companies are not liable for failures due to the customer's own modifications to the product.


Practically speaking, however, they are liable for the time to service those customers and diagnose product issues to determine that the customer was at fault. And, that extends to any future buyers of used devices. And, any resulting displeasure from customers, even though it wasn't Apple's fault.

These sorts of things are exactly the types of problems that exist in the used car market, for exactly the same reasons.


Yep. That’s why I think Apple tends to lock down their spare parts.


What about a work computer? You're not the owner, but presumably you appreciate being able to feel that your work computer is still secure.


If it's owned by the company then I don't care what they do since that's no longer my responsibility.


That car comparison doesn't work here. You can't be sure about the true history of a car, only what was reported.

When I replace a wheel bearing assembly in my driveway, you still can't see that by looking up my VIN. Nobody knows except myself and the person I bought the parts from.

Was it a dealer part? An OEM part? A poor quality replacement? Can't tell without looking.

This might actually support Apple's side of the argument, although I do not. I don't think we need some Carfax equivalent for MacBooks.


> This might actually support Apple's side of the argument, although I do not. I don't think we need some Carfax equivalent for MacBooks.

In some ways, Apple's scheme is better than Carfax. In other ways, it's worse.

It's worse because you can't get access to the repair history of a device.

It's better because you can actually have a reasonable degree of confidence that no "driveway repairs" have taken place since Apple's scheme is not known to be broken.


I think we should stop using "driveway repairs" as a derogatory term. There's nothing wrong with a car owner repairing their own car. Years ago, that was a very usual, normal thing to do. I replaced my own wheel bearings in my garage, and have been driving on them for 5 years. It's not that difficult, and doing it yourself doesn't make your car unsafe or defective.

Kind of scary how "repairing your own things yourself" has fallen so far out of fashion. We should be applauding and encouraging people to build these kind of skills, not insulting them.


I would have thought most people here are doing much more complicated work all day.

All four bearings are part of an assembly that bolts in. 8 or 12 bolts depending on position. I'm lucky that I don't even need a press.

The wheel comes off (5 bolts), the brakes come off (2 bolts), the axle/hub bolt comes out (1 bolt), and then on the front there are four bolts holding the assembly to the car. On the rear, nothing holds it on except that hub bolt.

Use a torque wrench to get them to spec. The kits came with new bolts. The axle bolts go on tight tight.


This is my biggest complaint with the strict "my device, my rules" people.

I want Apple to lock down my device against customization, repairs, etc.

I know I am never going to install an app through means other than the app store, even if I could. I know I'm never going to repair my device through anyone other than Apple, even if I could. I want to know that my device will be a $1,000 paperweight to anyone who steals it.

I want to pay Apple to ensure there are no "driveway repairs".

A number of years ago I accidentally ended up with a second hand iPhone with a shitty "fake" screen repair. I had no way of knowing it wasn't an Apple screen. But it fucked me over as soon as it started failing a couple months after I bought it.

I get tired of the people demanding that a company, with willing, paying customers, isn't allowed to protect their customers because they want something the company doesn't offer. Fuck right off with that shit and buy from a company that does offer that.


Apple aren’t aiming to protect buyers of pre-owned devices.

If they could get away with it, they’d likely prevent resale entirely.


Why would they? Lots of people sell their old phone to pay for the latest model. Killing the resale value will decrease new sales.


Resale through Apple only -

Apple already will give you discounts if you upgrade some things.

So the resale value will continue albeit at a fixed price


They offer a pittance compared to the 'normal' second hand market. If people can't get enough for the phone the upgrade will not be bought.


I feel you're just mad because your expectations of buying a second hand phone were not met.


I had a similar experience myself paying for screen repair in SF and getting back a phone with a butchered display. Why wouldn’t you get mad for spending money and not having your expectations met?


This is solved by repair shop warranty and reputation.

They butchered your repair, you demand a fix or a compensation for a new phone. That's what customer protection laws are for.


If you need consumer protection laws then clearly reputation isn’t worth much. The issue with reputation is that society has grown so large and impersonal that we’re constantly facing interactions with unknown people.


I'm sorry for my candor, but your argument is so silly, it rubs me the wrong way.

Laws are how society operates.

If you need traffic rules (those are defined by laws fyi) then clearly individual's ability to drive isn't worth much. Let's abolish car ownership, make Apple operate all ground transportation and prohibit anyone else from deciding where Apple-operated cars go, what are operational hours and where the stops are.


> Let's abolish car ownership, make [car manufacturers] operate all ground transportation and prohibit anyone else from deciding where [manufacturer]-operated cars go, what are operational hours and where the stops are.

Shhhhhh! Don’t give them any more bad ideas or they might actually do it.


It wouldn't be difficult for Apple to add a page in the device settings that shows whether the device contains any non-genuine components.


Does your grandma decide her own “security posture”? Does she even know what that means?


Your grandma is not the target of state level spy rings...

The noise made about security is absolutely ridiculous.


She is however the target of pretty much every financial scam on the planet, many of which rely on convincing folks to hand over the keys to their (digital) castle...


Which financial scams involve such attacks? Is there a single scam that this measure would prevent?


I'm not aware of any that this particular sensor would mitigate. I think the idea that security is only for people targeted by nation-states is not a realistic view of the modern world (and, moreover, if we decide that normal people don't need enhanced security measures, it becomes trivial to identify dissidents by the fact that they implement security measures).


State hacking tech leaks to average hackers and scammers over time. Scammers today are using nation state tech from a decade ago.


My dude, an Indian is going to call your Apple-using grandmother and tell her that he works for "the Microsoft" and he needs her to give him all her banking details, or go to a bitcoin ATM, or buy a stack of $500 gift cards, and she's going to do it.

The sensor in her macbook lid does not matter! Get real.


Who are you to decide what matters and what doesn’t?

If you were a journalist reporting on russia or the UAE it would certainly matter.

Not to mention that it’s not that hard to imagine an AI tool being paired with 24/7 surveillance that reports back private information it hears.

It’s also not hard to imagine your average hackers getting their hands on a tool like that after a couple years of governments deploying it.


You're wack. Do you think a locked down laptop lid sensor will stop them from spiking your tea with polonium, or shooting you with a ricin BB, or breaking into your home when you're asleep and jabbing a needle into your neck while holding a pillow over your face, or kidnapping you and breaking your bones with a sledge hammer until they've gotten their rocks off?

This laptop lid threat is fantasy. Get fucking real.


both my grandmothers are dead


What’s the point of this comment?


It answers the question you asked.

Another answer, mine, is that one grandmother flew bombers, jets, Spitfires, etc. in WWII and ran a post-war international logistics company after that. The other did "stuff" with math.

ie. Both capable of understanding a security posture.

How about your grannies?

You might want to ask well formed questions in future, on a site such as HN the set of all grandmothers is hardly homogeneous.


You do get to decide (buy another product with a different value proposition).


It's not that crazy when people seem to cheer for a nanny state at every turn. Especially if said nanny state bombards them with propaganda about all the dangers they'll face if they don't "comply".

1984 references may have seemed far-fetched, but after the suppression of rights using covid as an excuse, people have little to no recourse to claim control back. Apple was always famous for their walled garden and tight control, but now we have Google becoming like Apple (you can't install things on your device unless you go to them with your private details), ID to track your movements because "protect the children" (effectively blocking news even), and chat control (very similar to installing a camera in your home and recording all your conversations).

Corps and governments are relying on each other to strengthen their control and it's not a surprise.


Keeping a victim device unlocked when the lock state is responsible for encryption key state is a totally legitimate risk.

With that being said, I don’t think Apple see this specific part as a security critical component, because the calibration is not cryptographic and just sets some end point data. Apple are usually pretty good about using cryptography where they see real security boundaries.


Don't invent reasons for Apple to continue to have a stranglehold over their monopoly of critical computing infrastructure.

Companies as big as Apple and Google that provide such immensely important platforms and devices should have their hands tied by every major government's regulatory bodies to keep the hardware open for innovation without taxation and control.

We've gone from open computing to serfdom in the last 20 years, and it's only getting worse as these companies pile up trillion after trillion of nation-state-equivalent market cap.


The government regulators also have an interest in knowing the laptops they buy for eg the NSA have authenticated parts to avoid supply chain attacks.

If you're selling cell phones you already spend plenty of time satisfying regulators and vendors from all over the world. The cell phone companies aren't the ones with power here. (In general tech people have no political power because none of them have any social skills.)


Because the NSA is buying used laptops?


Supply chain attacks don't generally target the second hand market. Much more effective to upstream your attack to the vendor Apple buys parts from in China, and compromise every MacBook in one fell swoop


That's too discoverable to work. Supply chain attacks are by state actors who can interrupt specifically your order on its way to you and silently replace parts in it.


It doesn't need to be encrypted if it's one-time programmable. The calibration data is likely written into efuses which are physically burned and cannot be reset.
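As a toy illustration of the write-once semantics being described (a sketch of the general idea only, not Apple's actual efuse design), calibration that can be burned exactly once rejects any later rewrite, so it can't be silently reprogrammed afterward:

```python
class OTPCalibration:
    """Toy one-time-programmable calibration value: the first burn
    succeeds, and every subsequent write attempt is rejected."""
    def __init__(self):
        self._value = None        # unburned state

    def burn(self, value):
        if self._value is not None:
            raise PermissionError("calibration already burned")
        self._value = value

    @property
    def value(self):
        return self._value
```

In real efuses this is enforced physically (the fuse is destructively blown), not in software, which is why no cryptography is needed to make the value tamper-resistant after programming.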


The sensor and its data stream would need to be authenticated, though.


For the mic cut-off? My understanding is that it outputs an electrical signal that's routed to the audio codec that literally prevents the audio from getting to system memory in the same way a physical switch would. It autonomously, at an electrical level, disconnects the mic without OS or software intervention. As it cannot be programmed again, you would have to crack open the laptop and modify the PCB to override it.


Oh, I understand now - you're right, OTP sensor data does protect against a real threat model I hadn't considered before:

* A remote attacker gains whatever privilege lets them get to the sensor SPI.

* Without OTP calibration, the attacker could reprogram the sensor silently to report a different endstop, keeping the machine awake and the hard-cuts active.

* With OTP calibration, this is closed.

So perhaps it is more security-related than I initially thought.

I was more considering the counterfeit part / supply chain / evil maid scenario, where the fact that Apple's sensors are OTP is meaningless (since a replacement sensor doesn't need to be, plus, you could just put a microcontroller pretending to be a sensor in there since there's no actual protection).

Thanks, you made me think again and figure it out!


A properly gated, user-authorized override in recoveryOS or similar would give advanced users and third-party repair shops a legitimate path without blowing up the security model.


Then Apple tying the angle sensor to the microphone status is a security issue. I would read that as a cheap excuse, to be honest.


If repair shops can buy the $130 calibration machine, presumably the super spy in this story (who for some reason couldn't steal the data while they were replacing the lid sensor, nor can they steal the data when the laptop's in use, but somehow can steal the data when it's idle with the lid down) can also get a calibration machine, and then deliberately set the zero point incorrectly.


Yes.

“Sure, you can borrow my laptop. It’s fine. Take it home. I promise not to spy on you while the lid is closed. I promise not to record aaaaaany audio or anything! And I definitely won’t hear any conversation that contains information that I’ll use to stalk you later!”

There are a million ways that some nefarious person could spy on another, but at least this isn’t one of them.

And I am a very suspicious person, thanks to some eye opening experiences that I’ve had. When someone says that they want to do something that not a lot of people want to do, I immediately wonder how they will use that against myself or someone else. Because that has happened multiple times to me.

I also hate that I am suspicious of people who want to at least have the opportunity to fully own their devices; something that is perfectly reasonable to want, but I am. What would that additional ability do for them? What will they be capable of doing that they can’t do now? How and when will they use it to get what they want out of someone? Or out of me?

If you don’t think like this, I really envy you. For the longest time, every teacher, every supervisor, every commander, every non-familial authority figure I had until I was probably 35, used and manipulated me for the purpose of advancing themselves. Every single one. The ones in the military didn’t even attempt to hide it.

I’m so scarred because of people convincing me to help them screw me over that I no longer trust anyone who is concerned about things like laptop lid angle sensors. Because who are you trying to screw over and why does that angle sensor stand in your way?


> When someone says that they want to do something that not a lot of people want to do, I immediately wonder how they will use that against myself or someone else. Because that has happened multiple times to me.

I’m intrigued. Would you be comfortable sharing some of these real experiences here (with sensitive details fudged/removed)?


I'd rather not. They're very foggy memories now, and the ones that aren't are all attempted sexual abuse. Conmen are everywhere, and they will say things in the nicest most innocuous ways possible to sway you to do things for them. They'll do it over time, and they will very gradually ramp things up. "this is just a small change from that, what's the matter" ugh. people suck.


I think it's possible to advocate for device ownership and repair rights without having malicious intent


that is correct. my specific history pushes me in the direction where i suspect malevolence, though. yours might not. but let me tell you; people are absolutely capable of the worst things you can imagine, and if those people require your cooperation they will try the carrot long before they try the stick.


I mean nobody expected pager bombs, but here we are.


If you have access to my laptop long and deep enough to replace the hinge sensor with a fake one that prevents the lid from closing as a way to turn it into a recording device -- which of course would also require installing software on it -- instead of just putting a tiny microphone into it (or my bag), you are simultaneously a genius and dumb. And if you really are going to that level of effort, hoping that I don't notice my laptop failing to go to sleep when I close it so you might be able to steal it is crazy when you can 100% just modify the hardware in the keyboard to log my password.

Hell: what you really should do is swap my entire laptop with a fake one that merely shows me my login screen (which you can trivially clone off of mine as it happily shows it to you when you open it ;P) and asks for my password, at which point you use a cellular modem to ship it back to you. That would be infinitely easier to pull off and is effectively game over for me because, when the laptop unlocks and I don't have any of my data (bonus points if I am left staring at a gif of Nedry laughing, though if you showed an Apple logo of death you'd buy yourself multiple days of me assuming it simply broke), it will be too late: you'll have my password and can unlock my laptop legitimately.

> There are good security reasons for a lot of what Apple does.

So, no: these are clearly just excuses, sometimes used to ply users externally (such as yourself) and sometimes used to ply their own engineers internally (such as wherever you heard this), but these mitigations are simply so ridiculously besides the point of what they are supposedly actually securing that you simply can't take them seriously if you put more than a few minutes of thought into how they work... either the people peddling them are incompetent or malicious, and, even if you choose to believe the former over the latter, it doesn't make the shitty end result for the owner feel any better.


I can imagine a different attack vector: A malicious actor doing laptop repairs can absolutely replace the hinge sensor and install software on it. They could draw in people by offering cheaper prices, then steal their info or use it to setup more complex scams.

The counterpoint to this is that car body shops can also plant recording devices in your car. This is true, but the signal-to-noise ratio in terms of stealing valuable data is much lower. I don't have data to back this up, but I assume way more people use their laptops for online purchases and accessing their bank account than doing the same with phone calls in the car.


A repair worker can install software on it without replacing the sensor. Also add a tiny mic without installing the software. Or both.

I mean... someone could replace your car's brake pads with pieces of wood or plastic, which would seemingly brake fine in the repair shop parking lot but fail horribly (burn and worse) when you needed them later. Somehow we still let people replace brake pads without having to program in serial numbers... for now.


Your laptop can be compromised during a trip to a foreign state, by state actors.

Travelling back you would notice a microphone, and would notice nothing on the laptop.


> This could then be combined with some software on the machine to turn a MacBook into a difficult to detect recording device, bypassing protections such as the microphone and camera privacy alerts, since the MacBook would be closed but not sleeping.

Isn't this already possible if the MB is connected to a power source like a portable battery?


Isn't there software that does exactly this? Called caffeine, I believe?


ITYM "caffeinate"

  DESCRIPTION
     caffeinate creates assertions to alter system sleep behavior.  If no
     assertion flags are specified, caffeinate creates an assertion to prevent
     idle sleep.  If a utility is specified, caffeinate creates the assertions
     on the utility's behalf, and those assertions will persist for the
     duration of the utility's execution. Otherwise, caffeinate creates the
     assertions directly, and those assertions will persist until caffeinate
     exits.


Installing software generally requires user permission. Replacing hardware can be done surreptitiously. At least that's the steelman variant of the security argument.


`caffeinate` is installed by default on macOS.


those are over-complicated bollocks. there are easier and less detectable software only ways to do all that.


If you were to come up with one, I suspect you'd have a solid bug bounty waiting for you.


you just set the pc to not sleep on screen down? it is literally a feature


As far as I know the mic is still shut off when the machine is set to clamshell mode. That's the point. You cannot use the mic when the lid is closed. It's a hardware cut-off, you cannot configure it in software. Hence my comment about the bug bounty.


$5 USB mic?


If the point is to hide the recording that's not a great way. Especially when many corporate IT solutions monitor USB device connections.


How you can characterize this type of threat as a “James Bond” fantasy in 2025 is breathtaking.

The Federal government is forensically collecting phones during routine border crossings to see if you reposted Fat JD Vance memes. That's publicly disclosed and well known.

I have no trouble believing that potential enemies of the state like the governor of California and his cabinet are bugged. If I were a person like that, I’d try to take supply chain countermeasures.


If we're talking Bond-tier assessments then Apple already sell a covert microphone: AirTags. They “have no microphone” according to product specs, but they do have a huge speaker, and a speaker and microphone are the same thing like a generator and motor are the same thing: https://in.bgu.ac.il/en/Pages/news/eaves_dropping.aspx


Just because a speaker can technically operate as a microphone doesn’t mean that AirTags would be capable of this. The speaker driver definitely doesn’t have any recording capability. The only reason the 3.5mm jack mentioned in your article is capable of this is because the jack has functionality to allow analog recording for mic/line in cases. No dedicated speaker driver would have this because it would be worthless and costly.


There’s a fairly large jump between having a microphone and being able to be used as a surveillance device.


Hmm…do you have the in-browser DNS over HTTPS resolver enabled? I personally can't reproduce this, but I'm using DoH with 1.1.1.1.

I've noticed that both Chrome and Firefox tend to have less consistent HTTP/3 usage when using system DNS instead of the DoH resolver because a lot of times the browser is unable to fetch HTTPS DNS records consistently (or at all) via the system resolver.

Since HTTP/3 support on the server has to be advertised by either an HTTPS DNS record or a cached Alt-Svc header from a previous successful HTTP/2 or HTTP/1.1 connection, and the browsers tend to prefer recycling already open connections rather than opening new ones (even if they would be "upgraded" in that case), it's often much trickier to get HTTP/3 to be used in that case. (Alt-Svc headers also sometimes don't cache consistently, especially in Firefox in my experience.)
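For a concrete sense of what the browser is reading in the Alt-Svc case, here is a minimal sketch (mine, not any browser's actual code) of parsing an Alt-Svc header value to discover an advertised h3 endpoint, following the basic RFC 7838 syntax:

```python
def parse_alt_svc(header: str) -> dict:
    """Parse an Alt-Svc header value into {protocol-id: authority}.

    Illustrative sketch only: a real parser must also honor parameters
    like ma= (max age) and persist=, plus quoted-string edge cases.
    """
    services = {}
    for entry in header.split(","):
        entry = entry.strip()
        if entry == "clear":
            # "Alt-Svc: clear" invalidates all cached alternatives.
            return {}
        # Each entry looks like: h3=":443"; ma=86400
        first, *params = entry.split(";")
        if "=" not in first:
            continue
        proto, authority = first.split("=", 1)
        services[proto.strip()] = authority.strip().strip('"')
    return services

print(parse_alt_svc('h3=":443"; ma=86400, h3-29=":443"'))
# {'h3': ':443', 'h3-29': ':443'}
```

When the browser sees an `h3` entry here (or an equivalent HTTPS DNS record), it knows it may race a QUIC connection to that authority on the next request.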

Also, to make matters even worse, the browsers, especially Chrome, seem to automatically disable HTTP/3 support if connections fail often enough. This happened to me when I was using my university's Wi-Fi a lot, which seems to block a large (but inconsistent) amount of UDP traffic.

If Chrome enters this state, it stops using HTTP/3 entirely, and provides no reasoning in the developer tools as to why. (Normally, if you enable the "Protocol" column in the developer tools Network tab, you can hover over the listed protocol to get a tooltip explaining how Chrome determined the selected protocol was the best option available; this tooltip doesn't appear in this "force disabled" state.) Annoyingly, Chrome also doesn't (or can't) isolate this state to just one network, so I suddenly stopped being able to use HTTP/3 at home, either.

The only actual solution/override is to go into about:flags (yes, I know it's chrome://flags now, I don't care) and make sure that the option for QUIC support is manually enabled. Even if it's already indicated as "enabled by default", this doesn't actually reflect the browser's true state.

Firefox similarly gives up on HTTP/3, but its mechanism seems to be much less "sticky" than Chrome's, and I haven't had any consistent issues with it.

To debug further: I'd first try checking to see if EncryptedClientHello is working for you or not; you can check https://tls-ech.dev to test that. ECH requires HTTPS DNS record support, so if that shows as working, you can ensure that your configuration is able to parse HTTPS records (that site also only uses the HTTPS record for the ECH key and uses HTTP/1.1 for the actual site, so it's fairly isolated from other problems). Next, you can try Fastly's HTTP/3 checker at https://http3.is which has the benefit of only using Alt-Svc headers to negotiate; this means that the first load will always use HTTP/2, but you should be able to refresh the page and get a successful HTTP/3 connection. Cloudflare's test page at https://cloudflare-quic.com uses both HTTPS DNS records and an Alt-Svc header, so if you are able to get an HTTP/3 connection to it first try, then you know that you're parsing HTTPS records properly.

Let me know how those tests perform for you; it's possible there is an issue in Firefox but it isn't occurring consistently for everyone due to one of the many issues I just listed.

(If anyone from Cloudflare happens to be reading this, you should know that you have some kind of misconfiguration blocking https://cloudflare-quic.com/favicon.ico and there's also a slight page load delay on that page because you're pulling one of the images out of the Wayback Machine via https://web.archive.org/web/20230424015350im_/https://www.cl... when you should use an "id_" link for images instead so the Internet Archive servers don't have to try and rewrite anything, which is the cause of most of the delays you typically see from the Wayback Machine. (I actually used that feature along with Cloudflare Workers to temporarily resurrect an entire site during a failed server move a couple of years back, it worked splendidly as soon as I learned about the id_ trick.) Alternatively, you could also just switch that asset back to https://www.cloudflare.com/img/nav/globe-lang-select-dark.sv... since it's still live on your main site anyway, so there's no need to pull it from the Wayback Machine.)

I've spent a lot of time experimenting with HTTP/3 and its weird quirks over the past couple of years. It's a great protocol, it just has a lot of bizarre and weirdly specific implementation and deployment issues.


> If anyone from Cloudflare happens to be reading this, you should know that you have some kind of misconfiguration

Thanks for the detailed information. I'm a someone from Cloudflare responsible for this, we'll get it looked at.


Great details; thanks!

> Hmm…do you have the in-browser DNS over HTTPS resolver enabled? I personally can't reproduce this, but I'm using DoH with 1.1.1.1.

Yes, using DoH and Cloudflare (1.1.1.1). Have also tried it with 1.1.1.1 turned off; no differences.

As for the other suggestions, my results were the same with Firefox on both macOS and Fedora Linux:

- https://tls-ech.dev - EncryptedClientHello works on first try.

- https://http3.is - HTTP/3 works on second or third soft refresh.

- https://cloudflare-quic.com - (This is the one I reported initially) Stays at HTTP/2 despite numerous refreshes, soft or hard.


> What they're really afraid of is that people will read content using LLM inference and make all the ads and nags and "download the app for a crap experience" go away -- and never click on ads accidentally for an occasional ka-ching.

See, I don't think this is right either. Back during the original API protests, several people (including me!) pointed out that if the concern was really that third-party apps weren't contributing back to Reddit (which was a fair point: Apollo never showed ads of any kind, neither Reddit's nor their own) then a good solution would be to make using third-party apps require paying for Reddit Premium. Then they wouldn't have to audit all of the apps to ensure they were displaying ads correctly and would be able to collect revenue outside of the inherent limitations of advertising.

Theoretically, this should have been a straight win for Reddit, especially given the incredibly low income that they've apparently been getting from ads anyway (I can't find the report now so the numbers might not be exact, but I remember it being reported that Reddit was pulling in something like ~$0.60 per user per month versus Twitter's slightly better ~$8 per user per month and Meta's frankly mindblowing ~$50 per user per month) but it was immediately dismissed out of hand in favor of their way more complicated proposal that app developers audit their own usage and then pay Reddit back.

My initial thoughts were either that the Reddit API was so broken that they couldn't figure out how to properly implement the rate limits or payment gating needed for the other strategy (even now the API still doesn't have proper rate limits; they just commence legal action against anyone they find abusing it rather than figuring out how to lock them out, and the best they can really do is the sort of basic IP bans they're using here), or that the Reddit higher-ups were so frustrated that Apollo had worked out a profitable business model before them that they just wanted to deploy a strategy targeted specifically at punishing them.

But it quickly became clear later that Reddit genuinely wasn't even thinking about third-party apps. They saw dollar signs from the AI boom, and realized that Reddit was one of the largest and most accessible corpora of generally high-quality text on a wide variety of topics, and AI companies were going to need that. Google showing an intense dependency on Reddit during the blackout didn't hurt either (yes, at this point I genuinely believe the blackout actually hurt more than it helped by giving Reddit further leverage to use on Google, hence why they were one of the first to sign a crawler deal afterwards).

So they decided to use any method they could think of to lock down access to the platform while keeping enough people around that the Reddit platform was still mostly decent enough to be usable for AI training and pivoted much of their business to selling data. All of this while claiming, as they're still doing today with the Internet Archive move, that this is somehow a "privacy measure" meant to ensure deleted comments aren't being archived anywhere.

The same thing basically happened with Stack Exchange, except they had much less leverage over their community because the entire site was previously CC licensed and they didn't have any real authority to override that beyond making data access really annoying.

The good news is that it really does seem like "ingest everything" big-model AI is the least likely to survive at this point. Between ChatGPT scaling things down massively to save on costs with the GPT-5 update and the Chinese models somehow making do with less data and slower chips by just using better engineering techniques, I highly doubt these economics around AI are going to last. The bad news is that, between stuff like this and the GitHub restructuring today, I don't think Big Tech has any plans for how they're going to continue functioning in an economy that isn't entirely based on AI hype. And that's really concerning.


The infamous Dropbox comment[0] actually didn't even cite rsync; it recommended getting a remote FTP account, using curlftpfs to mount it locally, and then using SVN or CVS to get versioning support.

The double irony of that comment is that pretty much all of those technologies listed are obsolete now while Dropbox is still going strong: FTP has been mostly replaced with SFTP and rsync due to its lack of encryption and difficult to manage network architecture, direct mounting of remote hosts still happens but it's more typical in my experience to have local copies of everything that are then synced up with the remote host to provide redundancy, and CVS and SVN have been pretty much completely replaced with Git outside of some specialist and legacy use cases.

The "evaluating new products" xkcd[1] is extremely relevant, as is the continued ultra-success of Apple: developing new technologies, and then turning around and marketing those technologies to people who aren't already in this field working on them are effectively two completely different business models.

[0]: https://news.ycombinator.com/item?id=9224 [1]: https://xkcd.com/1497/


It's also not the same thing as Dropbox was offering: that's a description of a network drive, but the key thing about Dropbox is that it's a syncing engine. It's a much harder thing to do, but with very big benefits: much faster (since it's just reading off disk) and offline access.


The desktop issue was an intentional change that happened sometime in like 2017 or so.

The original functionality of the quality selector was to throw out whatever video had been buffered and start redownloading the video in the newly selected quality. All well and good, but that causes a spinning circle until enough of the new video arrives.

The "new" functionality is to instead keep the existing-quality video in the buffer and have all the new incoming video be at the new quality. The idea is that you would have the video playing, change the quality, and it keeps playing until a few seconds later the new buffer hits and you jump up to the new quality level. YouTube also only buffers a few seconds of video these days (a change made a few years prior to this; back in the Flash era YouTube would just keep buffering until you had the entire video loaded, but that came to be seen as a waste of both YouTube's bandwidth and the user's, since there was always the possibility of the user clicking off the video; the adoption of better connection speeds, more efficient video codecs, and widespread and expensive mobile data caps made the smaller buffer the better behavior for most people). Combined, these mean that for most people, changing quality is a "transparent" operation that doesn't "interrupt" the video.
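The difference between the two behaviors can be shown with a toy model (hypothetical code, not YouTube's actual player): the old behavior would clear the buffer on a quality change, while the new behavior only affects future segment fetches.

```python
from collections import deque

class SegmentBuffer:
    """Toy model of the player behavior described above: changing
    quality does not discard buffered segments; only newly fetched
    segments use the new quality."""

    def __init__(self, quality):
        self.quality = quality
        self.buffer = deque()

    def fetch_segment(self, index):
        # New segments are always fetched at the currently selected quality.
        self.buffer.append((index, self.quality))

    def set_quality(self, quality):
        # The old behavior would call self.buffer.clear() here, causing a
        # visible rebuffer; the new behavior just changes future fetches.
        self.quality = quality

    def play_next(self):
        return self.buffer.popleft() if self.buffer else None

buf = SegmentBuffer("480p")
buf.fetch_segment(0)
buf.fetch_segment(1)
buf.set_quality("1080p")  # no spinner, playback continues...
buf.fetch_segment(2)      # ...and the quality jump lands a couple of segments later
print([buf.play_next() for _ in range(3)])
# [(0, '480p'), (1, '480p'), (2, '1080p')]
```

The lag users notice is exactly those already-buffered old-quality segments draining out before the new-quality ones arrive.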

In general, it's a behavior that seems to come from the fairly widespread mid-2010s UX theory that it's better to degrade service or even freeze entirely than to show a loading screen of some kind.

It can also be seen in Chrome sometimes on high-latency connections: in some cases, Chrome will just stop for a few moments while performing DNS resolution or opening the initial connections rather than displaying the usual "slow light gray" loading circle used on that step, seemingly because some mechanism within Chrome has decided that the requests will probably return quickly enough for it to not be an issue. YouTube Shorts on mobile has similar behavior on slow connections: the whole video player will just freeze entirely until it can start playing the video, with no loading indicator whatsoever.

Another example is Gmail's old basic HTML interface versus the modern AJAX one: an article which I remember reading, but can't find now, found that for pretty much every use case the basic HTML interface was statistically faster to load, but users subjectively felt that the AJAX interface was faster, seemingly just because it didn't trigger a full page load when something was clicked on.

And, I mean, they're kind of right. It's nerds like us that get annoyed when the video quality isn't updated immediately, the average consumer would much rather have the video "instantly load" rather than a guarantee that the video feed is the quality you actually selected. It's the same kind of thought process that led to the YouTube mobile app getting an unskippable splash screen animation last year; to the average person, it feels like the app loads much faster now. It doesn't, of course, it's just firing off the home page requests in the background while the locally available animation plays, but the user sees a thing rather than a blank screen while it loads, which tricks the brain into thinking it's loading faster.

This is also why Google's Lighthouse page loading speed algorithm prioritizes "Largest Contentful Paint" (how long does it take to get the biggest element on the page rendered), "Cumulative Layout Shift" (how much do things move around on the page while loading), and "Time to Interactive" (how long until the user can start clicking buttons) rather than more accurate but "nerdy" indicators like Time to First Byte (how long until the server starts sending data) or Last Request Complete (how long until all of the HTTP requests on a page are finished; for most modern sites, this value is infinity thanks to tracking scripts).

People simply prefer for things to feel faster, rather than for things to actually be faster. And, luckily for Internet companies, the former is usually much easier to achieve than the latter.


> In general, it's a behavior that seems to come from the fairly widespread mid-2010s UX theory that it's better to degrade service or even freeze entirely than to show a loading screen of some kind.

> It's the same kind of thought process that led to the YouTube mobile app getting an unskippable splash screen animation last year; to the average person, it feels like the app loads much faster now. It doesn't, of course, it's just firing off the home page requests in the background while the locally available animation plays, but the user sees a thing rather than a blank screen while it loads, which tricks the brain into thinking it's loading faster.

So they decided it's better to show lower-quality content (or not update the screen) than a loading screen, and it's the same school of thought that led to a loading screen being implemented? I agree both examples could be seen as intended to make things "feel" faster, but it seems like two different philosophies towards that.

(Also, I remember when quality changes didn't take effect immediately, but I've been seeing them take effect immediately and discard the buffer for at least the past few years-- at least when going from "Auto" that it always selects for me to the highest-available quality.)


> The idea is that [...] a few seconds later the new buffer hits and you jump up to the new quality level.

Except "a few seconds later" can become minutes. Sometimes it just keeps going at the lower quality while the UI claims to play a noticeably higher resolution than the one actually playing. To be clear, I don't care that the "automatic" quality is actually automatic, I care that the label blatantly lies about which resolution is playing. "Automatic (1080p60)" shouldn't look like a video from 2005.


> The actual scary stuff is the dilution of expertise, we contributed for a long time to share our knowledge for internet points (stack overflow, open source projects, etc), and it has been harvested by the AIs already, anyone that pays access to these services for tens of dollars a month can bootstrap really quickly and do what it might had needed years of expertise before.

What scares me more is the opposite of that: information scarcity leading to less accessible intelligence on newer topics.

I’ve completely stopped posting on Reddit since the API changes, and I was extremely prolific before[1] because I genuinely love writing about random things that interest me. I know I’m not the only one: anecdotally, the overall quality of content on Reddit has nosedived since the change and while there doesn’t seem to be a drop in traffic or activity, data seems to indicate that the vast majority of activity these days is disposable meme content[2]. This seems to be because they’re attempting desperately to stick recommendation algorithms everywhere they can, which are heavily weighted toward disposable content since people view more of it. So even if there were just as many long discussion posts like before, they’re not getting surfaced nearly as often. And discussion quality when it does happen has noticeably dipped as well: the Severance subreddit has regularly gotten posts and comments where people question things that have already been fully explained in the series itself (not like subtext kind of things, like “a character looked at the camera and blatantly said that in the episode you’re talking about having just watched” things). Those would have been heavily downvoted years ago, now they’re the norm.

But if LLMs learn from the in-depth posting that used to be prominent across the Internet, and that kind of in-depth posting is no longer present, a new problem presents itself. If, let’s say, a new framework releases tomorrow and becomes the next big thing, where is ChatGPT going to learn how that framework works? Most new products and platforms seem to centralize their discussion on Discord, and that’s not being fed into any LLMs that I’m aware of. Reddit post quality has nosedived. Stack Overflow keeps trying to replace different parts of its Q&A system with weird variants of AI because “it’s what visitors expect to see these days.” So we’re left with whatever documentation is available on the open Internet, and a few mediocre-quality forum posts and Reddit threads.

An LLM might be able to pull together some meaning out of that data combined with the existing data it has. But what about the framework after that? And the language after that? There’s less and less information available each time.

“Model collapse” doesn’t seem to have panned out: as long as you have external human raters, you can use AI-generated information in training. (IIRC the original model collapse discussions were the result of AI attempting to rate AI generated content and then feed right back in; that obviously didn’t work since the rater models aren’t typically any better than the generator models.) But what if the “data wells” dry up eventually? They can kick the can down the road for a while with existing data (for example LLMs can relate the quirks of new languages to the quirks of existing languages, or text to image models can learn about characters from newer media by using what it already knows about how similar characters look as a baseline), but eventually quality will start to deteriorate without new high-quality data inputs.

What are they gonna do then when all the discussion boards where that data would originate are either gone or optimized into algorithmic metric farms like all the other social media sites?

[1]: https://old.reddit.com/user/Nathan2055

[2]: I can’t find it now, but there was an analysis about six months ago that showed that since the change a significant majority of the most popular posts in a given month seem to originate from /r/MadeMeSmile. Prior to the API change, this was spread over an enormous number of subreddits (albeit with a significant presence by the “defaults” just due to comparative subscriber counts). While I think the subreddit distribution has gotten better since then, it’s still mostly passive meme posts that hit the site-wide top pages since the switchover, which is indicative of broader trends.


> What are they gonna do then when all the discussion boards where that data would originate are either gone or optimized into algorithmic metric farms like all the other social media sites?

As people use AI more and more for coding and problem solving, the providing company can keep records and train on them. I.e., if person 1 solved the problem of doing task 2 on product 3, then when person 4 tries to do the same, it can either already be trained into the model or the model can look up similar problems and solutions. This way the knowledge isn't gone or isolated; it's being saved and reused. Ideally this requires permission from the user, but price cuts can be motivating. All the main players today have free versions which can collect interaction data, and with millions of users that's much more than online forums ever had.


This is why I believe that Bluesky and the AT protocol is a significantly more attractive system than Mastodon and ActivityPub. Frankly, we’ve tried the kind of system ActivityPub offers before: a decentralized server network ultimately forming one big system, and the same problems have inevitably popped up every time.

XMPP tried to do it for chat. All the big players adopted it and then either realized that the protocol wasn’t complex enough for the features they wanted to offer or that it was much better financially to invest in a closed system. Sometimes both. The big providers split off into their own systems (remember, Google Talk/Hangouts/Chat and Apple iChat/FaceTime both started out as XMPP front-ends) and the dream of interconnected IMing mostly died.

RSS tried to do it for blogs. Everyone adopted it at first, but eventually content creators came to the realization that you can’t really monetize sending out full-text posts directly in any useful way without a click back to the originating site (mostly defeating the purpose), content aggregators realized that offering people the option to use any front-end they wanted meant that they couldn’t force profitable algorithmic sorts and platform lock-in, and users overwhelmingly wanted social features integrated into their link aggregators (which Google Reader was famously on the cusp of implementing before corporate opted to kill it in favor of pushing people to Google+; that could have potentially led to a very different Internet today if it had been allowed to release). The only big non-enthusiast use of RSS that survives is podcasts, and even those are slowly moving toward proprietary front-ends like Spotify.

Even all the way back to pre-Web protocols: IRC was originally a big network of networks where every server could talk to every other server. As the system grew, spam and other problems began to proliferate, and eventually almost all the big servers made the decision to close off into their own internal networks. Now the multi-server architecture of IRC is pretty much only used for load balancing.

But there’s two decentralized systems that have survived unscathed: the World Wide Web over HTTP and email over SMTP. Why those two? I believe that it’s because those systems are based on federated identities rather than federated networks.

If you have a domain name, you can move the website attached to it to any publicly routable server and it still works. Nobody visiting the website even sees a difference, and nobody linking to your website has to update anything to stay “connected” to your new server. The DNS and URL systems just work and everyone just locates you automatically. The same thing with email: if you switch providers on a domain you control, all the mail still keeps being routed to you. You don’t have to notify anyone that anything has changed on your end, and you still have the same well-known name after the transition.

Bluesky’s killer feature is the idea of portable identities for social media. The whole thing just ties back to a domain name: either one that you own or a subdomain you get assigned from a provider. That means that picking a server isn’t something the average person needs to worry about, you can just use the default and easily change later if you want to and your entire identity just moves with you.

If the server you’re on evaporates, the worst thing that you lose is your activity, and that’s only if you don’t maintain any backups somewhere else. For most people, you can just point your identity at a different server, upload a backup of your old data, and your followers don’t even know anything has changed. A sufficiently advanced client could probably even automate all of the above steps and move your whole identity elsewhere in one click.

Since the base-level object is now a user identity rather than a server, almost all of the problems with ActivityPub’s federation model go away. You don’t deal with blocking bad servers, you just block bad people (optionally using the same sorts of “giant list” mechanisms already available for places like Twitter). You don’t have to deal with your server operator getting themself blacklisted from the rest of the network. You don’t have to deal with your server operator declaring war on some other server operator and suddenly cutting you off from a third of your followers.

People just publish their posts to a server of their choice, others can fetch those posts from their server, the server in question can be moved wherever without affecting anything for those other users, and all of the front-end elements like feed algorithms, post display, following lists and block lists, and user interface options could either be handled on the client-side or by your choice of (transferable) server operator. Storage and bandwidth costs for text and (reasonable) images are mostly negligible at scale, and advertising in clients, subscription fees, and/or offering ancillary services like domain registration could easily pay for everything.

ActivityPub sounds great to nerds who understand all of this stuff. But it’s too complicated for the average social media user to use, and too volatile for large-scale adoption to take off.

AT protocol is just as straightforward to understand as email (“link a website domain if you already have one or just register for a free one on the homepage, and you can easily change in the future”), doesn’t require any special knowledge to utilize, and actually separates someone’s identity and content from the person running the server. Mastodon is 100 tiny Twitters that are somewhat connected together, AT actually lets everyone have their own personal Twitter and connect them all together in a way that most people won’t even notice.


Good post of historical reminders and I appreciate the framing of bluesky's identity approach. Never was sold on Fediverse/ActivityPub as being it and not a fan yet of Bluesky's slow-building-in-public approach but am intrigued by this key facet of the main role the personal domain takes. How can one easily change/migrate their AT identity if they change domains? How is their whole social history transferrable? Like that was one of the problems/unclear things to most about Mastodon - that it actually wasn't that easy to move instances because sure your identity could move but your posts would be on the old instance, so it wasn't really that portable. I'm all about the permanence and data preservation, so I don't want to commit to a platform now without assured control over my data and ability to maintain history/identity in a move. Have enjoyed the centralization and longevity for too long on a place like Twitter to get all loose and ephemeral now.


To change domains, you:

Go into settings, click change handle.

Type in the domain you wish to change to. Click next.

It’ll give you some stuff to put into a DNS TXT entry on that domain. Do that. Click “verify DNS record.”

And that’s it. You’re done. Everything is “transferred.”

The history is transferable for the same reason a domain is transferable to another web host: what does URL stand for again? Uniform resource locator? That is, it’s how you locate something, not what that something is. In this case, the domain isn’t actually your identity: your identity is your DID, “decentralized identifier.” To hand wave slightly, all your content is signed with your DID information, not the URL you use. There’s a service that resolves domains to DIDs. So changing your domain means changing what that service resolves to. That’s why I put “transferred” in quotes above; when changing domains, nothing actually moves.
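To make the DNS step concrete: assuming the convention described above, where the TXT record lives at `_atproto.<handle>` and carries a value like `did=did:plc:...`, verification reduces to simple string matching once the lookup returns. A sketch (the DNS query itself is omitted):

```python
def did_from_txt_records(records):
    """Extract a DID from TXT record values fetched for _atproto.<handle>.

    Simplified sketch: the actual DNS query (e.g. via a resolver
    library) is omitted, and atproto also supports an HTTPS
    /.well-known/atproto-did fallback that isn't shown here.
    """
    for value in records:
        if value.startswith("did=") and value[4:].startswith("did:"):
            return value[4:]  # e.g. "did:plc:abcdef1234567890"
    return None

# Simulated answer to: dig +short TXT _atproto.example.com
answers = ["did=did:plc:abcdef1234567890"]
print(did_from_txt_records(answers))  # did:plc:abcdef1234567890
```

The key property is that the DID on the right-hand side never changes when the domain does; only which domain points at it.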

Now, if you want to change the server where your data is hosted, your PDS, it’s effectively the same thing: you spin up a new server, backfill your data by a backup or by replaying it from the network, and then say “hey here’s a new PDS” to the network.

All of this is possible because of the fundamental design choices atproto makes over the ones ActivityPub does.

Happy to answer more questions. But if data ownership and preservation is a thing for you, you should like atproto.


> Having to visit fifty personal sites would be a pain.

Which is why RSS (and Atom, but I’m just saying RSS because it’s less to type) was such a brilliant invention, and also why it was “killed.”

Everyone is talking about things like “ActivityPub” and “interoperability” and “personalized algorithms” nowadays but RSS supported many of those features twenty years ago.

Yeah, it didn't solve the account portability problem: you'd still need a separate account for each forum and blog you wanted to comment on. OpenID almost solved that but was a nightmare to work with, and Mozilla Persona (not the same thing as Mozilla Personas; wow, that company is bad at naming things) would have definitely solved it if Mozilla had spent more than twenty minutes promoting it. But RSS did solve the fundamental issue most people seem to be getting at with these modern systems: it offered a way to collate and display updates from a wide variety of mutually incompatible Internet sources in one place.

It’s an incredibly simple pitch, even to non-technical people: display your YouTube subscriptions, Twitter follows, blogs you’re interested in, and news sites that you read all in one place, in software that you control.
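And that collation step really is simple. Here's a rough sketch of the core of an RSS aggregator using only Python's standard library (feeds are inline strings here; a real reader would fetch them over HTTP and also handle Atom, dates, deduplication, and caching):

```python
import xml.etree.ElementTree as ET

def parse_rss(xml_text):
    """Return (feed_title, [(item_title, link), ...]) from an RSS 2.0 document."""
    channel = ET.fromstring(xml_text).find("channel")
    feed_title = channel.findtext("title", default="(untitled)")
    items = [(item.findtext("title", default=""), item.findtext("link", default=""))
             for item in channel.findall("item")]
    return feed_title, items

def aggregate(feeds):
    """Merge items from many feeds into one river, tagged by source feed."""
    river = []
    for xml_text in feeds:
        source, items = parse_rss(xml_text)
        river.extend((source, title, link) for title, link in items)
    return river
```

Everything else a reader does (polling, read/unread state, rendering) is built on top of that one merge.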

The problem is that operating a “platform” rather than a website got to be too profitable, and suddenly the goal shifted from serving useful content to make you want to come back to a site to serving enough content that you never want to leave to begin with. Many people believe that if Google had made Reader the center of their social strategy rather than killing it to pursue a short-sighted attempt to compete directly with Facebook, we could be looking at a much healthier Internet today (and Google probably could be earning a lot more money than they currently are, considering the abysmal adoption rate of modern Google services is often argued to be directly linked to fear of shutdown).[1]

Personal websites died for the mainstream because Facebook, Twitter, and Instagram offered a better interface for the average consumer. But they could be brought back by a system that made the good parts of those sites interoperable. Frankly, this is the kind of thing that I want to see Mozilla pursuing again, not...whatever the heck they’re doing now. (You go to their website and they’re selling Pocket, which is basically a bad centralized version of what I’m talking about; a rebadged VPN service; an email alias service; and Firefox. What happened to the people who tried to do things like Persona?)

[1]: https://www.theverge.com/23778253/google-reader-death-2013-r...


> Everyone is talking about things like “ActivityPub” and “interoperability” and “personalized algorithms” nowadays but RSS supported many of those features twenty years ago.

RSS is read-only, ActivityPub allows back-and-forth interaction between servers. The two are not comparable.


Do normal people give a shit?

Like if you ask a cashier at McDonald’s, will they have any clue what you’re talking about?

Walk into an office. Ask the receptionist, are they going to care at all?

Do commenters on hacker news ever have conversations with the bulk of humanity in order to have any perspective?

