
Rust is over 10 years old now. It has a track record of delivering what it promises, and a very satisfied growing userbase.

OTOH static analyzers for C have been around for longer than Rust, and we're still waiting for them to disprove Rice's theorem.

AI tools so far are famous for generating low-quality code and bogus vulnerability reports. They may eventually get better and end up being used to make C code secure - see DARPA's TRACTOR program.


The applicability of Rice's theorem with respect to static analysis or abstract interpretation is more complex than you implied. First, static analysis tools are largely pattern-oriented. Pattern matching is how they sidestep undecidability. These tools have their place, but they aren't trying to be the tooling you or the parent claim. Instead, they are more useful to enforce coding style. This can be used to help with secure software development practices, but only by enforcing idiomatic style.

Bounded model checkers, on the other hand, are this tooling. They don't have to disprove Rice's theorem to work. In fact, they work directly with this theorem. They transform code into state equations that are run through an SMT solver. They are looking for logic errors, use-after-free, buffer overruns, etc. But they also fail code for non-terminating execution within the constraints of the simulation. If abstract interpretation through SMT states does not complete in a certain number of steps, then this is also considered a failure. The function or subset of the program only passes if the SMT solver can't find a satisfying state that triggers one of these issues, through any possible input or external state.

These model checkers also provide the ability for user-defined assertions, making it possible to build and verify function contracts. This allows proof engineers to tie in proofs about higher level properties of code without having to build constructive proofs of all of this code.
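
For readers who haven't seen this in practice, here's a minimal sketch of what such a harness might look like (the function, names, and bounds are invented for illustration; the nondet_* convention, __CPROVER_assume, and the flags shown are ordinary CBMC usage):

    #include <assert.h>
    #include <stddef.h>
    #include <string.h>

    /* Hypothetical function under verification: copies at most 15 bytes
       into a 16-byte buffer and NUL-terminates it. */
    static void copy_name(char dst[16], const char *src, size_t len)
    {
        if (len > 15)
            len = 15;
        memcpy(dst, src, len);
        dst[len] = '\0';
    }

    /* CBMC treats a nondet_* function with no body as an unconstrained,
       nondeterministic input. */
    size_t nondet_size(void);

    /* Verification harness: CBMC explores every input allowed by the
       assumption and reports any failing assertion or out-of-bounds
       access, together with a counterexample trace that triggers it. */
    void harness(void)
    {
        char src[32];                 /* contents left nondeterministic */
        size_t len = nondet_size();

        __CPROVER_assume(len <= sizeof(src));   /* assumed precondition */

        char dst[16];
        copy_name(dst, src, len);

        assert(strlen(dst) <= 15);              /* asserted postcondition */
    }

    /* Checked with something like:
       cbmc harness.c --function harness --bounds-check --pointer-check \
            --unwind 33 --unwinding-assertions
       (the unwind bound just needs to cover the longest loop here) */

The assumption plays the role of the precondition in the contract and the assertion the postcondition; CBMC either shows the assertion holds for every permitted input within the unwinding bound, or produces a concrete counterexample trace.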

Rust has its own issues. For instance, its core library relies on unsafe code, because it has to use unsafe operations to interface with the OS, or to build containers and memory-management models that simply can't be described within the borrow checker. This has led to its own CVEs. To strengthen the core library, core Rust developers have started using Kani -- a bounded model checker like those available for C or other languages.

Bounded model checking works. This tooling can be used to make either C or Rust safer. It can be used to augment proofs of theorems built in a proof assistant, extending them to the implementation. The overhead of model checking is about the same as that of unit testing, once you understand how to use it.

It is significantly less expensive to teach C developers how to model check their software using CBMC than it is to teach them Rust and then have them port code to Rust. Using CBMC properly, one can get better security guarantees than using vanilla Rust. Overall, an Ada + SPARK, CBMC + C, or Kani + Rust strategy, coupled with constructive theory and proofs about overall architectural guarantees, will yield equivalent safety and security. I'd trust such pairings of process and tooling -- regardless of language choice -- over any LLM-derived solutions.


Sure it's possible in theory, but how many C codebases actually use formal verification? I don't think I've seen a single one. Git certainly doesn't do anything like that.

I have occasionally used CBMC for isolated functions, but that must already put me in the top 0.1% of formal verification users.


It's not used more because it's unknown, not because it's difficult to use or impractical.

I've written several libraries and services now that have 100% coverage via CBMC. I'm quite experienced with C development and with secure development, and getting to this point always surfaces a handful of potentially exploitable errors I would have missed. The development overhead of reaching this point is about the same as the overhead of getting to 80% unit test coverage with traditional test automation.


You're describing cases in which static analyzers/model checkers give up, and can't provide a definitive answer. To me this isn't side-stepping the undecidability problem, this is hitting the problem.

C's semantics create dead-ends for non-local reasoning about programs, so you get inconclusive/best-effort results propped up by heuristics. This is of course better than nothing, and still very useful for C, but it's weak and limited compared to the guarantees that safe Rust gives.

The bar set for Rust's static analysis and checks is to detect and prevent every UB in safe Rust code. If there's a false positive, people file it as a soundness bug or a CVE. If you can make Rust's libstd crash from safe Rust code, even if it requires deliberately invalid inputs, it's still a CVE for Rust. There is no comparable expectation of having anything reliably checkable in C. You can crash C's stdlib by feeding it invalid inputs, and it's not a CVE -- you're just told not to do that. Static analyzers are allowed to have false negatives, and that's considered normal.

You can get better guarantees for C if you restrict the semantics of the language, add annotations/contracts for gaps in its type system, add assertions for things it can't check, and replace all the C code that the checker fails on with alternative idioms that fit the restricted model. But at that point it's no longer the silver bullet of "keep your C codebase, and just use a static analyzer"; it starts looking like a rewrite of C in a more restrictive dialect, and the more guarantees you want, the more code you need to annotate and adapt to the checks.

And this is basically Rust's approach. The unsafe Rust is pretty close to the semantics of C (with UB and all), but by default the code is restricted to a subset designed to be easy for static analysis to be able to guarantee it can't cause UB. Rust has a model checker for pointer aliasing and sharing of data across threads. It has a built-in static analyzer for memory management. It makes programmers specify contracts necessary for the analysis, and verifies that the declarations are logically consistent. It injects assertions for things it can't check at compile time, and gives an option to selectively bypass the checkers for code that doesn't fit their model. It also has a bunch of less rigorous static analyzers detecting certain patterns of logic errors, missing error handling, and flagging suspicious and unidiomatic code.

It would be amazing if C had a static analyzer that could reliably assure, with a high level of certainty and out of the box, that complex, heavily multi-threaded code doesn't contain any UB, doesn't corrupt memory, and won't have use-after-free, even if the code is full of dynamic memory (de)allocations, callbacks, thread-locals, on-stack data of one thread shared with another, and objects moved between threads, while mixing objects and code from multiple 3rd party libraries. Rust does that across millions of lines of code, and it's not even a separate static analyzer with specially-written proofs, it's just how it works.

Such analysis requires code with sufficient annotations and restricted to design patterns that obviously conform to the checkable model. Rust had a luxury of having this from the start, and already has a whole ecosystem built around it.

C doesn't have that. You start from a much worse position (with mutable aliasing, const that barely does anything, and a type system without ownership or any thread safety information) and need to add checks and refactor code just to catch up to the baseline. And in the end, with all that effort, you end up with a C dialect peppered with macros, and merely fix one problem in C, without getting additional benefits of a modern language.

CBMC+C has a higher ceiling than vanilla Rust, and SMT solvers are more powerful, but the choice isn't limited to C+analyzers vs only plain Rust. You still can run additional checkers/solvers on top of everything Rust has built-in, and further proofs are easier thanks to being on top of stronger baseline guarantees and a stricter type system.


If we mark any case that might be undecidable as a failure case, and require that code be written that can be verified, then this is very much sidestepping undecidability by definition. Rust's borrow checker does the same exact thing. Write code that the borrow checker can't verify, and you'll get an error, even if it might be perfectly valid. That's by design, and it's absolutely a design meant to sidestep undecidability.

Yes, CBMC + C provides a higher ceiling. Coupling Kani with Rust results in the exact same ceiling as CBMC + C. Not a higher one. Kani compiles Rust to the same goto-C that CBMC compiles C to. Not a better one. The abstract model and theory that Kani provides are far more strict than what Rust provides with its borrow checker and static analysis. It's also more universal, which is why Kani works on both safe and unsafe Rust.

If you like Rust, great. Use it. But, at the point of coupling Kani and Rust, it's reaching safety parity with model-checked C, not surpassing it. That's fine. Similar safety parity can be reached with Ada + SPARK, C++ and ESBMC, Java and JBMC, etc. There are many ways of reaching the same goal.

There's no need to pepper C with macros or to require a stronger type system with C to use CBMC and to get similar guarantees. Strong type systems do provide some structure -- and there's nothing wrong with using one -- but unless we are talking about building a dependent type system, such as what is provided by Lean 4, Coq, Agda, etc., it's not enough to add equivalent safety. A dependent type system also adds undecidability, requiring proofs and tactics to verify the types. That's great, but it's also a much more involved proposition than using a model checker.

Rust's H-M type system, while certainly nice for what it is, is limited in what safety guarantees it can make. At that point, choosing a language with a stronger type system or not is a style choice. Arguably, it lets you organize software in ways that would require manual work in other languages. Maybe this makes sense for your team, and maybe it doesn't. Plenty of people write software in Lisp, Python, Ruby, or similar languages with dynamic and duck typing. They can build highly organized and safe software. In fact, such software can be made safe, much as C can be made safe, with the appropriate application of process and tooling.

I'm not defending C or attacking Rust here. I'm pointing out that model checking makes both safer than either can be on their own. As with my original reply, model checking is something different than static analysis, and it's something greater than what either vanilla C or vanilla Rust can provide on their own. Does safe vanilla Rust have better memory safety than vanilla C? Of course. Is it automatically safe against the two dozen other classes of attacks by default and without careful software development? No. Is it automatically safe against these attacks with model checking? Also no. However, we can use model checking to demonstrate the absence of entire classes of bugs -- each of these classes of bugs -- whether we model check software written in C or in Rust.

If I had to choose between model checking an existing codebase (git or the Linux kernel), or slowly rewriting it in another language, I'd choose the former every time. It provides, by far, the largest gain for the least amount of work.


People innately admire difficult skills, regardless of their usefulness. Acrobatic skateboarding is impressive, even when it would be faster and safer to go in a straight line or use a different mode of transport.

To me, skill and effort are misplaced and wasted when they're spent on manually checking invariants that a compiler could check better automatically, or on implementing clever workarounds for language warts that no longer provide any value.

Removal of busywork and pointless obstacles won't make smart programmers dumb and lazy. It allows smart programmers to use their brainpower on bigger, more ambitious problems.


These types of comments always remind me that, every time, we forget where we came from in terms of computation.

It's important to remember Rust's borrow checker was computationally infeasible 15 years ago. C & C++ are much older than that, and they come from an era where variable name length affected compilation time.

It's easy to publicly shame people who have done hard things for a long time in the light of newer tools. However, many people who like these languages have been using them since before the languages we champion today were even ideas.

I personally like Go these days for its stupid simplicity, but when I'm going to do something serious, I'll always use C++. You can fight me, but you'll never pry C++ from my cold, dead hands.

For the record, I don't like C & C++ because they are hard. I like them because they provide a more transparent window into the processor, which is a glorified, hardware-implemented PDP-11 emulator.

Last, we shall not forget that all processors are C VMs, anyway.


> It's important to remember Rust's borrow checker was computationally infeasible 15 years ago.

The core of the borrow checker was being formulated in 2012[1], which is 13 years ago. No infeasibility then. And it's based on ideas that are much older, going back to the 90s.

Plus, you are vastly overestimating the expense of borrow checking. It is very fast, and not the reason Rust's compile times are slow. You absolutely could have done borrow checking much earlier, even with less computing power available.

1: https://smallcultfollowing.com/babysteps/blog/2012/11/18/ima...


> It's important to remember Rust's borrow checker was computationally infeasible 15 years ago.

IIRC borrow checking usually doesn't consume that much compilation time for most crates - maybe a few percent or thereabouts. Monomorphization can be significantly more expensive and that's been much more widely used for much longer.


> It's important to remember Rust's borrow checker was computationally infeasible 15 years ago. C & C++ are much older than that, and they come from an era where variable name length affected compilation time.

I think you're setting the bar a little too high. Rust's borrow-checking semantics draw on much earlier research (for example, Cyclone had a form of region-checking in 2006); and Turbo Pascal was churning through 127-character identifiers on 8088s in 1983, one year before C++ stream I/O was designed.

EDIT: changed Cyclone's "2002" to "2006".


I remember; I was there coding in the 1980s, which is how I know C and C++ were not the only alternatives, just the ones that eventually won in the end.

> the processor, which is a glorified, hardware implemented PDP-11 emulator.

This specific claim seems like gratuitously rewriting history.

I can get how you'd feel C (and certain dialects of C++) are "closer to the metal" in a certain sense: C supports very few abstractions, and with fewer abstractions there are fewer "things" between you and "the metal". But this is as far as it goes. C does not represent - by any stretch of imagination - an accurate computational or memory model of a modern CPU. It does stay close to the PDP-11, but calling modern CPUs "glorified hardware emulators of the PDP-11" is just preposterous.

The PDP-11 was an in-order CISC processor with no virtual memory, cache hierarchy, branch prediction, symmetric multiprocessing, or SIMD instructions. Some modern CPUs (namely the x86/x64 family) do emulate a CISC ISA on top of something that is probably more RISC-like, but that's as far as we can say they are trying to behave like a PDP-11 (and even then, the intention was to behave like a first-gen Intel Pentium).


> we shall not forget that all processors are C VMs

This idea is some 10 years behind. And no, thinking that C is "closer to the processor" today is incorrect.

It makes you think it is close, which in some sense is even worse.


> This idea is some 10yrs behind.

Akshually[1] ...

> And no, thinking that C is "closer to the processor" today is incorrect

THIS thinking is about 5 years out of date.

Sure, this thinking you exhibit gained prominence and got endlessly repeated by every critic of C who once spent a summer doing a C project in undergrad, but it's been more than 5 years since this opinion was essentially nullified by:

    Okay, if C is "not close to the processor", what's closer?
Assembler? After all, if everything else is "just as close as C, but not closer", then just what kind of spectrum are you measuring on, that has a lower bound which none of the data gets close to?

You're repeating something that was fashionable years ago.

===========

[1] There's always one. Today, I am that one :-)


Standard C doesn't have inline assembly, even though many compilers provide it as an extension. Other languages do.
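
For example, GCC and Clang's extended asm is exactly such an extension (a rough sketch, x86-64 only; read_tsc is just an illustrative name):

    #include <stdint.h>

    /* GCC/Clang extended inline asm -- a compiler extension, not ISO C.
       Reads the x86-64 timestamp counter; "=a" and "=d" bind the
       outputs to EAX and EDX as RDTSC requires. */
    static inline uint64_t read_tsc(void)
    {
        uint32_t lo, hi;
        __asm__ volatile ("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }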

> After all if everything else is "Just as close as C, but not closer", then just what kind of spectrum are you measuring on

The claim about C being "close to the machine" means different things to different people. Some people literally believe that C maps directly to the machine, when it does not. This is just a factual inaccuracy. For the people that believe that there's a spectrum, it's often implied that C is uniquely close to the machine in ways that other languages are not. The pushback here is that C is not uniquely so. "just as close, but not closer" is about that uniqueness statement, and it doesn't mean that the spectrum isn't there.


> Some people literally believe that C maps directly to the machine, when it does not.

Maybe they did, 5 years (or more) ago when that essay came out. It was wrong even then, but repeating it is even more wrong.

> This is just a factual inaccuracy.

No. It's what we call A Strawman Argument, because no one in this thread claimed that C was uniquely close to the hardware.

Jumping in to destroy the argument when no one is making it is almost a textbook example of strawmanning.


Claiming that a processor is a "C VM" implies that it's specifically about C.

Lots of languages at a higher level than C are closer to the processor in that they have interfaces for more instructions that C hasn't standardized yet.

> Lots of languages at a higher level than C are closer to the processor in that they have interfaces for more instructions that C hasn't standardized yet.

Well, you're talking about languages that don't have standards, they have a reference implementation.

IOW, no language has standards for processor intrinsics; they all have implementations that support intrinsics.


> Okay, if C is "not close to the processor", what's closer?

LLVM IR is closer. It's still higher level than assembly.

The problem is thus:

    char a, b, c;
    c = a + b;

Could not be more different between x86 and ARM
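
To make that concrete (a hedged illustration, not an exhaustive account of the codegen differences): even the behaviour of that snippet isn't pinned down, because the signedness of plain char is implementation-defined and differs between common x86 and ARM ABIs.

    #include <stdio.h>

    int main(void)
    {
        char a = 100, b = 100;
        char c = a + b;   /* promoted to int, added, converted back to char */

        /* Plain char's signedness is implementation-defined: on typical
           x86 ABIs char is signed, so 200 wraps to -56 here; on typical
           32/64-bit ARM ABIs char is unsigned, so c stays 200. */
        printf("%d\n", (int)c);
        return 0;
    }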


> LLVM IR is closer. Still higher level than Assembly

So your reasoning for repeating the once-fashionable statement is that "an intermediate representation that no human codes in is closer than the source code"?


To me a compiler's effort is misplaced and wasted when it's spent on checking invariants that could be checked by a linter or a sidecar analysis module.

Checking of whole-program invariants can be accurate and done basically for free if the language has suitable semantics.

For example, if a language has non-nullable types, then you get this information locally for free everywhere, even from 3rd party code. When the language doesn't track it, then you need a linter that can do symbolic execution, construct call graphs, data flows, find every possible assignment, and still end up with a lot of unknowns and waste your time on false positives and false negatives.
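
In C terms, the closest you usually get is an annotation that catches only the shallow cases (a sketch using the GCC/Clang nonnull attribute; the function here is made up):

    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* A compiler-extension approximation of a non-nullable type:
       GCC/Clang's nonnull attribute. Only the shallow cases (a literal
       NULL at the call site) are reliably diagnosed by -Wnonnull. */
    __attribute__((nonnull))
    static size_t label_len(const char *label)
    {
        size_t n = 0;
        while (label[n] != '\0')
            n++;
        return n;
    }

    int main(void)
    {
        printf("%zu\n", label_len("hello"));

        /* label_len(NULL);  // a literal NULL gets flagged at compile time */

        const char *maybe = getenv("LABEL");  /* may be NULL at runtime */
        /* The guard below is required for correctness, but nothing in C's
           type system forces it; a non-nullable parameter type would have
           rejected an unguarded call at compile time. */
        printf("%zu\n", label_len(maybe ? maybe : ""));
        return 0;
    }

Everything beyond the literal-NULL case is left to an external analyzer that has to reconstruct the data flow the type system never captured.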

Linters can't fix language semantics that create dead-ends for static analysis. It's not a matter of trying harder to make a better linter. If a language doesn't have clear-enough aliasing, immutability, ownership, thread-safety, etc. then a lot of analysis falls apart. Recovering required information from arbitrary code may be literally impossible (Rice's theorem), and getting even approximate results quickly ends up requiring whole-program analysis and prohibitively expensive algorithms.

And it's not even an either-or choice. You can have robust checks for fundamental invariants built into the language/compiler, and still use additional linters for detecting less clear-cut issues.


> Linters can't fix language semantics that create dead-ends for static analysis. It's not a matter of trying harder to make a better linter. If a language doesn't have clear-enough aliasing, immutability, ownership, thread-safety, etc. then a lot of analysis falls apart

This assertion is known to be disproven. seL4 is a fully memory-safe (with even more safety baked in) major systems-programming behemoth that is written in C + annotations, where the analysis is conducted in a sidecar.

To obtain extra safety (but still not as much as seL4) in Rust, you must add a sidecar in the form of Miri. Nobody proposes adding Miri into Rust.

Now, it is true that seL4 is a pain in the ass to write, compile, and check, but there is a lot of unexplored design space in the spectrum between Rust, Rust + Miri, and seL4.


If the compiler is not checking them then it can't assume them, and that reduces the opportunities for optimizations. If the checks don't run in the compiler then they're not running every time; and if you do want them to run every time, then they may as well live in the compiler instead.

It seems likely that C++ will end up in a similar place as COBOL or Fortran, but I don't see that as a good future for a language.

These languages are not among the top contenders for new projects. They're a legacy problem, and are kept alive only by a slowly shrinking number of projects. It may take a while to literally drop to zero, but it's a path of exponential decay towards extinction.

C++ has strong arguments for sticking around as a legacy language for several too-big-to-rewrite C++ projects, but it's becoming less and less attractive for starting new projects.

C++ needs a better selling point than being a language that some old projects are stuck with. Without growth from new projects, it's only a matter of time until it's going to be eclipsed by other languages and relegated to shrinking niches.


It will take generations to fully bootstrap compiler toolchains, language runtimes, and operating systems that no longer depend on either C or C++.

Also, depending on how AI-assisted tooling evolves, I think it is not only C and C++ that will become a niche.

I already see this happening with the amount of low-code/no-code workflows augmented with AI that are currently trending in SaaS products.


Apple got spooked by GPLv3's anti-Tivoization clauses and stopped updating GNU tools in 2007.

macOS still has a bunch of GNU tools, but they appear to be incompatible with GNU tools used everywhere else, because they're so outdated.


And Apple is doing a lot of Tivoization these days. They're not yet actually stopping apps that they haven't "notarized", but they're not making it easier. That's one of the many reasons I left the Mac platform, both privately and at work. The other reason was more and more reliance on the iCloud platform for new features (many of its services don't work on other OSes like Windows and Linux - I use all of those too).

The problem with the old tools is that I don't have admin rights at work so it's not easy to install coreutils. Or even homebrew.

I can understand why they did it though. Too many tools these days advocate just piping some curl output into a root shell, which is pretty insane. Homebrew does this too.


Couldn't you simply use macOS without the iCloud features? Which features require iCloud to work?


You can but there's just not much point anymore.

I don't remember all the specifics, but every time there was a new macOS I could cross most of the new features off. Nope, this one requires iCloud or an Apple ID. Nope, this one only works with other Macs or iPhones. Stuff like that. The Mac didn't use to be a walled garden. You can still go outside of their ecosystem (unlike on iOS), but then there's not much point. You're putting a square peg in a round hole.

Now, Apple isn't the only one doing this. Microsoft is making it ever harder to use Windows without a Microsoft account. That's why I'm gravitating more and more to FOSS OSes. But there are new problems now: with Firefox on Linux I constantly get captcha'd, M365 (work) blocks random features or keeps signing me out, and my bank complains my system is not 'trusted'. Euh, what about trusting your actual customers instead of a mega corp? I don't want my data locked in or monitored by a commercial party.


The rexif crate supports editing, so you can apply rotation when resizing, and then remove the rotation tag from the EXIF data. Keeping EXIF isn't necessary for small thumbnails, but could be desirable for larger versions of the image.


Rust has a combo: people come for safety, stay for usability.

Languages struggle to win on usability alone, because outside of passion projects it's really hard to justify a rewrite of working software to get the same product, only with neater code.

But if the software also has a significant risk of being exploited, or is chronically unstable, or it's slow and making it multi-core risks making it unstable, then Rust has a stronger selling point. Management won't sign off a rewrite because sum types are so cool, but may sign off an investment into making their product faster and safer.


Generally speaking, I was more prone to agreeing with Rust haters: I thought the whole idea of how Rust lifetimes are implemented was flawed, and the borrow checker needlessly restrictive. I also disagreed with some other ideas, like the overreliance on generics for static dispatch, leading to large executables and slow compiles.

To be clear, I still think these criticisms are valid. However, after using the language in production, I've come to realize these problems are manageable in practice. The language is nice, decently well supported, and has a relatively rich ecosystem.

Every programming language/ecosystem is flawed in some way, and I think as an experienced dev you learn to deal with this.

It having an actual functioning npm-like package manager and build system makes building multiplatform software trivial -- something C++ lacks, which kills my desire to deal with that language on a voluntary basis.

The ecosystem is full of people who try to do their best and produce efficient code, and who try to understand the underlying problem and the machine. It feels like it still has a culture of technical excellence, and most libraries also seem to be well organized and documented.

This is in contrast to JS people, who often try to throw together something as fast as possible, and then market the shit out of it to win internet points, or Java/C# people who overcomplicate and obfuscate code by sticking to these weird OOP design pattern principles where every solution needs to be smeared across 5 classes and design patterns.


Unicode wanted the ability to losslessly round-trip every other encoding, in order to be easy to partially adopt in a world where other encodings were still in use. It merged a bunch of different incomplete encodings that used competing approaches. That's why there are multiple ways of encoding the same characters, and there's no overall consistency to it. It's hard to say whether that was a mistake. This level of interoperability may have been necessary for Unicode to actually win, and not be another episode of https://xkcd.com/927
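
For example, the precomposed and decomposed forms of "é" are both valid Unicode and represent the same user-visible character with different code points (a small sketch; the byte values are the standard UTF-8 encodings):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* The same user-visible character "é", in two valid Unicode forms
           (written out as UTF-8 bytes):
           precomposed  U+00E9        -> 0xC3 0xA9
           decomposed   U+0065 U+0301 -> 0x65 0xCC 0x81 */
        const char *precomposed = "\xC3\xA9";
        const char *decomposed  = "e\xCC\x81";

        /* Byte-wise they are different strings even though they render
           identically; treating them as equal requires normalization
           (NFC/NFD), which every Unicode consumer has to deal with. */
        printf("same bytes: %s\n",
               strcmp(precomposed, decomposed) == 0 ? "yes" : "no");
        return 0;
    }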


Why did Unicode want codepointwise round-tripping? One codepoint in a legacy encoding becoming two in Unicode doesn't seem like it should have been a problem. In other words, why include precomposed characters in Unicode?


It's quite the opposite. The number of high-quality maintained libraries is growing.

Rust keeps growing exponentially, but by Sturgeon's law for every one surviving library you're always going to have 9 crap projects that aren't going to make it. Unfortunately, crates.io sorts by keyword relevance, not by quality or freshness of the library, so whatever you search for, you're going to see 90% of crap.

There was always a chunk of libraries destined for the dustbin, but it wasn't obvious in the early days when all Rust libraries were new. But now Rust has survived long enough to outlive waves of early-adopter libraries, and to grow a pile of obviously dead ones. The ecosystem is so large that the old part is large too.

https://lib.rs/stats#crate-time-in-dev

Rust is now mainstream, so it's not a dozen libraries made by dedicated early adopters any more. People learning Rust publish their hello-world toys, class assignments, and their first voxel renderer they call a game engine. Startups drop dozens of libraries for their "ecosystem". Rust also lived through the peak of the cryptocurrency hype, so all these moon-going coins, smart contract VMs, and NFT exchanges now have a graveyard on crates.io.

When you run into dead libraries in Python or Java you don't think these languages are dying, it's just the particular libraries that didn't make it. JavaScript has fads and mass extinctions, and keeps going strong. Rust is old enough that it too has dead libraries, and big enough that it has both a large collection of decade-old libraries, as well as fads and fashions that come and go.


Google Pixel Buds have a translation feature, and a bunch of other "Gemini AI" gimmicks, available in the EU.

Apple managed to get approvals for medical devices and studies (highly regulated everywhere), custom radios and satellite communication (highly regulated everywhere).

Apple already has machine translation, voice recognition, voice recording, and dictation features shipped in the EU.

But when the EU hurt Apple's ego by daring to demand that users get the freedom to run the software they want on devices they bought (which could break them out of a very lucrative duopoly), Apple is suddenly a helpless baby who cannot find a way to make a new UI available in the EU.


The EU has not declared that Android gatekeeps headphone technology, so the comparison to Pixel Buds is totally irrelevant. There is no interop requirement placed on them.


So all Apple needs to do is stop gatekeeping headphone technology.


What is the definition of gatekeeping technology?


In Apple's case it mostly means that the APIs Apple uses for the AirPods should be available for others to use as well. Apple is not allowed to deny or punish headphones from other manufacturers that want to use those APIs.


This is not quite the problem.

There are multiple issues at play.

The two main issues are:

1 - Sometimes processing is done in private cloud servers for complex translations. Apple is not allowed to do that for EU users. Full Stop. Even if it were not prohibited by EU law, you still have issue 2.

2 - It's unclear whether or not Apple can charge. If another dev uses the APIs, and that triggers a call to the cloud, who pays for the inference? Until Apple gains clarity on that, charging could be considered "punishing" a dev for using their APIs.

My own opinion? Issue 2 will get worked out, but it won't matter, because I don't think the EU will move at all on issue 1. I think they see data privacy as serving a twofold purpose. One, protecting their citizens from US surveillance, i.e. national security. And two, it's part of their long-term strategy to decrease the influence of US tech firms in the EU. Both of which I think European policymakers and European common people feel are critical to Europe.


On-device API use is what is relevant here; services such as servers and inference services are out of scope. The DMA clearly allows companies to charge for service use, but they cannot deny API use to any competitor who wants, for example, to use the quick-pairing feature or low-latency communication.


They can design the API in such a way that you can provide your own inference solution, or just disable the cloud inference. This is purely a business decision.


It's some combination of: a market companies in the EU care about, where Apple sells a product that has some amount of market share, and where the threshold and market definition are totally made up and seem to only impact foreign multinationals.


If it's any consolation, Apple is in a league of their own. Any fair, proportional legislation would impact them more than anyone else.


So Apple is welcome to divest AirPods into a separate company and problem solved. Who knows, "AirPods Inc" may discover there are a great many phone brands out there that could use a nice integration and extra features. Win for consumers.


I agree, the Beats takeover should have never happened. The US is basically allowing everything to be swallowed by big-tech.


> the Beats takeover should have never happened.

I agree from a business perspective; those headphones were all brand and a bad fit for Apple from a quality perspective. Do we really need regulators deciding when businesses are wasting their money?


The integration works so well because AirPods and Apple phones use a protocol that isn't Bluetooth, their "Magic protocol". You have to own the whole stack to make it work so well.


I wish this would happen. I loved my AirPods Pro 2, but the Android/Windows support was so abysmal that I eventually ditched them.


That's literally the Boeing model


So, the problem is indeed the EU.


On the contrary, the EU is the solution.


Or, you know, Linux/Windows PCs - that would definitely be a big win.


So many other features, such as iPhone mirroring, also don't work. Quite ridiculous.


Vote with your wallet?


Easier said than done, but this relatively small annoyance is not a dealbreaker that would make me change phones. I had it before this feature was announced :D


Half of the new Google Pixel AI features are not enabled in the EU. Magic Cue, text image editing... These are on-device features too, so I'm really not sure why.

I'm a disappointed Pixel 10 owner living in Germany.


Instead of these conspiracy theories, the more likely answer is that it takes time to get through these additional regulations, and they didn't want that to hold back their US rollout. It's a pattern that we've seen plenty of times already in the tech industry.


So Samsung could nail the regulation part with their earbuds, but Apple with Airpods can't.


Samsung isn’t a “gatekeeper” under the DMA. The regulations in question here don’t apply to them.


Most probably, as you say, they can't ship the capability yet, so they're blaming the regulations.

Or really the headphones actively register and send data outside of the EU. There's been some pushback recently on this front (ie. recent MSFT case [1]) since it's a known fact in the field that the approved 2023 EU-US DPF is basically BS, as it doesn't really address the core issues for which US companies were deamed not-compatible with GDPR.

[1] https://www.senat.fr/compte-rendu-commissions/20250609/ce_co...


In my quest to check if deamed was the correct spelling, I stumbled upon an interesting read https://reginajeffers.blog/2024/03/04/damned-or-deemed-or-de...


Indeed. But in my case it’s quite easy as it was a typo. Deemed is the correct one


EU isn't forcing Google to let random 3rd parties replace Gemini AI with TotallyHonestAndNotStealingYourData Corp's AI.


You already can replace the default assistant app with any app that declares itself as an assistant on Android, and have always been able to.


Google is a designated gatekeeper carrying all the same DMA obligations for:

Google Search, YouTube, Chrome, Shopping, Maps, Ads, the Play Store, and Android.

https://digital-markets-act.ec.europa.eu/gatekeepers_en


You can replace Gemini with Perplexity or whatever you want on your Android phone, that includes the main system-wide assistant.


It's almost like you can only shake down your victims so many times before they say no mas.


Did you just describe Apple as a victim?


GDPR is about collection and processing of personally identifiable information. These are specific legal terms that depend on the context in which the data is collected and used, not just broadly any data anywhere that might have something to do with a person.

GDPR is aimed at stopping companies that build user databases from completely ignoring security, accuracy, and user complaints, and from selling anything to anybody while lying about it. It doesn't limit individual people's personal use of data.


GDPR doesn’t mention “personally identifiable information” once; it’s concerned with personal data, which is “any information relating to an identified or identifiable natural person (‘data subject’)”.

The rest is correct: the restrictions are aimed at organisations, not individuals.

[1] https://eur-lex.europa.eu/eli/reg/2016/679/oj/eng#art_4.tit_...


The restrictions are not aimed at organisations but at protecting individuals.

https://www.gov.uk/government/publications/domestic-cctv-usi...

"If your CCTV system captures images of people outside the boundary of your private domestic property – for example, from neighbours’ homes or gardens, shared spaces, or from public areas – then the GDPR and the DPA will apply to you. You will need to ensure your use of CCTV complies with these laws. If you do not comply with your data protection obligations you may be subject to appropriate regulatory action by the ICO, as well as potential legal action by affected individuals."

You, as an individual, have data protection obligations if your Ring doorbell captures audio/video of someone outside your property boundaries. The Apple translation service seems analogous.


The ICO is pretty zealous though in this regard. To quote recital 18:[1]

This Regulation does not apply to the processing of personal data by a natural person in the course of a purely personal or household activity and thus with no connection to a professional or commercial activity.

[1] https://eur-lex.europa.eu/eli/reg/2016/679/oj/eng#rct_18


It's likely taken the view that "purely personal or household activity" only covers the recording of audio/video in a domestic setting.


GDPR does cover individuals' use of e.g. Ring doorbells, insofar as they record video and audio outside of your own property. This would seem to be analogous.

GDPR is aimed at protecting _individuals'_ personal information, irrespective of what or who is collecting or processing it.


It applies to Ring and not other doorbell cameras because Amazon is collecting and selling access to Ring video feeds.

