pornel's comments | Hacker News

Instagram is also full of these scam ads, and they remove maybe 1 in 20 I report. Many of them are obviously fake or hacked accounts, e.g. having "Nail Salon" as the business type on their profile, but their posts are promising $$$ if you have an account on Binance.


Apple + Google form a duopoly. Apple has locked down iOS to let them do whatever they want and overcharge as much as they like, and Google has no incentive to be any better, because there's no serious 3rd contender*.

For typical users not buying Apple means having to compromise on privacy with Google, which isn't a great option either. Both are trying their best to create vendor lock-in and make it hard for users to leave.

From a developer's perspective, not having access to iOS users is a major problem. Apple inserts itself between users and developers, even where neither users nor developers want it. Users and devs have no bargaining power there, because the Play Store does the same, and boycotting both stores has a bunch of downsides for users and devs.

*) I expect people on HN to say that some AOSP fork with F-Droid is perfectly fine, but that's not mainstream enough to make Apple and Google worried, especially since Google has created its certification program and proprietary Play Services to make degoogling phones difficult.


It's an interesting perspective, but as I understand it, this case is not concerned with a developer's bargaining power against their distributor. It is concerned with the impact on consumers (fewer choices, higher prices). There's certainly no argument to be made that consumers lack a variety of apps and app features.

I care about you as a developer, but I'm not sure this case does. Maybe I'm thinking about it wrong.


I think these are two sides of the same coin, because ultimately developers must pass the extra costs on to users. The devs aren't subsidizing the 30%/15% cut; it's a tax that users pay.

App Store rules and the greedy cut also make certain kinds of apps and lower-margin businesses impossible to create in such an environment, so this blocks innovations that could have benefited users.

When Apple bans competitors, blocks interoperability, drags their feet on open standards, and gives their own apps special treatment nobody else can have, then users miss out on potentially better or cheaper alternatives. This helps Apple keep users locked in, not innovate when they don't want to, and overcharge for services users can't replace.

All of that was more forgivable when smartphones were just a novelty, and digital goods were just iTunes songs. But now a lot of services have moved online. Mobile phones have become a bigger platform than desktop computers, and for billions of people they are their primary or the only computing device.


> *) I expect people on HN to say that some AOSP fork with f-droid is perfectly fine

As you explained yourself, it's not a real alternative, because it relies on Google itself, who can always decide to break it. A real alternative is GNU/Linux phones, like the Librem 5 and PinePhone. And yes, they are very niche and not easy to make.


Google has a huge incentive to compete, because Apple has 50% of the US market, which is the most consequential market in the world.


rustup makes it easy to have multiple Rust versions installed side by side, so you can test compatibility with older versions, or try beta or nightly builds (e.g. `cargo +1.77.0 test`). I think that functionality is rustup-specific and not easily replicable with tarballs or apt.

Apart from that rustup makes it easy to add and remove optional components, libraries for cross-compilation, and update Rust itself. These things could in theory be done with apt if Debian repackaged all of these components, but they didn't. Having one way to manage Rust installation that works the same on all platforms makes it easier to teach Rust and provide install instructions for software.
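
For illustration, roughly what that workflow looks like (a hypothetical session, assuming rustup is installed; the specific version and target are just examples):

```sh
# Install additional toolchains side by side with the default one
rustup toolchain install 1.77.0
rustup toolchain install nightly

# Run the same test suite against a specific toolchain
cargo +1.77.0 test
cargo +nightly test

# Add optional components and cross-compilation targets
rustup component add clippy
rustup target add aarch64-unknown-linux-gnu

# Update every installed toolchain in place
rustup update
```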


This whole legislation probably wouldn't exist if Apple hadn't been so greedy and manipulative all these years. Apple has been bundling real security with brazen anti-competitive behaviors, pretending they're all or nothing.

In Apple's framing, if you don't want to be overcharged 10x for payments, your only alternative is fraud and malware (as opposed to processing payments at a market rate, allowing vetted payment providers, having APIs for external subscriptions that integrate with iOS's subscription management, etc.).

Sandboxing and app reviews for malware are great, but Apple bundled them with rejecting apps for competing with Apple ("duplicating" its functionality), for mentioning competitors' names or lower prices elsewhere, or for showing nipples anywhere (nipples can't hack your iPhone). They've been intentionally blurring the line between what they select as appropriate for their boutique child-friendly store, and what users have a right to do with devices they thought they owned.


Looks like the author is very focused on maximally optimizing some very compute-heavy signal-processing code, and doesn't care much about what is used outside of the compute kernel.

So it may be very true that for their work it’s all the same whether they use C++, or Python with numeric libraries, or some niche DSL, or a macroassembler.

But use of C++ is much broader than pure compute, so this perspective does not generalize to C++ or Rust as a whole.


But that is the thing, actually.

Little by little, the domains where C or C++ would have been the go-to option during the 1990s, with the exception of areas where VB and Object Pascal (in their Mac/PC variants) enjoyed adoption, have been slowly taken away from them.

Bjarne created C with Classes for his distributed systems research at Bell Labs; nowadays the majority of the CNCF project landscape is written in something else.

C++ GUI frameworks were all over the place during the 1990s; nowadays one has to go to third parties, as platform vendors rather focus on managed languages. Even C++/WinRT going into maintenance mode is an acknowledgement from Microsoft that, outside Redmond, XAML/C++ was a failure in terms of adoption.

Long term we might be left with LLVM, GCC, Unreal, Skia, CUDA, and a couple of other key projects that keep C++ relevant and that is about it.


Yeah, if you define the utility of C++ as narrowly as these very highly micro-optimized use cases, then Rust can easily capture 100x the mind share of C++, because programmers generally just don't need to care that much. And really, seeing what advanced Rust programmers can do when armed with godbolt, I'm not convinced even those use cases demand C++.


#2 is better than just being screwed for nothing, but a discount for giving up privacy is equivalent to others paying extra to keep their privacy. To me privacy should be a right, and not a premium tier.


What if I'm willing to forgo my privacy in exchange for money? There's no real difference between saving $10 and getting paid $10, so "privacy should be a right, and not a premium tier" is fundamentally incompatible with "people should be allowed to voluntarily sell their data".


The law also prevents you from giving up your freedom for money; a contract like "I accept to be the indentured servant of $CORPORATION in exchange for $50k" is void. Similarly it should prevent you from giving up your privacy like that.


This is big, because Rust and Debian have a philosophical disagreement about how software should be updated, and so far this has left Debian users with a Rust package that is not supported by a significant portion of the Rust ecosystem.

Rust has an "evergreen" approach to updates, like Chrome, focusing on making updates so easy and backwards-compatible that there's nothing stopping users from always using the latest version. Debian prefers quite the opposite, and would rather keep outdated software with known bugs than risk updating and bringing in new bugs.

Having rustup-init is a compromise, allowing Debian users to have an up-to-date non-Debian Rust version, without `curl | sh`.


Rust's extremely heavy bias towards statically compiled binaries is also a notable major major major cross-roads here. The rust/ tree in Debian is big and heavy, tons of stuff. But unlike the entire rest of Debian it's entirely & purely -dev packages. Because Rust truly has zero interest in participating in conventional, well-managed shared library systems. I-caches & broad system updating be damned: recompile & reduplicate the world, I guess, if you believe Rust.

Very very very interested to see what happens with this fanaticism as it faces its new challenge, WASI. I can think of no language less competent & less disposed to WASI librarification than Rust.


I don't think rust is 'fanatic' about it, there's no cultural or organisational push of 'thou shalt static link'. It's simply not a big priority to stabilise an ABI, in large part because it's quite hard to do right, especially with the other things rust does prioritise. And the rest of the world doesn't seem to think it's a huge deal either. Distro maintainers are more or less the only people I've seen who really care about it.


> I don't think rust is 'fanatic' about it, there's no cultural or organisational push of 'thou shalt static link'.

Saying Rust users have a "fanatic" ideology about static linking is a stretch. But Rust is indeed now firmly wedded to recompiling everything. This is true despite the fact that ever-growing compile times are a major complaint about the language.

What Rust users are fanatic about is monomorphisation. That boils down to the compiler implementing generic types using C++-style templates instead of C++-style vtables. Monomorphisation means the compiler produces a custom version of most libraries for your application, or more precisely for the types your application uses. It probably contributes to its speed. But since they are customised to your application there is no point sharing them, so shared libraries are kinda pointless.

Like C++, Rust supports both the vtable and the template style, but the assumption that the compiler knows the size of every object is baked in pretty deep. Type parameters have the `Sized` bound by default, for example, and a standard trick to get around the borrow checker is to copy everything, which you can only do if you know its size. I don't see it changing now - the language will live or die by the choice.
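
A rough sketch of the two dispatch styles (illustrative only; the function names are made up):

```rust
use std::fmt::Display;

// Monomorphised: the compiler emits a separate copy of this function
// for every concrete T it is instantiated with, so the machine code
// ends up specialised for (and embedded in) your application.
fn show_generic<T: Display>(value: T) {
    println!("{value}");
}

// Trait object: one compiled function that dispatches through a vtable
// at runtime - closer to the shape a stable shared-library ABI wants.
fn show_dyn(value: &dyn Display) {
    println!("{value}");
}

fn main() {
    show_generic(42);      // instantiates show_generic::<i32>
    show_generic("hi");    // instantiates show_generic::<&str>
    show_dyn(&42);         // single function, dynamic dispatch
}
```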


This, pretty much. You can code a shared object/dynamically linked library in Rust, and it will work as a drop-in replacement for one that's coded in C. Rust also has extensive support for interfacing with C shared objects.
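
For instance, a minimal sketch of both directions (hypothetical names; assumes `crate-type = ["cdylib"]` in Cargo.toml for the exported side):

```rust
// Exported with a C ABI and an unmangled symbol, so the resulting
// .so/.dylib/.dll can be loaded and called like any C library.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

// The other direction: declaring and calling a function from an
// existing C shared library (libm here) through FFI.
#[link(name = "m")]
extern "C" {
    fn cos(x: f64) -> f64;
}

pub fn cos_via_c(x: f64) -> f64 {
    unsafe { cos(x) }
}
```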


The shared vs static libs divide has been ongoing since libraries were a thing. I'm not an OS maintainer/packager, but from the outside it really seems most of Debian's headaches come from wanting to maximize compatibility through shared libraries.

Which is an entirely valid approach, but it seems like things would be a lot easier if programs were all statically compiled.


Rust is not the only language implementation that wants to avoid the "conventional" shared library systems. Python did so, Node did so, and probably many others did or plan to do so. The conventional systems are most suitable for end-user applications, while they are annoying at best for many other use cases.


> Python did so

Nah, python is one of the few languages with an ecosystem that can still be properly packaged, even after pep517. All libraries are scripts that are installed (and bytecode-compiled) globally.

Node is a pain in the ass due to the granularity of libraries, how frequently they update, and how developers expect a specific pinned version (and vendors are flaky about backwards-compat), but there's nothing stopping it from having the same treatment as Python.

Dynamic linking isn't really an end-user feature, but rather a system integrator's. It makes it easier to make sure everything behaves the same, and can share the same resources, which is why Apple particularly wanted an ABI for Swift[1]

[1]: https://faultlore.com/blah/swift-abi/ - How Swift Achieved Dynamic Linking Where Rust Couldn't


> Nah, python is one of the few languages with an ecosystem that can still be properly packaged, even after pep517. All libraries are scripts that are installed (and bytecode-compiled) globally.

Virtually every Python programmer now uses some sort of virtual environment, because a single Python environment cannot have multiple versions of the same package installed, and those "proper" packages are a pain in the ass for that reason. If I install a `python3-pil` package in Ubuntu jammy today, it will be 9.0.1 instead of 10.2.0, and I cannot install 10.2.0 into the global environment without risking some breakage. Yes, there would be some backport packages and then PPAs, but they will again be non-cooperative in the same way, and I just want to use `pip`.

What if each leaf package making use of `python3-pil` had its own virtual environment instead? That would solve most problems, but doesn't that sound like rustup? In fact, `dist-packages` was born exactly due to the stubbornness of those distro packages...
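
For illustration, roughly what that per-project approach looks like today (a hypothetical session; the pinned version comes from the example above):

```sh
# A throwaway environment per project instead of the distro-wide install
python3 -m venv .venv
. .venv/bin/activate
pip install "pillow==10.2.0"   # pinned, independent of python3-pil from apt
```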

> Dynamic linking isn't really an end-user feature, but rather a system integrator's.

Any system integrator that strictly insists on dynamic linking is missing the whole point of building systems in the first place.


My point was that it does both. It didn't ditch dynamic system libraries, and it's better off for it than the rest of 'em.

The fun thing about the Python ecosystem is that even if you write an application for the latest version of Pillow, its API is stable, and you're unlikely to actually use anything introduced only in the newest version, so switching between versions is largely free (though you should definitely specify a minimum version reflecting the functionality you actually use). In the worst case, the distro will eventually catch up with the library version your software needs, and then it can be packaged. Unlike in the Node ecosystem, in Python land code can easily survive years without upstream maintenance!

> Any system integrator that strictly insists on dynamic linking is missing the whole point of building systems in the first place.

You can't really say things like this without elaborating.


A couple of weeks ago I was curious how many binaries on my system did not link glibc. On my pretty standard Ubuntu it was exactly ten, and they were 100% Go binaries.


That's because shared libraries have an extremely heavy bias towards C. They're ill-suited for languages that have generics, and use monomorphisation or rely on inlining. The standard .so format and dynamic linker don't support features that can't be dumbed down to look like C.


You are making the assumption that shared libraries have to somehow be compatible with the C ones and use the C dynamic linker. You can 100% have your own dynamic linker and have your little ecosystem work the way you want for your language, and (to be clear) you can absolutely do this without being Apple and controlling the "system" dynamic linker (which is itself really just a convention) to add features, as they do with Swift/Objective-C. (Hell: even just having an alternative C standard library on your system demonstrates a separate linker ecosystem, as the C dynamic linker is effectively shipped as part of the C standard library.) The only thing that matters here is not the on-disk format or the mechanism, but how it would be packaged and the tradeoffs around when the time has to be put into baking them.


> "But unlike the entire rest of Debian it's entirely & purely -dev packages."

I've heard that before and wondered what it means, so maybe someone can explain. My primitive understanding is that the regular package has the binary (executable or library) plus any additional files like man pages. The -dev package is the header files and the source package is the Debian patched source that can build the regular package. How is it different for Rust?

BTW Debian has a nice list of what is available from the Rust world:

https://qa.debian.org/developer.php?email=pkg-rust-maintaine...


Semantic Web is dead, because we've suddenly achieved its goal without it.

We no longer need everyone to publish their information in a machine-readable form, because through LLMs we can make machines read the human-readable versions now.

LLMs are not quite reliable, but the machine-readable data was never reliable either. The Web as a whole is messy, and invisible metadata is systematically less well maintained than the primary human-centric information.


> The Web as a whole is messy, and invisible metadata is systematically less well maintained than the primary human-centric information.

Besides web, there are tons of data providers and their clients who are interested in well-maintained, high-quality data.


> besides web

That's Semantic Besides, not Semantic Web then.


Some industries adopted semantic web stuff. Others could benefit from it (having common ontologies for data integration).


Works fine in Firefox


It is true for most web apps that are just moving data between a database and HTML.

Rust has features for maximizing performance and optimizing memory usage, but for a lot of web apps JS or Golang can be efficient enough.

Rust has features for safely working with large projects and complex multi-threaded code, but for web apps simple request-response processing may be good enough.

On the front-end, only JavaScript can touch the DOM or communicate with the rest of the world, so all your Rust will only be delegating work to JS. That's pure overhead, unless you have a lot of other computation to do.
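
For example, here is a minimal sketch of what that delegation looks like with the wasm-bindgen/web-sys crates (the element id is made up, and the relevant web-sys features need to be enabled); every DOM call below is a thin wrapper over JS glue imported into the WASM module:

```rust
use wasm_bindgen::prelude::*;

// Compiled to WASM, this still cannot touch the DOM directly: each
// web_sys call goes through a JS function generated by wasm-bindgen.
#[wasm_bindgen]
pub fn set_status(text: &str) -> Result<(), JsValue> {
    let document = web_sys::window()
        .ok_or_else(|| JsValue::from_str("no window"))?
        .document()
        .ok_or_else(|| JsValue::from_str("no document"))?;
    let el = document
        .get_element_by_id("status")
        .ok_or_else(|| JsValue::from_str("no #status element"))?;
    el.set_text_content(Some(text));
    Ok(())
}
```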

Rust is still awesome for lower-level networking infrastructure (servers, proxies, etc.) and tooling (e.g. log parsing and stats, compression, image processing).

Rust can be beneficial for web services that are compute heavy or process large amounts of data (e.g. working with map data) and tasks that need speed and low latency (e.g. video conferencing back-ends).

