
I’ve said several times before that notifications should be reportable as spam directly to Google/Apple, just like email spam reporting.

Google tried to tackle this with notification channels, but the onus falls on the developer to actually use them honestly. No company trying to draw attention back to their app with advertisement notifications will willingly name a notification channel “advertisements” or “user re-engagement” or similar — they’ll just interleave spam with all the non-spam. This API from G hasn’t worked.
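For reference, a minimal Kotlin sketch of what honest channel usage would look like (the channel IDs and names here are invented for illustration):

  import android.app.NotificationChannel
  import android.app.NotificationManager
  import android.content.Context

  // A well-behaved app would separate promotional noise from critical alerts,
  // so users could silence "Promotions" without losing security warnings.
  // Nothing in the API forces this split -- that's the whole problem.
  fun registerChannels(context: Context) {
      val nm = context.getSystemService(NotificationManager::class.java)
      nm.createNotificationChannel(
          NotificationChannel("security_alerts", "Security alerts",
              NotificationManager.IMPORTANCE_HIGH))
      nm.createNotificationChannel(
          NotificationChannel("promotions", "Promotions",
              NotificationManager.IMPORTANCE_LOW))
  }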


There should be a public API, open to any user-designated program (including self-made ones, without requiring any special hoops to obtain fancy entitlements), that can act as a "firewall" for all notifications (except, possibly, a few system-critical ones), allowing it to control and modify them as it sees fit.


Applications can interact with notifications on the user's behalf via the accessibility permission - I do this with KDE Connect. I don't know what the limitations are.


On iOS?

Last time I checked, kdeconnect-ios was unable to read any third-party notifications, let alone do anything to them or modify their text or appearance in any way.

The project README still says "Notification syncing doesn't work because iOS applications can't access notifications of other apps" (https://github.com/KDE/kdeconnect-ios?tab=readme-ov-file#kno...), so I think it's still the case.


On Android, I forgot to mention.


Sounds great! Until your grandpa downloads a notification filter that really just forwards all his notifications to the bad guys so they can hack all his accounts.


That can already happen because apps can get the permission to read your notifications.


Precisely this. There needs to be an API that all apps have to use not only for notifications but also for getting your contacts, your phone's location, etc. that is spoofable by the user. Or better yet, an AI program that runs entirely on the phone and does the spoofing automatically and entirely on behalf of the user.

Let the enshittified apps' ads interact with your AI agent and steal your fake "data" in the background without bothering the user.

Also important: It must be IMPOSSIBLE for any app to detect that its requests are being intercepted by your agent. (If they can tell, they'll refuse to work until you give them direct access.)

This is a real killer app for AI but you'll never get VC funding to build it.


On Android such a spoof app existed; it could hook into seemingly any API call and return values you control: https://www.youtube.com/watch?v=_dt50HWys1k&t=27s

But of course you need a rooted phone, and rooted phones can't run banking apps, tap-to-pay, Netflix, Pokemon Go, blah blah..

The notification "firewall" is probably not impossible to make. I use Pushbullet; it mirrors notifications to my computer (to the browser extension, to be exact), and I can already dismiss notifications coming into my phone from the computer. It should be possible to make an app that intercepts all notifications, analyzes their contents, and dismisses them if they're spam...
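A minimal sketch of that idea on Android with a NotificationListenerService (the spam heuristic below is a placeholder, and the user has to grant notification access for any of this to work):

  import android.app.Notification
  import android.service.notification.NotificationListenerService
  import android.service.notification.StatusBarNotification

  // Sketch only: the service must be declared in the manifest with the
  // BIND_NOTIFICATION_LISTENER_SERVICE permission, and the user must
  // enable it under notification access in system settings.
  class NotificationFirewall : NotificationListenerService() {
      override fun onNotificationPosted(sbn: StatusBarNotification) {
          val title = sbn.notification.extras
              .getCharSequence(Notification.EXTRA_TITLE)?.toString().orEmpty()
          if (looksLikeSpam(sbn.packageName, title)) {
              cancelNotification(sbn.key)  // dismissed before it bothers anyone
          }
      }

      // Placeholder heuristic; real content analysis would go here.
      private fun looksLikeSpam(pkg: String, title: String): Boolean =
          title.contains("% off") || title.contains("sale", ignoreCase = true)
  }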


> Google tried to tackle this with notification channels, but the onus falls on the developer to actually use them honestly. No company trying to draw attention back to their app with advertisement notifications will willingly name a notification channel “advertisements” or “user re-engagement” or similar — they’ll just interleave spam with all the non-spam. This API from G hasn’t worked.

Revolut are really annoying for this. I'm sure there are a few spare days in their development cycle for someone to implement it if they wanted to, but instead they keep everything on the same channel, which is 50% promo shit, because you don't want to miss that notification warning you about fraudulent activity on your card.


We also need some kind of (privacy-friendly) open-rate tracking and spam protection.

If many users receive a new kind of notification, using a new template, with low open rates, and uncorrelated with app activity, somebody at Apple should at least give it a 5-second glance and decide between "false positive" and "needs to be elevated"
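To make that concrete, a toy sketch of the proposed triage signal (every field name and threshold here is invented):

  // Hypothetical per-template stats Apple could aggregate privately.
  data class TemplateStats(
      val recipients: Long,       // users who received this template
      val opens: Long,            // users who actually tapped it
      val isNewTemplate: Boolean  // first time this template has been seen
  )

  fun needsHumanGlance(s: TemplateStats): Boolean {
      val openRate = if (s.recipients == 0L) 0.0
                     else s.opens.toDouble() / s.recipients
      // Blasted widely, brand new, and almost nobody opens it:
      // escalate for the 5-second human glance, not an automatic ban.
      return s.isNewTemplate && s.recipients > 100_000 && openRate < 0.01
  }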


Rockets?

Seoul is in artillery range of the border.


git by itself is often unsuitable for XL codebases. Facebook, Google, and many other companies / projects had to augment git to make it suitable or go with a custom solution.

AOSP, at 50M LoC, uses a manifest-based, depth=1 tool called repo to glue together a repository of repositories. If you’re thinking “why not just use git submodules?”, it’s because git submodules have a rough UX and would require so much wrangling that a custom tool is more favorable.

Meta uses a custom VCS. They recently released sapling: https://sapling-scm.com/docs/introduction/

In general, the philosophy of distributed VCS being better than centralized is actually quite questionable. I want to know what my coworkers are up to and what they’re working on, to avoid merge conflicts. DVCS without constant out-of-VCS synchronization causes more merge hell. Git’s default packfile settings are nightmarish — most checkouts should be depth==1, with deeper history fetched dynamically only when a file is actually accessed locally. Deeper integrations of VCS with build systems and file systems can make things even better. I think there’s still tons of room for innovation in the VCS space. The domain naturally opposes change because people don’t want to break their core workflows.


It's interesting to point out that almost all of Microsoft's "augmentations" to git have been open source, and many of them have made it into git upstream already and come "ready to configure" in git today ("cone mode" sparse checkouts, a lot of steady improvements to sparse checkouts, git commit-graph, subtle and not-so-subtle packfile improvements, reflog improvements, more). A lot of it is opt-in because of backwards compatibility or extra overhead that small/medium-sized repos won't need, but so much of it is there to be used by anyone, not just the big corporations.

I think it is neat that at least one company with mega-repos is trying to lift all boats, not just their own.


Meta and Google have both been using Mercurial, and they have also been contributing back to upstream Mercurial.


git submodules have a bad UX, but it's certainly not worse than Android's custom tooling. I understand why they did it, but in retrospect it seems like an obvious mistake to me.


> The biggest problem with Nix is its commit-based package versioning.

I am naive about Nix, but...

...isn't that like...the whole selling point of Nix? That it's specific about what you're getting, instead of allowing ticking time bombs like python:latest or npm-style glibc:^4.4.4

Odd to attach yourself to Nix then blog against its USP.


> > The biggest problem with Nix is its commit-based package versioning.

> ...isn't that like...the whole selling point of Nix?

Not quite.

That sentence is definitely the most ... discussion-worthy comment in the blog.

To my understanding, OP wants to write a tool to make it easy for use cases like "use ruby 3.1 and gcc 12 and ...".

The main Nix repository is nixpkgs. Nix packages are source-based, so the build steps are declared for each version. To save maintenance effort, nixpkgs typically only maintains one version of each program.

I read OP's "commit-based package version" phrase to mean "if you want ruby 3.1, you need to find the latest commit in nixpkgs which used ruby 3.1, and use that nixpkgs revision". -- Although, worth noting, this isn't the only way to do it with Nix.

Though, regarding 'commit-based versioning' as Nix's USP? I'd say that's also a reasonable description, yes. (By pinning a particular revision of nixpkgs, the versions you use will be consistent.)


Eh. I've been using NixOS for years now and still find that I often desperately, desperately wish I could upgrade just one program that I need a new version of, without risking that any individual one of my installed packages has a change between the last update and now that messes up my workflow. Or that I could pin a version of a software that I'm happier with, without essentially rolling my own package repo. It is, in fact, the only package manager I'm aware of that makes it such a pain to do that. It's only because people in this thread are insisting it's doable that I say 'such a pain' instead of just 'impossible'.

A few weeks ago I needed to update firefox to get a fix for a bug that was causing crashes, but of course that meant updating all of nixpkgs. When I finished the switch, the new version of pipewire was broken in some subtle way; I had to roll it back and have been dealing with firefox crashing once a week instead. I can't imagine pitching this to my team for development when I'm having this kind of avoidable issue just with regular packages that aren't even language dependencies.

To those who say 'if you want to lock your dependencies for a project, you can just build a nix flake from a locked file using the <rust | python | npm> tools' I say, why the hell would I want to do that? Being able to manage multiple ecosystems from the same configuration tool was half the draw of nix in the first place!


Creating overlays on nixpkgs is fairly trivial. There was a bug a couple of weeks ago with yt-dlp, which was fixed in the nightly version. But even today the new version is not yet available in nixpkgs-unstable. Did I wait all this time with a broken yt-dlp? No! I created a derivation that overrides the git commit of the original yt-dlp, and added it as an overlay to nixpkgs. Once nixpkgs has the newer version, I'll remove the overlay and customized derivation. This is a 5-lines-of-code change. You don't have to use flakes if you don't want to, it works without flakes as well.

Now compare the above with how you would customize a version in other systems, like Debian with apt...


Um? That's trivial with flakes (and I think it was doable without flakes, but I don't really remember/care). For one-offs (I'd probably do this for your firefox example but YMMV), just tell it the version to run:

  $ nix run nixpkgs#firefox -- --version
  Mozilla Firefox 138.0.1
  $ nix run github:nixos/nixpkgs/nixos-unstable#firefox -- --version
  Mozilla Firefox 139.0.1
  $ nix run github:nixos/nixpkgs/b98a4e1746acceb92c509bc496ef3d0e5ad8d4aa#firefox -- --version
  Mozilla Firefox 122.0.1
Or, if you want to actually incorporate it into your system, tell the system flake to pull whatever versions you want:

  {
    inputs = {
        nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.11";
        nixpkgs-unstable.url = "github:nixos/nixpkgs/nixos-unstable";
        nixpkgs-b98a.url = "github:nixos/nixpkgs/b98a4e1746acceb92c509bc496ef3d0e5ad8d4aa";
    };
    outputs = { self, nixpkgs, nixpkgs-unstable, nixpkgs-b98a }: {
        nixosConfigurations.yourmachinename = nixpkgs.lib.nixosSystem {
            system = "x86_64-linux";
            specialArgs = {
                nixpkgs-unstable = import nixpkgs-unstable {
                    system = "x86_64-linux";
                };
                nixpkgs-b98a = import nixpkgs-b98a {
                    system = "x86_64-linux";
                };

            };
    ---snip---
and then when you pull packages say which one you want:

    packages = with pkgs; [
      dillo  # from stable nixpkgs
      nixpkgs-unstable.firefox  # from unstable
      nixpkgs-b98a.whatever  # from some exact commit of nixpkgs
    ]

I assume you could do the same thing for project-level flakes, but TBH I don't usually do that so I don't have the code to hand. (In contrast with grabbing system packages from whatever version of nixpkgs I want, which I know works because I pulled the example code from the config on the machine I'm typing this comment on.)


Bingo. Nix doesn't give you a generalizable-across-languages-and-ecosystems way of specifying specific versions without blowing up your package size, unless you hand Nix to your users (which we didn't want to do).

Maybe we were holding it wrong, but we ultimately made the call to move away for that reason (and more).


Does Lottie do no temporal compression? Is it just a sequence of p-frames with a json manifest?


It's really impressive. Technical content, GitHub repos that go along with the videos, set design, retro editing -- much higher quality than a lot of stuff out there from major studios


Unfortunately the cheats are way ahead of this. Most modern aimbots in shooters like Counter-Strike are (intentionally) not-obvious. They give minor advantages and do tiny corrections for an already-immensely-skilled player to gain a small edge. In a game where the difference between a great player and an elite player is small, they can be the invisible difference maker.


> I think many would quickly understand the value proposition

I think thousands of innocent teenagers without credit cards will be furious. Not to mention anyone that takes a game semi-seriously and cares about their reputation after getting banned. Also, with real-dollar values tied to skins, you’re not just nuking someone’s $50 account — accounts and their associated items can be worth a lot of money.

Anti-cheats need to be certain. They should also, however, ban the hardware ID, which lots of game companies choose not to do (because they’d lose money).


So because China will steal it eventually, we should just give it away now? That’s your argument?

>clearly China is capable of catching up with ASML’s tooling

The only thing clear to me is precisely the opposite. Nobody has been able to catch up with ASML, including China. If China is capable of catching up on their own (without espionage), why would Taiwan even matter? Why would export controls on ASML tooling even matter?

They matter because ASML and TSMC are companies built on secret know-how that others can’t replicate. Do we really need to explain on HN that companies are built on secrets?


> why would Taiwan even matter?

The CCP has fully subscribed to irredentism and it has popular support in the mainland. Taiwan will never not matter.

But otherwise I agree with the rest of your argument.


> CCP has fully subscribed to irredentism and it has popular support in the mainland

Plenty of countries, particularly those in an economic slump, have popular support for stupid wars. That changes quickly when the war is started and the costs come home.


This doesn’t stop them from starting stupid wars for stupid reasons. Losing a war is not even a guarantee that they will give up their future territorial ambitions or claims; just two examples: Spain and Gibraltar, or Argentina and the Falklands.


> Argentina and the Falklands

This is the example Xi, and those around him, would be looking to.


Yeah. Irredentism is a fundamentally emotional ideology born of nationalism. It doesn’t have to make sense, it just has to be a rallying cry.


> It doesn’t have to make sense, it just has to be a rallying cry.

Correct. It's for domestic consumption. By the time leadership is weak enough to be compelled into playing it out, chances are it won't make military sense.


The hope is that it not making military sense prevents military action. We don’t have any such promise from reality, or much historic precedent to depend on, and in the case of the PRC and Taiwan, it is CCP leadership which is angling for a takeover of the independent nation of Taiwan and the eradication of the Republic of China.


Not everyone reading your code will be using an IDE. People may be passively searching your code on GitHub/gerrit/codesearch.

val/var/let/auto declarations destroy the locality of understanding of a variable declaration for anyone reading without an IDE, forcing a naive code reader into a jump-to-definition. A corollary of this problem also exists: if you don’t have an explicit type hint in a variable declaration, even readers who are using an IDE have to do TWO jump-to-definition actions to find the source of the variable’s type.

e.g.

  val foo = generateFoo()

where generateFoo() has the signature

  fun generateFoo(): Foo

With the above code, one would have to jump to definition on generateFoo, then jump to definition on Foo, to understand what Foo is. In a language that requires an explicit type hint at the declaration, this is only one step.
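For contrast, the explicit form keeps the type readable in place:

  val foo: Foo = generateFoo()  // type visible at the declaration, no jump needed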

There’s a tradeoff here between pleasantness while writing the code and less immediate local understanding for future readers/maintainers. It really bothers me when a ktlint plugin actually fails a compilation because a code author threw in an “unnecessary” type hint for clarity.

Related (but not directly addressing auto declarations): “Greppability is an underrated code metric”: https://morizbuesing.com/blog/greppability-code-metric/


If you accept f(g()), you've already accepted that the type of every expression is not written down.


I don’t particularly accept f(g()). I like languages that require argument labels (obj-c, swift). I would welcome a language that required them for return values as well. I’d even enjoy a compiler that injected omitted ones on each build, so you can opt to type quickly while leaning on the compiler for clarity beyond build time.


Argument labels are equivalent to variable names. You still have them with auto. In either case you don't see the actual type.


I do not agree that using an IDE matters.

If you cannot recognize the type of an expression that is assigned to a variable, you do not understand the program you are reading, so you must search its symbols anyway.

Redundantly writing the type when declaring the variable is of no help when you do not know whether the right-hand side expression has the same type.

When reading any code base with which you are not familiar, you must not use a bad text editor, but either a good text editor designed for programmers or any other tool that allows fast searching for the definitions of any symbols encountered in the source text.

Adding useless redundancy to the source text only bloats it, making reading more difficult, not easier.

I never use an IDE, but I always use good programming language aware text editors.


The argument is tautological.

I want to use a text editor => This is the wrong tool => Yes, but I want to use a text editor.

These people do use the wrong tooling. The only way to cure this grievance is to use proper tooling.

The GitHub web UI has some IDE features, such as symbol search. I don't see any reason not to use a proper IDE. github.dev is a simple click away in the UI. When you use Gerrit, do a local checkout; that's one git command.

If you refuse to use the correct tools for the job, your experience is degraded. I don't see a reason to consider this case when writing code.


Have you ever worked in a large organization with many environments? You may find yourself with a particular interface that you don’t know how to use. You search the central code search tool for usages. Some other team IS using the API, but in a completely different environment and programming language, and they require special hardware in their test loop, and they’re located in Shanghai. It will take you weeks to months to replicate their setup. But your goal is to just understand how to use your version of the same API. This is incredibly common in big companies. If you’re in a small org with limited environments it’s less of an issue.


I have worked in big environments. My idea of "big" might be naive: environments spanning different OSes and different, including old, languages like Fortran and Pascal. But I have never been in a situation where I couldn't check out said code, open it in my IDE, and build it. If you can't, that sounds like another case of deficient tooling justifying deficient tooling.

These were not some SWE wonderlands either. The code was truly awful at times.

The Joel test is 25 years old. It's an industry standard; I, and many other people, consider it a minimum requirement for software engineering. If code meets the "2. Can you make a build in one step?" requirement, it should be IDE-browsable in one step.

If it takes weeks to replicate a setup, the whole environment is deeply flawed. The one-step build is the second point on the list because Joel considered it the second most important thing, out of 12.


My situation: hardware company, over 100 years old. I’ve found useful usage examples of pieces of software I need to use, but only on an OS we no longer ship, from a supplier we no longer have a relationship with, that runs on hardware that we no longer have. The people that know how to get the dev environment up are retired.

In those cases, I’m grateful for mildly less concise languages that are more explicit at call and declaration sites.


If you are unable to find the type of a right-hand-side expression that appears in an assignment or initialization, then the environment does not allow you to work and it must be changed.

The redundant writing of the type on the left-hand side does not help you, because without knowing the type of the right-hand side you cannot recognize a bug. Not specifying the type on the left-hand side can actually avoid many bugs in complex environments: there is no need to update the code that uses some API whenever someone changes the type of the result, unless the new type causes a type mismatch error elsewhere, where it would be reported. That allows fixes to be made at the right locations in the source code, not at the spurious locations of variable definitions, where updating the type would not prevent the real bugs at the points of use of that variable.
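A small Kotlin illustration of that point (the names are hypothetical):

  // If makeClient() changes its return type from HttpClient to RetryingClient,
  // this declaration needs no edit; any real incompatibility surfaces where
  // the variable is *used*, which is where the fix belongs.
  val client = makeClient()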

The only programming languages that could be used without the ability to search for the definition of any symbol were the early versions of FORTRAN and BASIC, where the type of a symbol was encoded in the name of the symbol, by using a one-letter prefix in FORTRAN (like IVAR vs. XVAR) and a one-symbol suffix in BASIC (like X vs. X$ vs. X%).

The "Hungarian" convention for names used in early Microsoft Windows has been another attempt of encoding the types of the symbols in their names, following the early FORTRAN and BASIC style, but most software developers have disliked this verbosity.


> if you don’t have an explicit type hint in a variable declaration, even readers that are using an IDE have to do TWO jump-to-definition actions to read the source of the variable type.

This isn’t necessarily the case. “Go to Definition” on the `val` goes to the definition of the deduced type in every IDE and IDE-alike I’ve used.

