This is completely wrong, as the other commenter stated. This would almost certainly fall under the work-product doctrine [1,2], under which documents prepared in anticipation of litigation are protected from discovery. It is analogous to attorney-client privilege, but is actually defined much more broadly [4]. Even if opposing counsel is able to obtain discovery of a work product, only fact work product, not opinion work product, is allowed. In other words, the court withholds anything related to “mental impressions, conclusions, opinions, or legal theories of an attorney or other representative of a party concerning the litigation” [3]. For conversations with AI about how to conduct your case, that would exclude basically everything, since those are opinion work product, not fact work product. Fact work product would be things like “statements or interviews of now deceased witnesses, photographs or video of an accident scene taken at the time of the accident” [4].
> They don't explain it, but I'm assuming that the API is something like api.cercadating.com/otp/<phone-number>, so you can guess phone numbers and get OTP codes even if you don't control the phone numbers.
They mention guessing phone numbers, and then the API call for sending the OTP... literally just returns the OTP.
Yeah, I guess there's no reason for the API to ever return the OTP, but the severity depends on how you call the API. If the API is `api.cercadating.com/otp/<unpredictable-40-character-token>`, then that's not so bad. If it's `api.cercadating.com/otp/<guessable four-digit number>` that's a different story.
From context, I assume it's closer to the latter, but it would have been helpful for the author to explain it a bit better.
Hi, author here! My bad if that was not clear. The endpoint was just a POST request where the body was the phone number, so that is all you needed to know to take over someone's account.
I think it could be a tad clearer. I understand what you are saying, but this thread requires reading multiple messages, parsing out the wrong parts, and putting together the correct ones to fully understand.
Put very simply, they exposed an endpoint that took a phone number as input and sent an OTP code. That's reasonable and many companies do this without issue. The problem is, instead of just sending the OTP code to the phone, they _returned the code to the client_ as well.
There is never a good reason to do this; it defeats the entire purpose. The only reason you send a code to a phone is for the user to enter it to prove they "own" that phone number.
It's like having a secure vault but leaving a post-it note with the combination stuck to it.
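In code, it's roughly the difference between these two handlers. This is a minimal sketch with made-up names, not Cerca's actual implementation:

```python
import secrets

otp_store = {}  # server-side storage: phone -> current code

def send_sms(phone, message):
    """Stub standing in for a real SMS gateway call."""
    pass

def request_otp_buggy(phone):
    code = f"{secrets.randbelow(10**6):06d}"
    otp_store[phone] = code
    send_sms(phone, f"Your code is {code}")
    # Bug: the code is echoed back to whoever made the request, so
    # knowing a victim's phone number is enough to take over the account.
    return {"status": "sent", "otp": code}

def request_otp_fixed(phone):
    code = f"{secrets.randbelow(10**6):06d}"
    otp_store[phone] = code
    send_sms(phone, f"Your code is {code}")
    # The code lives only server-side; the client must read it from the
    # SMS and submit it to a separate verification endpoint.
    return {"status": "sent"}
```

The fixed version changes nothing about the user flow; it just stops handing the secret to the attacker.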
Manufacturers make energy-efficient appliances because consumers prefer them over less efficient appliances, not because the government is forcing them.
If that is the case, then there should be no need for Energy Star, because manufacturers already have an incentive to increase efficiency due to customer demand.
I'm highly dubious of the $500b this program has claimed to save consumers. Almost without fail, efficiency improvements in home appliances have greatly increased complexity (less reliable), reduced common components across brands (higher cost to fix), increased purchase prices, and compromised actual performance.
> If that is the case, then there should be no need for Energy Star, because manufacturers already have an incentive to increase efficiency due to customer demand.
Energy Star is handy because it's a known quantity with set standards. If there were no standard, companies could start doing all sorts of marketing BS with meaningless numbers. With ES you know what you're getting, and it's an apples-to-apples comparison.
Same with dishwashers: Energy Star means something specific, and if a unit has the sticker then it's at least at the same level as another unit with the sticker. An OEM now has to go above and beyond that in features.
Contrast that with the MaP test in the toilet industry, which most manufacturers use even though it is voluntary:
I'm fine with paying for it, but what's worse is that the notifications aren't end-to-end encrypted, and the plaintext passes through their server and Google's.
For some use cases where self-hosting is required for compliance reasons, this is a deal breaker. And spinning up your own mobile apps isn't quite practical.
We're very actively working on end-to-end encryption for notifications; it'll be in the Zulip 11.0 release this summer if at all possible.
While I'm here, the Flutter rewrite of the mobile app is launching next month, and while the initial launch won't add much functionality over the previous React Native apps, the rewrite is way faster, less buggy, and a lot more pleasant to add new features to.
Starlark does allow for much more concise and powerful config specification. I am building https://github.com/claceio/clace, which is an application server for teams to deploy internal tools.
That's such a weird characterization of this article, which (in contrast to other writing on this subject) clearly concludes (a) that Nix achieves a very high degree of reproducibility and is continuously improving in this respect, and (b) that Nix is moreover reproducible in a way that most other distros (even distros that do well on some measures of bitwise reproducibility) are not: it allows time traveling, i.e. reproducing builds in different environments, even months or years later, because the build environment itself is more reproducible.
The article you linked is very clear that, both qualitatively and quantitatively, NixOS has achieved a high degree of reproducibility, and it even explicitly rejects the possibility of assessing absolute reproducibility.
NixOS may not be the absolute leader here (that's probably stagex, or GuixSD if you limit yourself to more practical distros with large package collections), but it is indeed very good.
> NixOS may not be the absolute leader here (that's probably stagex, or GuixSD if you limit yourself to more practical distros with large package collections), but it is indeed very good.
Could you comment on how stagex is? It looks like it might indeed be best in class, but I've hardly heard it mentioned.
The Bootstrappable Builds folks created a way to go from only an MBR of (commented) machine code (plus a ton of source) all the way up to a Linux distro. The stagex folks built on top of that towards OCI containers.
And with even a little bit of imagination, it's easy to think of other possible measures of degrees of reproducibility, e.g.:
• % of deployed systems which consist only of reproducibly built packages
• % of commonly downloaded disk images (install media, live media, VM images, etc.) that consist only of reproducibly built packages
• total # of reproducibly built packages available
• comparative measures of what NixOS is doing right, e.g.: of packages that are reproducibly built in some distros but not others, how many are built reproducibly in NixOS
• binary bootstrap size (smaller is better, obviously)
It's really not difficult to think of meaningful ways that reproducibility of different distros might be compared, even quantitatively.
Sure, but in terms of the absolute number of packages that are truly reproducible, they outnumber Debian, because Debian only targets reproducibility for a smaller fraction of total packages, and even there they're not at 100%. I haven't been able to find reliable numbers for Fedora on how many packages they have, and in particular how many this 99% is targeting.
By any conceivable metric Nix really is ahead of the pack.
Disclaimer: I have no affiliation with Nix, Fedora, Debian etc. I just recognize that Nix has done a lot of hard work in this space & Fedora + Debian jumping onto this is in no small part thanks to the path shown by Nix.
> Disclaimer: I have no affiliation with Nix, Fedora, Debian etc. I just recognize that Nix has done a lot of hard work in this space & Fedora + Debian jumping onto this is in no small part thanks to the path shown by Nix
This is completely the wrong way around.
Debian spearheaded the Reproducible Builds effort in 2016 with contributions from SUSE, Fedora, and Arch. NixOS got onto this as well but saw less progress until the past 4-5 years.
The NixOS effort owes the Debian project its thanks.
> Arch Linux is 87.7% reproducible with 1794 bad 0 unknown and 12762 good packages.
That's < 15k packages. Nix by comparison has ~100k total packages they are trying to make reproducible and has about 85% of them reproducible. Same goes for Debian - ~37k packages tracked for reproducible builds. One way to lie with percentages is when the absolute numbers are so disparate.
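Plugging in the rough numbers quoted in this thread (approximate figures for illustration, not authoritative counts):

```python
# Approximate counts quoted in this thread, not authoritative figures.
arch_good, arch_bad = 12762, 1794
arch_total = arch_good + arch_bad    # packages Arch tracks for reproducibility

nix_total = 100_000                  # roughly all of nixpkgs
nix_good = int(nix_total * 0.85)     # ~85% reproducible

print(f"Arch: {arch_good}/{arch_total} = {arch_good / arch_total:.1%}")
print(f"Nix:  {nix_good}/{nix_total} = 85.0%")
```

A higher percentage on a far smaller base still means far fewer reproducible packages in absolute terms.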
> This is completely the wrong way around. Debian spearheaded the Reproducible Builds effort in 2016 with contributions from SUSE, Fedora, and Arch. NixOS got onto this as well but saw less progress until the past 4-5 years. The NixOS effort owes the Debian project its thanks.
Debian organized the broader effort across Linux distros. However the Nix project was designed from the ground up around reproducibility. It also pioneered architectural approaches that other systems have tried to emulate since. I think you're grossly misunderstanding the role Nix played in this effort.
> That's < 15k packages. Nix by comparison has ~100k total packages they are trying to make reproducible and has about 85% of them reproducible. Same goes for Debian - ~37k packages tracked for reproducible builds. One way to lie with percentages is when the absolute numbers are so disparate.
That's not a lie; that is the package target. The `nixpkgs` repository, in the same vein, packages a huge number of source archives and repackages entire ecosystems into its own repository. This greatly inflates the number of packages. You can't look at the flat numbers.
> However the Nix project was designed from the ground up around reproducibility.
It wasn't.
> It also pioneered architectural approaches that other systems have tried to emulate since.
This has had no bearing, and you are greatly overestimating the technical details of Nix here. It was fundamentally invented in 2002, and things have progressed since then. `rpath` hacking really is not magic.
> I think you're grossly misunderstanding the role Nix played in this effort.
I've been contributing to the Reproducible Builds effort since 2018.
I think people are generally confused by the different meanings of reproducibility in this case. The reproducibility that Nix initially aimed at is: multiple evaluations of the same derivations will lead to the same normalized store .drv. For a long time evaluations were not completely reproducible, because evaluation could depend on environment variables, etc. But flakes have (completely?) closed this hole. So, reproducibility in Nix means that evaluating the same package set will lead to the same set of build recipes (.drvs).
However, this doesn't say much about build artifact reproducibility. A package set could always evaluate to the same .drvs, but if all the source packages choose what to build based on random() > 0.5, then there is no reproducibility of build artifacts at all. This type of reproducibility was spearheaded by Debian and Arch more than Nix.
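The distinction can be shown with a toy sketch (this has nothing to do with Nix's actual implementation; the names and hashing scheme are made up):

```python
import hashlib
import json

def evaluate(package_set):
    # "Evaluation" reproducibility: the same inputs always normalize to
    # the same build recipe, identified by a stable hash (loosely
    # analogous to a .drv).
    recipe = json.dumps(package_set, sort_keys=True)
    return hashlib.sha256(recipe.encode()).hexdigest()

def build(recipe_hash, build_time):
    # "Build" reproducibility is a separate question: if the build step
    # embeds impurities such as a timestamp, the same recipe yields a
    # different artifact on every run.
    artifact = f"{recipe_hash}:built-at-{build_time}"
    return hashlib.sha256(artifact.encode()).hexdigest()

pkgs = {"hello": {"src": "hello-2.12.tar.gz", "flags": ["-O2"]}}
drv_a, drv_b = evaluate(pkgs), evaluate(pkgs)
print(drv_a == drv_b)  # True: same package set -> same recipe hash

out_a = build(drv_a, build_time=1700000000)  # first build
out_b = build(drv_b, build_time=1700000001)  # rebuild one second later
print(out_a == out_b)  # False: impure build -> different artifacts
```

Nix guarantees the first property by construction; the second still has to be won package by package, which is the work Debian's Reproducible Builds effort focuses on.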
For development, "localhost" has a convenience bonus: it has special treatment in browsers. Many browser APIs like Service Workers are only available on pages with a valid WebPKI cert, except for localhost.
Yeah, I've been using localhost domains on Linux for a while. Even on machines without systemd-resolved, you can still usually use them if you have the myhostname module in your NSS DNS module list.
I ended up writing a similar plugin[1] after searching in vain for a way to add temporary DNS entries.
The ability to add host entries via an environment variable turned out to be more useful than I'd expected, though mostly for MITM(proxy) and troubleshooting.
Bring up dates and times if you want to wreak havoc on any AI. :D
Developers' most beloved topic the world over, how to handle dates and times correctly, is still a subject of great misunderstanding. AI and AI agents are no different. LLMs seem to help a little, but only if you know what you are doing, as is usually the case.
Some things won't change so fast; at one point or another, data must match certain building blocks.
Google AI Overview incorrectly identified the day for a given date due to a timezone conversion issue, likely using PST instead of IST. ChatGPT and Perplexity provided more accurate and detailed responses.
One would think the arcana of time zones and the occasional leap second would not interfere with an individual setting egg timers often enough to become a burden.
Except that's not the problem. It's basic comprehension of requests. They aren't getting the wrong time; they try to play music, or the phone says "no timers playing" while the Google Home WILL NOT STOP until you lock the phone, etc.
It's basically an embarrassment for a project that's been alive this long from such a major company.
Supporting existing projects is the oil-sands mining of the promotion world: low, old-buzzword content, little reward. Implementing new buzzwords with streetcred-rich frameworks, that's the fracking.
Efficiency, capabilities, and customer satisfaction are irrelevant.
Phones today cannot even reliably handle things like "remind me to pick up tomatoes next time I am at a store".
Google knows perfectly well where I am and wants me to add 'info' to locations and businesses the second I arrive (I just got such a notification today), but reminders like these are unavailable.
The location based reminders sure worked perfectly fine many years ago, like when I had Nexus phones. It's just getting worse all the time, I don't get it.
And it worked with Samsung Bixby. Gemini, even after getting Advanced, is just terrible as a phone AI. I need to set a lot of alarms and calendar events; I don't need to do crazy Photoshop-style edits (which Gemini is admittedly good at).
My own hands and cheap alarm clocks, or a piece of paper, have been working reliably for several decades. They also don't stop working when a corporation decides they want to hype something.