I guess with a project of this importance, one cannot have anything other than a fixed release date, even if the scope is dynamic. A bit like how Olympic Games hosts are decided several years in advance (Salt Lake City 2034).
The whole point (or at least the main point) of tax paperwork is being able to produce it for tax investigators.
If you don't want to share anything, then it's easier not to do the accounting at all. Which, I'd guess, is seriously illegal pretty much everywhere.
Being unable or unwilling to produce mandatory records is fraud. Technical measures taken to make yourself unable to produce records (e.g. offshore, encrypted archival) are evidence of criminal intent and possibly separate crimes.
So why did every company in the world start auto-deleting emails ~10 years ago? I don't believe many people were sued for fraud. These days cloud services even offer time-based auto-delete functionality.
It's called "object lifecycle management", because I guess fraud was too catchy.
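For the curious, this is roughly what it looks like in practice. A minimal sketch using AWS S3's lifecycle API via boto3; the bucket name, prefix, and 7-year retention period are all made-up examples, not anything from the thread:

```python
import boto3

# Hypothetical setup: permanently expire archived mail once a
# 7-year (~2555 day) retention period has passed.
s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-mail-archive",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-mail-after-retention-period",
                "Filter": {"Prefix": "email-archive/"},  # only the archive prefix
                "Status": "Enabled",
                "Expiration": {"Days": 2555},  # delete once retention ends
            }
        ]
    },
)
```

Once the rule is in place, deletion happens automatically and uniformly, which is exactly why it reads as routine records management rather than targeted destruction.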
Strange, because that sounds reasonable but the reasoning doesn't actually work, does it?
Either "the law" can be trusted, and there's no point to deleting data after a cut-off date, or the reverse is true and you're no worse off getting caught deleting data.
I believe the law actually provides a middle ground. You're liable for tax fraud for X years, but you're allowed to delete the data after Y years. Since X > Y, deleting the data makes it much harder for the tax office to sue you. It also makes it pointless for them to use their other investigative powers against you, which in practice matters more, especially for smaller firms.
> the team tested it on 20 million prompts given to Gemini. Half of those prompts were routed to the SynthID-Text system and got a watermarked response, while the other half got the standard Gemini response. Judging by the “thumbs up” and “thumbs down” feedback from users, the watermarked responses were just as satisfactory to users as the standard ones.
Three comments here:
1. I wonder how many of the 20M prompts actually got a thumbs up or down. I don't think people click those buttons very often, unless the UI enforces it. I haven't used Gemini, so I might be unaware.
2. Judging a single response might not be enough to tell whether watermarking is acceptable. For instance, imagine the watermark adds "However," to the start of each paragraph. In a single interaction you might not notice it; once you get 3 or 4 responses it might stand out (see the sketch after this list).
3. Since when is Google happy measuring by self-declared satisfaction? Aren't they the kings of A/B testing and high-volume analysis of usage behavior?
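The statistics behind point 2 are easy to simulate. Below is a toy, hypothetical sketch (this is not how SynthID works; it biases token sampling, not canned phrases) showing why a bias that's invisible in one response becomes obvious across several:

```python
import random

random.seed(0)

# Hypothetical watermark: it biases paragraph openers toward
# "However," 40% of the time, versus a 10% unwatermarked baseline.
OPENERS = ["However,", "Moreover,", "In short,", "That said,", "Broadly,"]

def paragraph_opener(watermarked: bool) -> str:
    p_however = 0.4 if watermarked else 0.1
    if random.random() < p_however:
        return "However,"
    return random.choice([o for o in OPENERS if o != "However,"])

def however_rate(n_responses: int, watermarked: bool) -> float:
    # One opener per response, for simplicity.
    hits = sum(paragraph_opener(w) == "However,"
               for w in [watermarked] * n_responses)
    return hits / n_responses

# With one response you get a single 0/1 sample, indistinguishable
# from the baseline. With dozens, the 0.4-vs-0.1 gap stands out.
for n in (1, 4, 50):
    print(f"n={n:3d}  watermarked={however_rate(n, True):.2f}  "
          f"plain={however_rate(n, False):.2f}")
```

The same logic applies to any statistical watermark: detection power, and user annoyance, both grow with the number of responses you see, so per-response thumbs ratings may understate the effect.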
My timesheet SaaS constantly asks for feedback, which I rate 0/10, because constantly being asked for feedback really annoys me.
They then contact me and ask why, so I tell them, and they say there is nothing they can do. A week later I'll get a pop-up asking for feedback, and we go round the same loop again.
> It has also open-sourced the tool and made it available to developers and businesses, allowing them to use the tool to determine whether text outputs have come from their own large language models (LLMs), the AI systems that power chatbots. However, only Google and those developers currently have access to the detector that checks for the watermark.
These two sentences don't make much sense next to each other. Or at least they're misleading.
Yeah, I know. Only the client is open source, and it calls home.