That blog post seems to confirm that they sell data? The direct quotes:
> Mozilla doesn’t sell data about you (in the way that most people think about “selling data”)
> (CCPA) defines “sale” as the [...] in exchange for “monetary” or “other valuable consideration.”
> Whenever we share data with our partners, we put a lot of work into making sure that the data that we share is stripped of potentially identifying information, or shared only in the aggregate, or is put through our privacy preserving technologies
So... they are "sharing data with partners" + "in exchange for “monetary” or “other valuable consideration.”". Sounds like "selling data" to me, and I am not sure who those "most people" are who would think otherwise.
(the fact that the data is stripped of PII is nice, but it does not really change much; it's still selling my data)
Well, you ask a good question, but the examples they give ("optional ads" and "sponsored suggestions") only necessarily imply sharing some aggregates.
Like, for example, "we have this many users in this, that, and that country" - information which ad brokers might require to draw up a contract.
I suppose this is a change from the original "never and nothing" promise, but it's still a fair distance from the kind of data selling most people would imagine, like tracking and sharing your individual browser history.
[I guess I'm biased in favor of Mozilla. If they kick it, among full-featured browser engines only Chromium remains.]
Well, I can see why sharing aggregates would be required, but then they go and explicitly say that they share aggregates _or_ share data stripped of PII _or_ share data via "privacy preserving technologies". The latter two options sound exactly like the kind of data selling that most people would imagine.
And note that they don't say anything about current or future data usage - they are just giving _examples_, which are, by definition, not the whole set. For all I know, they might be selling slightly stripped data already, or plan to start very soon; the message does not contradict this.
Rich Hickey discusses the complexities of optionality in programming, particularly in Clojure’s spec system, emphasizing the need for clear schemas and handling of partial information.
Highlights
* Community Engagement: Acknowledges the presence of both newcomers and regulars at the event.
* Fashion Sense: Introduces a humorous take on the programming roadmap focused on fashion.
* Language Design: Explores the challenges of language design, especially regarding optionality in functions.
* Null References: Cites Tony Hoare’s “billion-dollar mistake” with null references as a cautionary example.
* Spec Improvements: Discusses plans to enhance Clojure’s spec system, focusing on schema clarity and usability.
* Aggregate Management: Emphasizes the importance of properly managing partial information in data structures.
* Future Development: Outlines future directions for Clojure’s spec, prioritizing flexibility and extensibility.
Key Insights
* Community Connection: Engaging with both veteran and new attendees fosters a collaborative environment, enhancing knowledge sharing and community growth.
* Humorous Approach: Infusing humor into technical discussions, like fashion choices, can make complex topics more relatable and engaging.
* Optionality Complexity: The management of optional parameters in programming languages is intricate, requiring careful design to avoid breaking changes (see the spec sketch after this list).
* Null Reference Risks: Highlighting the historical pitfalls of null references serves as a reminder for developers to consider safer alternatives in language design.
* Schema Clarity: Clear definitions of schemas in programming can significantly improve code maintainability and reduce errors related to optional attributes.
* Information Aggregation: Understanding how to manage and communicate partial information in data structures is crucial for creating robust applications.
* Spec Evolution: Continuous improvement of the spec system in Clojure will enhance its usability, allowing developers to better define and manage their data structures.
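For context, here is a minimal sketch of what that optionality looks like in clojure.spec.alpha today, which is the design the talk examines; the ::user, ::name, and ::email specs are hypothetical examples of mine, not from the talk:

```clojure
(require '[clojure.spec.alpha :as s])

;; Each attribute gets its own spec, independent of any aggregate.
(s/def ::name string?)
(s/def ::email string?)

;; The aggregate spec bakes optionality in: ::email is declared
;; optional for every consumer of ::user, even in a context that
;; actually requires it - the coupling the talk questions.
(s/def ::user (s/keys :req [::name] :opt [::email]))

(s/valid? ::user {::name "Rich"})                    ;=> true
(s/valid? ::user {::name "Rich" ::email "a@b.com"})  ;=> true
(s/valid? ::user {::email "a@b.com"})                ;=> false (missing ::name)
```

As I understand the direction sketched in the talk, an aggregate's shape (schema) would instead be declared once, and each calling context would separately select the keys it actually requires.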
MoltenVK is also a thing. Whatever small translation overhead it incurs is probably not that important for a text editor. And then you get a cross-platform API: not just Linux, but Windows as well, and maybe other, more niche OSes too.
MoltenVK is amazing. When I started working with it, I was expecting a lot of caveats and compromises, but it's so shockingly similar to just using Vulkan that you can easily forget there's a compatibility tool in play.
You can probably squeeze a bit of optimization out of using Metal directly, but I think it's a more than viable approach to start with Vulkan/MoltenVK as a target and add a Metal branch to the renderer when capacity allows (although you might never feel the need).
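To make "compatibility tool in play" concrete: about the only MoltenVK-specific step I know of is opting into portability enumeration when creating the Vulkan instance, since MoltenVK is exposed as a non-conformant "portability" implementation (Vulkan loaders from 1.3.216 on require the opt-in). A minimal sketch in C, assuming the LunarG Vulkan SDK with MoltenVK is installed:

```c
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void) {
    /* MoltenVK is a "portability" implementation, so recent loaders
       hide it unless the application explicitly opts in. */
    const char *extensions[] = {
        VK_KHR_PORTABILITY_ENUMERATION_EXTENSION_NAME,
    };

    VkApplicationInfo app = {
        .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .apiVersion = VK_API_VERSION_1_1,
    };

    VkInstanceCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .flags = VK_INSTANCE_CREATE_ENUMERATE_PORTABILITY_BIT_KHR,
        .pApplicationInfo = &app,
        .enabledExtensionCount = 1,
        .ppEnabledExtensionNames = extensions,
    };

    VkInstance instance;
    if (vkCreateInstance(&info, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "vkCreateInstance failed\n");
        return 1;
    }
    /* From here on it's ordinary cross-platform Vulkan. */
    vkDestroyInstance(instance, NULL);
    return 0;
}
```

The other macOS-specific wrinkle is enabling VK_KHR_portability_subset at device creation when the driver reports it; past those two spots, it really is the same Vulkan you'd write for Linux or Windows.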
When they do "realize" that, B2B companies end up building products that are evaluated by procurement managers but not by end users. There are lots of examples in the industry.
So I'd really rather most didn't come to that realization. And, well, we as developers do have some influence.
To be fair, meticulous users sometimes investigate bugs, write down the logical chain explaining the cause, and even offer a solution at the end (which they can't apply themselves, for lack of commit access, for instance).
The proposed solution isn't always right, of course, but it would be incorrect to say that no bug reports come with a diagnosed cause. But that's exactly where a conscientious reviewer is most needed, I believe.
I sometimes write a detailed bug report but not a PR when there are different ways to address the problem (and all of them look bad to me), or when the fix could introduce new problems. But I would expect an LLM to ignore the tradeoffs and choose an option that is not necessarily the best, for the same reason I hesitate - lack of understanding of the specific project.
Speaking of text editors and tools like that, you can often avoid having tests (or postpone adding them for a long time) if the logic is on the main execution path, meaning you exercise it every time you run the program, and whatever failures do happen are reasonably easy to pinpoint (i.e. the program shows error backtraces or otherwise surfaces problems).
This is from my experience hacking on Emacs, naturally.
At the same time, projects that you ship for an employer or a client are more critical to check for correctness before deploying, and running and checking them manually on the regular is often more work than writing at least one "happy path" integration test for the main scenario (or several).