I say this as someone who has been cautioning about Microsoft's ownership of GitHub for years now... but the Zig community has been high drama lately. I thought the Rust community had done themselves a disservice with their high tolerance of drama, but lately Zig seems to me to be more drama than even Rust.
I was saddened to see how they ganged up to bully the author of the Zig book. The book author, as far as I could tell, seems like a possibly immature teenager. But to have a whole community gang up on you with pitchforks because they have a suspicion you might use AI... that was gross to watch.
I was already turned off by the constant Zig spam approach to marketing. But now that we're getting pitchfork mobs and ranty anti-AI diatribes it just seems like a community sustaining itself on negative energy. I think they can possibly still turn it around but it might involve cleaning house or instituting better rules for contributors.
What makes you say that? Couldn’t it be an immature adult?
> because they have a suspicion you might use AI
Was that the reason? From what I remember (which could definitely be incomplete information) the complaint was that they were clearly using AI while claiming no AI had been used, stole code from another project while claiming it was their own, refused to add credit when a PR for that was made, tried to claim a namespace on open-vsx…
At a certain point, that starts to look outright malicious. It’s one thing to not know “the rules” but be willing to fix your mistakes when they are pointed out. It’s an entirely different thing to lie, obfuscate, and double down on bad attitude.
I’m a Zig outsider. I gathered the context from reading the conversation around it, most of it posted to HN. Which is why I also pointed out I may have incomplete information.
If one looks past the immediate surface, which is a prerequisite to form an informed opinion, Zigbook is the one who clearly looks bad. The website is no longer up, even, now showing a DMCA notice.
The way these sorts of things look to outsiders depends on the set of facts that are presented to those outsiders.
Choosing to focus on the existence of drama and bullying without delving into the underlying reason why there was such a negative reaction in the first place is kind of part and parcel to that.
At best it's the removal of context necessary to understand the dynamics at play, at worst it's a lie of omission.
The claims of AI use were unsubstantiated and pure conjecture, which was pointed out by people who understand language, including me. Now it appears that the community has used an MIT attribution violation to make the Zigbook author a victim of DMCA abuse.
That doesn't look great to me. It doesn't look like a community I would encourage others to participate in.
> tried to claim a namespace on open-vsx
It seems reasonable for the zigbook namespace to belong to the zigbook author. That's generally how the namespaces work, right? https://github.com/search?q=repo%3Aeclipse%2Fopenvsx+namespa... https://github.com/eclipse/openvsx/wiki/Namespace-Access. IMO, this is up there with the "but they were interested in crypto!" argument. The zigbook author was doing normal software engineer stuff, but somehow the community tries to twist it into something nefarious. The nefariousness is never stated because it's obviously absurd, but there's a clear attempt to imply wrongdoing. Unfortunately that just makes the community look as if they're trying hard to prosecute an innocent person in the court of public opinion.
> At a certain point, that starts to look outright malicious.
Malicious means "having the nature of or resulting from malice; deliberately harmful; spiteful". The Zig community looks malicious in this instance to me. Like you, I don't have complete information. But from the information I have the community response looked malicious, punitive, harassing and arguably defamatory. I don't think I've ever seen anything like it in any open source community.
Again, prior to the MIT attribution claim there was no evidence the author of Zigbook had done anything at all wrong. Among other things, there was no evidence they had lied about the use of AI. Malicious and erroneous accusations of AI use happen frequently these days, including here on HN.
Judging by the strength of the reaction, the flimsiness of the claims and the willingness to abuse legal force against the zigbook author, my hunch is that there is some other reason zigbook was controversial that isn't yet publicly known. Given the timing it possibly has to do with Anthropic's acquisition of Bun.
> It seems reasonable for the zigbook namespace to belong to the zigbook author. That's generally how the namespaces work right?
Yes. Bad actors try to give themselves legitimacy by acquiring as many domains and namespaces as they can, as quickly as possible, with as little work as possible. The number of domains they bought raised flags for me.
> IMO, this up there with the "but they were interested in crypto!" argument.
No idea what you’re talking about. Was the Zigbook author interested in cryptocurrency and criticised for it?
> The nefariousness is never stated because it's obviously absurd, but there's the clear attempt to imply wrongdoing.
That’s not true. It was stated repeatedly and explicitly.
Them stealing code, claiming it as their own, refusing to give attribution and editing third-party comments to make it seem the author is saying they are “autistic and sperging” is OK with you?
You really see nothing wrong with that and think criticising such behaviour is flimsy and absurd?
> I don't think I've ever seen anything like it in any open source community.
I’m certainly not excusing bad behaviour, but this wouldn’t even fall into the top 100 toxic behaviours in open-source. Plenty of examples online and submitted to HN over the years.
> Malicious and erroneous accusations of AI use happen frequently these days, including here on HN.
I know. I’m constantly arguing against it especially when I see someone using the em-dash as the sole argument. I initially pushed back against the flimsy claims in the Zigbook submission, but quickly the evidence started mounting and I retracted it.
> Given the timing it possibly has to do with Anthropic's acquisition of Bun.
I don’t buy it. The announcement of the acquisition happened after.
I think if you take a step back and try to fight against confirmation bias you'll see that the arguments you're making are very weak.
You are also moving the goal posts. You started with it was sketchy to claim a namespace now you're moving to it's sketchy to own domains. Of course people are going to buy variants on their domains.
This is easily in the top 5 most toxic moments in open source, and off the top of my head seems like #1. For all you know this is some kid in a country with a terrible job market trying to create a resource for the community and get their name out there. And the Zig community tried to ruin his life because they whipped themselves into a frenzy and convinced themselves there were secret signs that an AI might have been used at some point.
I've never seen an open source community gang up like that to bully someone based on absolutely no evidence of any wrongdoing except forgetting to include an attribution for 22 lines of code. That's the sort of issue that happens all the time in open source, and this is the first time I've seen it used to try to really hurt someone and make them personally suffer. The intentional cruelty, and the group of stronger people deliberately picking on a weaker person, is what makes it far worse to me than the many other instances in open source of people behaving impolitely.
This is an in-group telling outsiders they're not welcome and, not only that, if we don't like you we'll hurt you.
And yes there have been repeated mentions of their interest in crypto, including in this thread.
> You are also moving the goal posts. You started with it was sketchy to claim a namespace now you're moving to it's sketchy to own domains.
Please don’t distort my words. That is a bad faith argument. I never claimed it was “sketchy to claim a namespace”, I listed the grievances other people made. That’s what “From what I remember (…) the complaint was” means. When I mentioned the domains, that was something which looked fishy to me. There’s no incongruence or goal post moving there. Please argue in good faith.
> For all you know this is some kid in a country with a terrible job market trying to create a resource for the community and get their name out there.
And for all you know, it’s not. Heck, for all I know it could be you. Either way it doesn’t excuse the bad behaviour, which is plenty and documented. All you have in defence is speculation which even if true wouldn’t justify anything.
You may not have seen this as I added the context after posting, so I’ll repeat it here:
> Them stealing code, claiming it as their own, refusing to give attribution and editing third-party comments to make it seem the author is saying they are “autistic and sperging” is OK with you?
> You really see nothing wrong with that and think criticising such behaviour is flimsy and absurd?
Please answer that part. Is that OK with you? Do you think that is fine and excusable? Do you think that’s a prime example of someone “trying to create a resource for the community”? Is that not toxic behaviour?
Criticise the Zig community all you want, but pay attention to the person you’re so fervently defending too.
> I was saddened to see how they ganged up to bully the author of the Zig book. The book author, as far as I could tell, seems like a possibly immature teenager. But to have a whole community gang up on you with pitchforks because they have a suspicion you might use AI... that was gross to watch.
Your assumption is woefully incorrect. People were annoyed because the site he released, which was mostly written by AI, was explicitly and repeatedly claimed to be AI-free. But annoyance isn't why he was met with the condemnation he received.
In addition to the repeated lies, there's this account's long history of typosquatting various groups (many, many crypto projects, the number of cursor/getcursor accounts), the license violation and copying of code without credit from an existing community group (one with a reputation for expending a lot of effort just to help other Zig users), and the abusive personal attack made by editing the PR that asked for nothing but credit for the source of the code he tried to steal. All the while asking for donations for the work he copied from others.
All of that is punctuated by the fact that he seems to have plans to typosquat Zig users, given he controls the `zigglang` account on GitHub. None of this can reasonably be considered just a simple mistake on a bad day. This is premeditated malicious behavior from someone looking to leech off the work of other people.
People are mad because the guy is a selfish asshole, with a clear history of copying from others, being directly abusive, and a demonstrated intent to impersonate the core ziglang team/org... not because he dared to use AI.
I do think that it was weird to focus on the AI aspect so much. AI is going to pollute everything going forward whether you like it or not. And honestly, who cares? Either it is a good resource for learning or it's not. You have to decide that for yourself, and not based on whether AI helped write it.
However I think some of the critique was because he stole the code for the interactive editor and claimed he made it himself, which of course you shouldn’t do.
You can correct me if I'm wrong, but I believe the actual claim was that Zigbook had not complied with the MIT license's attribution clause for code someone believed was copied. MIT only requires attribution for copies of "substantial portions" of code, and the code copied was 22 lines.
Does that count as substantial? I'm not sure because I'm not a lawyer, but this was really an issue about definitions in an attribution clause over less code than people regularly copy from stack overflow without a second thought. By the time this accusation was made, the Zigbook author was already under attack from the community which put them in a defensive posture.
Now, just to be clear, I think the book author behaved poorly in response. But the internet is full of young software engineers who would behave poorly if they wrote a book for a community and the community turned around and vilified them for it. I try not to judge individuals by the way they behave on their worst days. But I do think something like a community has a behavior and culture of its own and that does need to be guided with intention.
> You can correct me if I'm wrong, but I believe the actual claim was that Zigbook had not complied with the MIT license's attribution clause for code someone believed was copied. MIT only requires attribution for copies of "substantial portions" of code, and the code copied was 22 lines.
Without including proper credit, it is classic infringement. I wouldn't personally call copyright infringement "theft", though.
Imagine for a moment, the generosity of the MIT license: 'you can pretty much do anything you want with this code, I gift it to the world, all you have to do is give proper credit'. And so you read that, and take and take and take, and can't even give credit.
> Now, just to be clear, I think the book author behaved poorly in response
Precisely: maybe it was just a mistake? So, the author politely and professionally asks, not for the infringer to stop using the author's code, but just to give proper credit. And hey, here's a PR, so doing the right thing just requires an approval!
The infringer's response to the offer of help seemed to confirm that this was not a mistake, but rather someone acting in bad faith. IMO, people should learn early on in their life to say "I was wrong, I'm sorry, I'll make it right, it won't happen again". Say that when you're wrong, and the respect floods in.
> By the time this accusation was made, the Zigbook author was already under attack
This is not quite accurate, from my recollection of events (which could be mistaken!): the community didn't even know about it until after the author respectfully, directly contacted the infringer with an offer to help, and the infringer responded with hostility and what looked like a case of Oppositional Defiant Disorder.
> I do think that it was weird to focus on the AI aspect so much. AI is going to pollute everything going forward whether you like it or not.
The bigger issue is that they claimed no AI was used. That's an outright lie, which makes you wonder whether you should trust anything else about it.
> And honestly who cares, either it is a good ressource for learning or it’s not. You have to decide that for yourself and not based on whether AI helped writing it.
You have no way of knowing if something is a good resource for learning until you invest your time into it. If it turns out it’s not a good resource, your time was wasted. Worse, you may have learned wrong ideas you now have to unlearn. If something was generated with an LLM, you have zero idea which parts are wrong or right.
I agree with you. It is shitty behavior to say it is not AI written when it clearly is.
But I also think we at this point should just assume that everything is partially written using AI.
For your last point, I think this was also a problem before LLMs. It has of course become easier to fake some kind of ethos in your writing, but it is also becoming easier to spot AI slop when you know what to look for, right?
> I agree with you. It is shitty behavior to say it is not AI written when it clearly is.
> But I also think we at this point should just assume that everything is partially written using AI.
Using "but" here implies your second line is a partial refutation of the first. No one would have been angry if he'd posted it without clearly lying. Using AI isn't what pissed anyone off; being directly lied to is (presumably the lie was to get around the strict "made by humans" rules across all the various Zig communities). Then there were the abusive PR edits attacking someone, which seem to have gotten him banned. And his history of typosquatting: various crypto surfaces, cursor, and the zigglang typosquatting account. People are mad because the guy is a selfish asshole, not because he dared to use AI.
Nothing I've written has been assisted by AI in any way, and I know a number of people who do and demand the same. I don't think it's a reasonable default assumption.
You're assuming they are a teenager but you don't know. They used code without attribution and when asked to do so, they edited the comment and mocked the requestor. And you're calling the zig community the bully? They lied about not using AI. This kind of dishonesty does not need to be tolerated.
Disservice? Rust is taking over the world while still having basically nothing to show for it (Servo, the project Rust was created for, is behind Ladybird of all things). Every clueless developer and their dog thinks Rust is like super safe and great, with very little empirical evidence still, after 19 years of the language's existence.
Zig people want Zig to "win". They are appearing on Hacker News almost every day now, and for that purpose this kind of thing matters more than the language's merits themselves. I believe the language has a good share of merits, far more than Rust, but it's too early and not battle-tested enough to get so much attention.
FWIW, all of those links compare Rust to languages created before 1980, and are all projects largely and unusually independent of the crates ecosystem and where dynamic linking does not matter. If you're going to use a modern language anyway, you should do due diligence and compare it with something like Swift as the ladybird team is doing right now, or even a research language like Koka. There is a huge lack of evidence for Rust vs other modern languages and we should investigate that before we lock ourselves into yet another language that eventually becomes widely believed to suck.
Microsoft isn't going to abandon C#, it's just using the right tool for the right job. While there are certainly cases where it is justified to go lower level and closer to the metal, writing everything in Rust would be just as dumb as writing everything in C# or god forbid, JS.
Big Brother is a reference to George Orwell's critique of Communism in Nineteen Eighty-Four.
Qwen is a video model trained by a Communist government, or technically by a company with very close ties to the Chinese government. The Chinese government also has laws requiring AI be used to further the political goals of China in particular and authoritarian socialism in general.
In the light of all this, I think it's reasonable to conclude that this technology will be used for Big Brother type surveillance and quite possible that it was created explicitly for that purpose.
Just nitpicking here, but 1984 is a critique of totalitarianism. The only references to systems of government in the book refer to "the German Nazis and the Russian Communists".
Orwell was a democratic socialist. He was opposed to totalitarian politics, not communism per se.
It's true that it's about totalitarianism to some extent. But we have Orwell's actual words that it's chiefly about communism:
> [Nineteen Eighty-Four] was based chiefly on communism, because that is the dominant form of totalitarianism, but I was trying chiefly to imagine what communism would be like if it were firmly rooted in the English speaking countries, and was no longer a mere extension of the Russian Foreign Office.
And of course Animal Farm is only about communism (as opposed to communism + fascism). And the lesser known Homage to Catalonia depicts the communist suppression of other socialist groups.
By all this I just mean to say when you're reading Nineteen Eighty-Four what he's describing is barely a fictionalization of what was already going on in the Soviet Union. There's just not a lot in the book that is specifically Nazi or Fascist.
I don't have any opinion on whether he thought there were non-totalitarian forms of communism.
I think that Orwell understood his own people much more than Russians, so it might be useful, while reading him, to take a look at the mirror as well..
The writeup makes it sound like an acquihire, especially the "what changes" part.
ChatGPT is feeling the pressure of Gemini [0]. So it's a bit strange for Anthropic to be focusing hard on its javascript game. Perhaps they see that as part of their advantage right now.
It's not just less literate, it's also people who feel the need to be amateur prosecutors.
It's the same thing as judging people who wear their hair too long, or wear pajamas on the plane, or who wear pants that are too baggy, or who have children out of wedlock, etc. Some people are deeply convinced that society is on the decline and that they have a mission to ensure everyone else stays in line.
Google played a role in popularizing the microservice approach.
When I was at Google, a microservice would often be worked on with teams of 10-30 people and take a few years to implement. A small team of 4-5 people could get a service started, but it would often take additional headcount to productionize the service and go to market.
I have a feeling people overestimate how small microservices are and underestimate how big monorepos are. About nine times out of ten, when I see something called a monorepo, it's for a single project as opposed to a repo that spans multiple projects. I think the same is true of microservices: many things that Amazon or Google consider microservices might be considered monoliths by the outside world.
Kubernetes is a good example of a microservice architecture. It was designed so that each microservice works with the others without their dependencies being tightly coupled.
For example, the API server only reads and writes resources to etcd. A separate microservice called the scheduler does the actual assignment of pods to nodes by watching for changes in the resource store against available nodes. And yet a different microservice that lives on each node accepts the assignment and boots up (or shuts down) the pods assigned to its node. It is called the kubelet. The API server does none of that.
You can run the kubelet all on its own, or even replace it to change part of the architecture. Someone was building a kubelet that uses systemd instead of Docker, and Fly.io (who seem to hate Kubernetes) wrote a kubelet that could stand things up using their edge infrastructure.
The API server also does some validation, but it allows other microservices to insert themselves into the validation chain through pod admission webhooks.
Other examples: deployment controllers, replicaset controllers, horizontal pod autoscalers, and cluster autoscalers all work independently of each other yet coordinate to respond to changing circumstances. Operators are microservices that manage a specific application component, such as Redis, RabbitMQ, PostgreSQL, Tailscale, etc.
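The controller pattern those components share can be sketched in miniature. This is purely illustrative Python, not real Kubernetes code (all names here are invented for the example): each controller knows nothing about the others and coordinates only through a shared store, the way real controllers coordinate only through the API server and etcd.

```python
# Illustrative sketch of the Kubernetes controller pattern: independent
# reconcilers that share nothing but the resource store.
from dataclasses import dataclass, field

@dataclass
class Store:
    """Stand-in for etcd behind the API server: the only shared state."""
    desired_replicas: int = 0
    pods: list = field(default_factory=list)

def replicaset_controller(store: Store) -> None:
    """Create or delete pods until the actual count matches the desired count.
    Knows nothing about scheduling."""
    while len(store.pods) < store.desired_replicas:
        store.pods.append({"name": f"pod-{len(store.pods)}", "node": None})
    while len(store.pods) > store.desired_replicas:
        store.pods.pop()

def scheduler(store: Store, nodes: list) -> None:
    """Assign any unscheduled pod to a node. Knows nothing about replica counts."""
    for i, pod in enumerate(store.pods):
        if pod["node"] is None:
            pod["node"] = nodes[i % len(nodes)]

# One reconcile pass: the controllers coordinate only through the store.
store = Store(desired_replicas=3)
replicaset_controller(store)
scheduler(store, nodes=["node-a", "node-b"])
print([p["node"] for p in store.pods])  # → ['node-a', 'node-b', 'node-a']
```

Because each reconciler only compares desired state against the store, you can replace or extend any one of them (as with the alternative kubelets mentioned above) without touching the rest.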
One of the big benefits of this is that Kubernetes becomes very extensible. Third-party vendors can write custom microservices to integrate their platforms (for example, storage interfaces for GCP, AWS, Azure, Ceph, etc.). An organization implementing Kubernetes can tailor it to fit its needs, whether that is something minimal or something operating in highly regulated markets.
Ironically, Kubernetes is typically seen and understood by many as a monolith. Kubernetes, and the domain it was designed for, are complex, but incorrectly understanding Kubernetes as a monolith creates a lot of confusion for people working with it.
That's the good old two pizza team service oriented architecture that Amazon is known for. Microservices are much smaller than that. At current job I think we have slightly more microservices than engineers on the team.
> At current job I think we have slightly more microservices than engineers on the team.
You are free to do that, but that's a very specific take on microservices that is at odds with the wider industry. As I said above, what I was describing is what Google referred to internally as microservices. Microservices are not smaller than that as a matter of definition, but you can choose to make them extra tiny if you wish to.
If you look at what others say about microservices, it's consistent with what I'm saying.
For example, Wikipedia gives as a dichotomy: "Service-oriented architecture can be implemented with web services or Microservices." By that definition every service based architecture that isn't built on web services is built on microservices.
Google Cloud lists some examples:
> Many e-commerce platforms use microservices to manage different aspects of their operations, such as product catalog, shopping cart, order processing, and customer accounts.
Each of these microservices is a heavy lift. It takes a full team to implement a shopping cart correctly, or customer accounts. In fact each of these has multiple businesses offering SaaS solutions for that particular problem. What I hear you saying is that if your team were, for example, working on a shopping cart, they might break the shopping cart into smaller services. That's okay, but that's not in any way required by the definition of microservices.
> Model services around the business domain. Use DDD to identify bounded contexts and define clear service boundaries. Avoid creating overly granular services, which can increase complexity and reduce performance.
That was when he had the legal expertise of the EFF to help him make his case. Later he decided to represent himself in court and failed:
> This time, he chose to represent himself, although he had no formal legal training. On October 15, 2003, almost nine years after Bernstein first brought the case, the judge dismissed it....
> Later he decided to represent himself in court and failed
To be more specific, the government broke out their get out of court free card and claimed they weren't threatening to prosecute him even though they created a rule he was intending to violate. It's a dirty trick the government uses when they're afraid you're going to win so they can get the case dismissed without the court making a ruling.
D. J. Bernstein is very well respected and for very good reason. And I don't have firsthand knowledge of the background here, but the blog posts about the incident have been written in a kind of weird voice that make me feel like I'm reading about the US Government suppressing evidence of Bigfoot or something.
Stuff like this
> Wow, look at that: "due process".... Could it possibly be that the people writing the law were thinking through how standardization processes could be abused?"
is both accusing the other party of bad faith and also heavily using sarcasm, which is a sort of performative bad faith.
Sarcasm can be really effective when used well. But when a post is dripping with sarcasm and accusing others of bad faith it comes off as hiding a weak position behind contempt. I don't know if this is just how DJB writes, or if he's adopting this voice because he thinks it's what the internet wants to see right now.
Personally, I would prefer a style where he says only what he means without irony and expresses his feelings directly. If showing contempt is essential to the piece, then the Linus Torvalds style of explicit theatrical contempt is probably preferable, at least to me.
I understand others may feel differently. The style just gives me crackpot vibes, and that may color reception of the blog posts among people who don't know DJB's reputation.
ECC is well understood and has not been broken over many years.
ML-KEM is new, and hasn't had the same scrutiny as ECC. It's possible that the NSA already knows how to break this, and has chosen not to tell us, and NIST plays the useful idiot.
NIST has played the useful idiot before, when it promoted Dual_EC_DRBG, and the US government paid RSA to make it the default CSPRNG in their crypto libraries for everyone else... but eventually word got out that it's almost certainly an NSA NOBUS special, and everyone started disabling it.
Knowing all that, and planning for a future where quantum computers might defeat ECC -- it's not defeated yet, and nobody knows when in the future that might happen... would you choose:
Option A): encrypt key exchange with ECC and the new unproven algorithm
Option B): throw out ECC and just use the new unproven algorithm
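Option A's hybrid construction can be sketched roughly as follows. This is an illustrative Python sketch, not a real implementation: the two shared secrets are placeholder byte strings standing in for the outputs of an X25519 exchange and an ML-KEM encapsulation, and the combiner is a simple HKDF-style construction over their concatenation. The point of the hybrid is that an attacker must break both primitives, because the session key is derived from both secrets.

```python
# Hedged sketch of a hybrid key-exchange combiner. Neither X25519 nor
# ML-KEM is implemented here; only the combining step is shown.
import hashlib
import hmac

def combine_secrets(ecc_secret: bytes, pq_secret: bytes, info: bytes) -> bytes:
    """HKDF-style extract-then-expand over the concatenated secrets."""
    # Extract: condense both inputs into one pseudorandom key.
    prk = hmac.new(b"hybrid-kex", ecc_secret + pq_secret, hashlib.sha256).digest()
    # Expand: derive a context-bound 32-byte session key.
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()

# Placeholder shared secrets, as if produced by each key exchange:
ecc_ss = hashlib.sha256(b"placeholder x25519 shared secret").digest()
pq_ss = hashlib.sha256(b"placeholder ml-kem shared secret").digest()

session_key = combine_secrets(ecc_ss, pq_ss, b"session context")
# Recovering only one input (say, pq_ss via a future ML-KEM break) still
# leaves the attacker without ecc_ss, so session_key stays unpredictable.
```

Real protocols (e.g. the X25519+ML-KEM hybrids deployed in TLS) use more carefully specified combiners, but the security argument is the same: the derived key is only as weak as the stronger of the two components.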
NIST tells you option B is for the best. NIST told you to use Dual_EC_DRBG. W3C adopted EME at the behest of Microsoft, Google and Netflix. Microsoft told you OOXML is a valid international standard you should use instead of OpenDocument (and it just so happens that only one piece of software, made by Microsoft, correctly reads and writes OOXML). So it goes on. Standards organisations are very easily corruptible when their members are allowed to have conflicts of interest and to politick and rules-lawyer the organisation into adopting their pet standards.
> Standards organisations are very easily corruptable when its members are allowed to have conflicts of interest and politick and rules-lawyer the organisation into adopting their pet standards.
FWIW, in my experience on standardization committees, the worst example I've seen of rules-lawyering to drive standards changes is... what DJB's doing right now. There's a couple of other egregious examples I can think of, where people advocating against controversial features go in full rules-lawyer mode to (unsuccessfully) get the feature pulled. I've never actually seen any controversial feature make it into a standard because of rules-lawyering.
What exactly are you calling "rules-lawyering"? Is citing rules and pointing out their blatant violation "rules-lawyering"? If so, can you explain why it is better to avoid this, and what should be done instead?
As an outsider I'd understand it differently: reading rules and pointing out their lack of violation (perhaps in letter), when people feel like you violated it (perhaps in spirit), is what would be rules-lawyering. You're agreeing on what the written rules are, but interpreting actions as following vs. violating them.
That's quite different from an accusation of rules violation followed by silence or distortions or outright lies.
If someone is pointing out that you're violating the rules and you're lying or staying silent or distorting the facts, you simply don't get to dismiss or smear them with a label like "rules-lawyer". For rules to be followed, people have to be able to enforce them. Otherwise it's just theater.
Thank you, that seems to be the whole ball game for me right there. I understood the sarcastic tone as kind of exasperation, but it means something in the context of an extremely concerning attempt to ram through a questionable algorithm that is not well understood and risks a version of an NSA backdoor, and the only real protection would be integrity of standards adoptions processes like this one. You've really got to stick with the substance over the tone to be able to follow the ball here. Everyone was losing their minds over GDPR introducing a potential back door to encrypted chat apps that security agencies could access. This goes to the exact same category of concern, and as you note it has precedent!
So yeah, the NSA potentially sneaking a backdoor into an approved standard is pretty outrageous, worth objecting to in the strongest terms, and when that risk is present it should be subjected to the highest conceivable standard of scrutiny.
In fact, I found this to be the strongest point in the article: there's any number of alternatives that might (1) prove easier to implement, (2) prove more resilient to future attacks, or (3) turn out to be the most efficient.
Just because you want to do something in the future doesn't mean it needs to be ML-KEM specifically, and the idea of throwing out ECC is almost completely inexplicable unless you're the NSA and you can't break it and you're trying to propose a new standard that doesn't include it.
I understand the cryptography and I agree with his analysis of the cryptographic situation.
What I don't understand is why -- assuming he thinks this is important -- he's chosen to write the bits about the standardization process in a way that predisposes readers against his case?
LWE cryptography is probably better understood now than ECDH was in 2005, when Bernstein published Curve25519, but I think you'll have a hard time finding where Bernstein recommended hybrid RSA/ECDH key exchanges.
Sure! First, while I’m in no position to judge cryptographic algorithms, the success of ChaCha and Curve25519 speaks for itself. More prosaically, Patricia/crit-bit trees and his other tools are the right thing, and foresighted. He’s not just smart, but also prolific.
However, he’s left a wake of combative controversy his entire career, of the “crackpot” type the parent comment notes, and at some point it’d be worth his asking, AITA? Second, his unconditional support of Jacob Appelbaum has been bonkers. He’s obviously smart and uncompromising but, despite having been in the right on some issues, his scorched earth approach/lack of judgment seems to have turned his paranoia about everyone being out to get him into a self-fulfilling prophecy.
Well that's sure an argument. You get that I'm not the one who accused him, right? What you think of me has literally nothing to do with the claims Henry de Valence made. My guess is that these two documents (or maybe just the one you posted) are literally the first time you ever heard that name. Am I right?
>>> There is a committee at TU/e charged by law with ensuring proper grading, and I have recently learned that claims by Mr. de Valence related to this topic have been formally investigated and rejected by that committee. Now that Mr. de Valence has issued public accusations, it would seem that a public resolution will be necessary, starting with Mr. de Valence making clear what exactly his accusations are.
He also points out that de Valence is himself likely guilty of academic misconduct based on his own admissions.
We have two people making contradictory statements. The only ways to resolve it are facts (which were presumably reviewed by the committee) and credibility. You clearly think de Valence is more credible because he’s one of your feline friends, and because your other feline friends accused Appelbaum of sexual crimes, and you hate that Bernstein worked with Appelbaum because in your mind a sexual abuse accusation is as good as guilt of sexual abuse.
de Valence chose the same credibility-destroying path as Lovecruft, Honeywell, et al. did: make serious accusations in the public sphere instead of letting the public institutions charged with addressing these types of accusations do their job. Wise people realize that you can’t be criminally charged for publishing a smear campaign online, but you can be criminally charged for filing a false police report, and evaluate accordingly.
The same credibility-destroying path of questioning the conduct of your hero, I do get what you're saying, we don't have to belabor this. If you had a real argument you'd have presented it by now.
One of the current approaches is to turn communities against solar and wind projects on the grounds that they're racist, disturb plant life, etc. This weaponizes advocates of environmental justice, an important concern in its own right, against building renewable energy.