I find this very disingenuous because the person you replied to was talking only about Iran, stating that Iran is, in their opinion, a theocracy. They never mentioned Israel at all, let alone stated that Israel isn't a theocracy.
So asking this question, this way, is quite strange in my opinion.
They said "theocracy with nukes screams nuke them first". If this is true - and it is their stated position - then, since Israel has nukes, either Israel is not a theocracy or it is begging to be nuked. The commenter has, I think reasonably, concluded that the other commenter doesn't think Israel is begging to be nuked, and is therefore addressing the apparent contradiction. It seems entirely genuine.
I think you and I read a different article. For example, about treating JSON keys without case sensitivity:
> In our opinion, this is the most critical pitfall of Go’s JSON parser because it differs from the default parsers for JavaScript, Python, Rust, Ruby, Java, and all other parsers we tested.
It would be kind of difficult to argue that this is not about Go.
Don't get me wrong, I love Go just as much as the next person, but this article definitely lays bare some issues with serialization specific to Go.
I think it's just kinda dumb parsing. JSON is an extremely simple spec; most of the issues the Go JSON parser has stem from specific choices in the Go implementation, not from JSON itself. The fact that it allows case-insensitive key matching is just insane. Likewise, that it parses invalid XML documents (with garbage) into valid structs without returning an error is very much a problem with the parser and not with XML.
> The fact that it allows case-insensitive key matching is just insane.
It's probably a side effect of what is IMO another bad design of that language: letter casing determining field visibility, instead of using a keyword or a sigil. If your field has to be named "User" to be public, and the corresponding entry in the JSON has all-lowercase "user" as the key (probably because the JSON was defined first, and most languages have "field names start with lowercase" as part of their naming conventions), you have to either ignore case when matching, or manually map every field. They probably wanted to be "intuitive" and not require manual mapping.
> If your field has to be named "User" to be public, and the corresponding entry in the JSON has all-lowercase "user" as the key
then you specify the key to be "user"? Isn't that the point of the ability to remap names? Except you can't, because you don't have a choice whether or not your data is deserialised with case sensitivity enabled.
I've written plenty of Rust code to turn camelCase into snake_case and "it's too much effort" has never been a problem. It's a minor bother that helps prevent real security issues like the ones listed in this article.
Even if you want to help lazy programmers, I don't think there's a good reason to confuse "User" and "uſER" by default.
I mean, one of the sillier things here is that even when you manually map with the tag `json:"user"`, it still ignores case while deserializing. It does respect the tag while serializing, though.
I imagine that one of the points of a solid protocol buffers library would be to align the types even across programming languages. E.g. explicitly force a 64-bit integer rather than "int" relying on the platform. And to have some custom "string" type which is always UTF-8 encoded in memory rather than depending on the platform-specific encoding.
(I have no idea if that is the case with protobuf, I don't have enough experience with it.)
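For what it's worth, that is roughly what protobuf does: the schema pins exact integer widths and wire encodings per field, independent of any platform's "int", and `string` fields are required to be valid UTF-8. A minimal sketch (message and field names are made up):

```proto
syntax = "proto3";

message Event {
  int64   id       = 1; // always 64-bit, regardless of platform "int"
  sint64  delta    = 2; // zigzag varint, efficient for negative values
  fixed64 checksum = 3; // always exactly 8 bytes on the wire
  string  note     = 4; // must be valid UTF-8 per the spec
}
```

What each language's generated code maps these to in memory is, as the parent comment notes, where it gets hairy.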
Again, the problem has more to do with the programming languages themselves, rather than with protobufs or parsing.
Protobuf has both signed and unsigned integers - the initial use case was C++ <-> C++ communication
Java doesn't have unsigned integers
Python has arbitrary precision integers
JavaScript traditionally only had doubles, which means it can represent integers up to 53 bit exactly. It has since added arbitrary size integers -- but that doesn't mean that the protobuf libraries actually use them
---
These aren't the only possibilities -- every language is fundamentally different
As long as a language has bytes and arrays, you can implement anything on top of them, like unsigned integers, 8-bit strings, UTF-8 strings, UCS-2, whatever you want. Sure it won't be native types, so it will probably be slower and could have an awkward memory layout, but it's possible
Granted, if a language is so gimped that it doesn't even have integers (as you mentioned with JavaScript), then that language indeed won't be able to fully support it.
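The bytes-and-arrays argument above can be made concrete with a small sketch. This is Go, which has `uint64` natively, so treat it purely as an illustration of the technique: assembling a 64-bit unsigned value from raw bytes with nothing but array indexing and shifts, which any language with bytes and arrays can emulate:

```go
package main

import "fmt"

// readU64BE assembles an unsigned 64-bit integer from 8 big-endian
// bytes using only array access and bit shifts -- no native wide
// integer type is strictly required for this, just bytes and arrays.
func readU64BE(b []byte) uint64 {
	var v uint64
	for i := 0; i < 8; i++ {
		v = v<<8 | uint64(b[i])
	}
	return v
}

func main() {
	fmt.Println(readU64BE([]byte{0, 0, 0, 0, 0, 0, 1, 0})) // 256
}
```

As the parent says, the result won't be a native type in languages that lack one, so it will be slower and awkward to work with - but it's possible.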
Unfortunately that doesn't solve the problem -- it only pushes it around
I recommend writing a protobuf generator for your favorite language. The less it looks like C++, the more hard decisions you'll have to make
If you try your approach, you'll feel the "tax" when interacting with idiomatic code, and then likely make the opposite decision
---
Re: "so gimped" --> this tends to be what protobuf API design discussions are like. Users of certain languages can't imagine the viewpoints of users of other languages
e.g. is unsigned vs. signed the way the world is? Or an implementation detail.
And it's a problem to be MORE expressive than C/C++ too -- i.e. the protobuf data model also causes friction for idiomatic Python code
Even within C/C++, there is more than one dialect -- C++ 03 versus C++ 11 with smart pointers (and probably more in the future). These styles correspond to the protobuf v1 and protobuf v2 APIs
(I used both protobuf v1 and protobuf v2 for many years, and did a design review for the protobuf v3 Python API)
In other words, protobufs aren't magic; they're another form of parsing, combined with code generation, that solves some technical problems and not others. They also don't resolve arguments about parsing and serialization!
If you use the same struct in both an HTTP API and an ORM, you're Doing It Wrong in my opinion. These should be completely separated. Exactly to prevent accidental leaking or injection of data.
I tend to disagree with that, also. :) Even within one codebase there's immense value in having separate structs/classes per "layer" or domain. E.g. a different set of structs for the database layer than for the "business layer" (or whatever your application's internal setup is).
When that boundary is moved to outside the application, so an HTTP API between microservices, I feel even more strongly (though indeed still not as strongly as in what you call a "public API").
E.g. I have seen plenty of times a situation where a bunch of applications were managed within one team, the team split up and now this "internal API" has become an API between teams, suddenly making it "public" (when viewed from the teams perspective).
That's the whole point of the first item in the article, and of the original comment you were replying to. In Go (and some other languages) that serialization is implicit and automatic, so you need to write code to NOT serialize fields out. That leads to potential security issues where data is leaked, or where data can be injected into "hidden" fields. So since it's implicit and automatic, it's safer to, as a rule, define separate structs for the data input and map them, so that there is absolutely no chance (implicit or explicit) to leak or inject data.
Thoughts go faster than fingers. It's already hard enough to keep up with my stream of thoughts when coding, when I'm touch typing pretty fast. I can't imagine my coding experience if I had to look at the keyboard to input my thoughts into the editor. It's subjective, everyone has a different experience, but I feel I would be severely impaired.
The Telegram clients are actually open source and I think pretty solid. The Matrix protocol is open, federated, and secure, but the clients are kinda janky. I've been playing with the idea of what if someone were to fork the Telegram clients and modify them to work with Matrix. That would be a "best of both worlds" situation I think.
Famitsu (a Japanese magazine) has top ten sales of console games in Japan every week. Until the Switch 2 launch, Mario Kart 8 was there nearly every week.
Wii Sports enjoyed a similar status, as it was also bundled with a console. I don't think console bundles are necessarily a fair way to count video game sales.
Mario Kart was not originally bundled with the Switch, and the Switch has never been exclusively bundled with Mario Kart. You've always been able to buy a Switch by itself or bundled with other games - Mario Kart was not even the first bundle. It was like the 8th or something, two years after release, and has been re-issued every Black Friday since, but there have been numerous other bundles as well.
A pro scene is absolutely not a sign of a popular game. Oftentimes it's the reverse. There are so many strange externalities with a healthy pro scene that can positively destroy your general appeal. Leaving you with perhaps 10,000 really insane players, and no community outside of that.
I’ve not gathered any data to prove it, but I’ve long held a hunch that there’s something of an inverse correlation between multiplayer games’ popularity among highly competitive players and the masses.
Most people don’t want to spend large amounts of time “getting good” and don’t enjoy getting matched up against players that absolutely destroy them, but instead prefer more casual games against other players with middling skills. The thing is though, even if highly competitive games include an unranked queue intended for casuals, it ends up being filled with smurfs[0] and the like looking to smash lower skilled players, which drains the fun from the game for those players. Thinking about it that way, it’d make perfect sense if the most popular PvP games would be those that are shunned by the highly competitive - a lack of “pro” players might be considered a feature rather than a bug.
An unranked queue is often just like "well, we didn't do any game design for you on meta-progression".
Normal players would like to participate in the progression systems you design! Having a ranked queue that is uninviting to normal players due to skill, and an unranked queue that is uninviting to everyone due to progression design, but less uninviting to normal players than the ranked queue, is a pretty suboptimal result.
It's lately become a lot more popular to just secretly (or at least stealthily) put people in with bots. Marvel Snap was really successful at emulating opponents at low ranks and gradually increasing real opponent density the higher you are. Battle Royale games with 100 players per game can easily add a bunch of bots so you aren't at the bottom and can even win. I noticed Mario Kart World also has bots in most knockout matches (and I highly appreciate that it is transparent about this fact.)
There's also a ton of multi-account buying per person in Overwatch. Especially before role queue existed, it was easier to just spend 10 bucks on a new account to learn a hero than to suffer ELO hell while doing it. People are so toxic in competitive shooters, and playing at the ELO of your best heroes while on a hero whose abilities you don't even know is very, very unpleasant. I struggle to think of a person I played with who didn't have multiple accounts, some with as many as 5-10.
This is to say nothing of the rampant cheating in the game, which, if a person ever gets banned for, there is nothing stopping them from just spending $10 on a replacement account.
It’s rare for any product to have more success in later iterations than the first edition; the first is where the narrative is fresh and strong. And even when sequels are stronger, they tend to increase sales of the first season/movie/etc., because people want the whole experience.
Video games I feel like reverse this general trend, though. Unless they have a major story component (and sometimes even if they do) many games get iteratively 'better' (better for the purposes of making sales if not of making original fans happy) for various reasons: improvements to the core game loop, polish that makes the game more appealing to new audiences, and most importantly graphics.
Story-based content is what struggles with sequels because it's really hard to both capture the feeling of the original sufficiently to satisfy existing fans while also telling a new story that's interesting in its own right. Being derivative without being too derivative.
I remember in the 80s/90s when it seemed every movie sequel sucked. Just cashed in, and not really planned for from the beginning.
I don't think it's ever really been true that video game sequels sucked. Maybe Zelda 2 and to a lesser extent Mario 2 - but game developers seem to break new ground on sequels a lot. In fact I think sequels have been better than originals more often than not throughout game history.
For one thing it may just be more common for the first to not reach its full audience.
But my experience as a game developer is also that, when you start out making a new game, you probably kinda suck at making that game. Games sometimes suck for most of their development until they suddenly get good near the end.
And by the end, you get really good at making that specific game. A lot of game design has to come together to enlighten further game design decisions, and you really come to know what's fun by the end of it. Not to mention the technology you build for it!
A lot of game development is trying to find an idea that hits. When developing a new game, there are a lot of unknowns, budgets are tight, a lot of compromises are made, and often there are plenty of rough edges.
A sequel allows the same team to build on the shoulders of the first game, keeping what worked, adding features that players missed and refining those that didn’t work. It’s seen as a safer investment, with an existing fan base to leverage, and so this often leads to larger development and marketing budgets with a focus on growth.
There’s no fucking way they would have spent so much money on three WH titles and a fuckton of DLC to make an absolutely colossal RTS game if it was niche. Total War Warhammer single handedly saved WH Fantasy with how well it sold.
PC games are relatively niche compared to console gaming, for one. And RTSes tend to be heavily PC-biased because of their control schemes. Although these days you should be able to connect a keyboard and mouse to a console, since they're USB or Bluetooth anyway, using them on a couch without a desk would be awkward.