While I agree with some of it, I feel like there's a big gotcha here that isn't addressed.
Having a single wide event, emitted at the end of a request, means that if something unexpected happens in the middle (a stack overflow, a bug that throws an error bypassing your logging system, a Lambda timeout, etc.), you don't get any visibility into what happened.
You also most likely lose out on the logging frameworks your language already has, which your dependencies might be using.
I would say this is a good layer to put on top of your regular logs. Make sure you have a request/session-wide id and aggregate all those in your ClickHouse or whatever into a single "log".
The way I have solved this in my own PHP framework is by having a Logging class with the following interface:
    interface LoggerInterface {
        // Calls $this->system(LEVEL_ERROR, ...);
        public function exception(Throwable $e): void;

        // Typical system logs
        public function system(string $level, string $message, ?string $category = null, mixed ...$extra): void;

        // User-specific logs that can be seen in the user's "my history"
        public function log(string $event, int|string|null $user_id = null, ?string $category = null, ?string $message = null, mixed ...$extra): void;
    }
I also have a global exception handler, registered at application bootstrap time, that takes any exception thrown at runtime and runs $logger->exception($e);
There is obviously a bit more boilerplate to this to ensure reliability, but it works so well that I can't live without it anymore. The logs are then inserted into a wide DB table with all the fields one could ever want to examine, thanks to the variadic parameter.
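For illustration, a call and the insert side might look something like this (the column mapping and DB helper are my guesses at the shape, not the parent's actual code):

    // Hypothetical call site: named variadic args become columns in the wide table.
    $logger->system(LEVEL_WARNING, 'payment declined', 'billing',
        order_id: 41, amount_cents: 999, gateway: 'stripe');

    // Inside the Logger implementation: named variadic args arrive in $extra
    // keyed by name, ready to be merged into a row of the wide log table.
    public function system(string $level, string $message,
                           ?string $category = null, mixed ...$extra): void
    {
        $row = array_merge(
            ['level' => $level, 'message' => $message, 'category' => $category],
            $extra // e.g. ['order_id' => 41, 'amount_cents' => 999]
        );
        $this->db->insertRow('logs', $row); // hypothetical DB helper
    }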
I used to do it like that and it worked really well, but I changed the flow so that exceptions are actually part of the control flow of the app, using PHP's set_exception_handler(), set_error_handler() and register_shutdown_function().
For example, let's say a user forgot to provide a password when authenticating; then I will throw a ClientSideException(400, "need password yada yada");
That exception will bubble up to the exception handler, which logs it and outputs the proper message to the screen. Similarly, if ANY exception is thrown, regardless of where it originated, the same will happen.
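A minimal sketch of that flow (the class and handler bodies are my guess at the shape, not the parent's actual code):

    // Thrown anywhere in the app when the client sent a bad request.
    class ClientSideException extends RuntimeException {
        public function __construct(public readonly int $status, string $message) {
            parent::__construct($message);
        }
    }

    // Registered once at bootstrap; every uncaught exception lands here.
    // Assumes $logger is in scope at bootstrap time.
    set_exception_handler(function (Throwable $e) use ($logger): void {
        $logger->exception($e);
        if ($e instanceof ClientSideException) {
            http_response_code($e->status);
            echo $e->getMessage();        // safe, user-facing message
        } else {
            http_response_code(500);
            echo 'Something went wrong.'; // generic message; details stay in the logs
        }
    });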
When you embrace exceptions as control flow rather than try to avoid them, suddenly everything gets 10x easier and I end up writing much less code overall.
I love Exceptions as control flow! Thanks for the suggestion.
I too use specialized exceptions. Some have friendly messages that can be displayed to the user, like "Please fill in the password". But critical exceptions, like database connection errors, display a generic error to the user ("Oops, sorry, something went wrong on our side...") while logging the specifics for devs.
If that's an issue (visibility into middle layers), it just means your events aren't wide enough. There's no fundamental difference between log.error(data) and wide_event.attach(error, data), nor similar schemes using parameters rather than context/global-based state.
There are still use cases for one or the other strategy, but I'm not a fan of this explanation in either direction.
> If that's an issue (visibility into middle layers) it just means your events aren't wide enough.
I hate this kind of No-True-Scotsman handwaves for how a certain approach is supposed to solve my problems. "If brute-force search is not solving all your problems, it just means your EC2 servers are not beefy enough."
I gotta admit, I don't quite "get" TFA's point, and the one issue that jumped out at me while reading it and your comment is that sooner rather than later your wide events just become fat, supposedly-still-human-readable JSON dumps.
I think a machine-parseable log-line format is still better than wide events, each line hopefully correlated with a request id, though in practice I find that user id + time correlation isn't that bad either.
>> [TFA] Wide Event: A single, context-rich log event emitted per request per service. Instead of 13 log lines for one request, you emit 1 line with 50+ fields containing everything you might need to debug.
I am not convinced this is supposed to help the poor soul who has to debug an incident at 2AM. Take for example a function that has to watch out for a special kind of user (`isUserFoo`) where "special kind" is defined as a metric on five user attributes. I.e.,
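    // (Sketch of the kind of snippet meant here; names follow the comment above.)
    if (isUserFoo($u)) {
        // the log line itself surfaces that foo-ness matters at this point
        $log->info('special-casing foo user', ['user_id' => $u->id]);
        // ... foo-specific handling
    }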
Which immediately tells me that foo-ness is something I might want to pay attention to in this context.
With wide events, as I understand it, you either log the user in the wide event dump with attributes A to E (and potentially more!) or coalesce these into a boolean field `isUserFoo`. Neither of which tells me that foo-ness might be relevant in this context.
Multiply that by all the possible special cases any logging unit might have to deal with. There's bar-ness, which is also dependent on attributes A-E but with different logical connectives. There's baz-ness, which is `isUserFoo(u) XOR (217828 < u.zCount < 3141592)`. The wide event is soooo context-rich I'm drowning.
Your objection, as I understand it, is some combination of "no true Scotsman" combined with complaints about wide events themselves.
To the first point (no true Scotsman), I really don't think that applies in the slightest. The post I'm replying to said (paraphrasing) that middle-layer observability is hard with wide events and easy with logs. My counter is that the objection has nothing to do with wide events vs logs, since in both scenarios you can choose to include or omit more information with the same syntactic (and similar runtime overhead) ease. I think that's qualitatively different from other NTS arguments like TDD, in that their complaint is "I don't have enough information if I don't send it somewhere" and my objection is just "have you tried sending it somewhere?" There isn't an infinite stream of counter-arguments about holding the tool wrong; there's the very dumbest suggestion a rubber duck might possibly provide about their particular complaint, which is fully and easily compatible with wide events in every incarnation I've seen.
To the second point (wide events aren't especially useful and/or suck), I think a proper argument for them is a bit more nuanced (and I agree that they aren't a panacea and aren't without their drawbacks). I'll devote the rest of my (hopefully brief) comment to this idea.
1. Your counter-example falls prey to the same flaw as the post I responded to. If you want information then just send that information somewhere. Wide events don't stop you from gathering data you care about. If you need a requestID then it likely exists in the event already. If you need a message then _hopefully_ that's reasonably encoded in your choice of sub-field, and if it's not then you're free to tack on that sort of metadata as well.
2. Your next objection is about the wide event being so context-rich as to be a problem in its own right. I'm sympathetic to that issue, but normal logging doesn't avoid it. It takes exactly one production issue where you can't tie together related events (or else can tie them together but only via hacks which sometimes merge unrelated events with similar strings) for you to realize that completely disjoint log lines aren't exactly a safe fallback. If you have so much context-dependent complexity that a wide event is hard to interpret then linear logs are going to be a pain in the ass as well.
Mildly addressing the _actual_ pros and cons: logs and wide events are both capable of transmitting the same information. One reasonable frame of reference is viewing wide events as "pre-joined", with a side helping of compiler enforcement of the structure.
It's trivially possible to produce two log lines in unrelated parts of a code base which no possible parser can disambiguate. That's not (usually) possible when you have some data specification (your wide event) mediating the madness.
It's sometimes possible with normal logs to join on things which matter (as in your requestID example), but it's always possible with wide events since the relevant joins are executed by construction (also substantially more cheaply than a post-hoc join). Moreover, when you have to sub-sample, wide events give an easy strategy for ensuring your joins work (you sub-sample at a wide event level rather than a log-line level) -- it's not required; I've worked on systems with a "log seed" or whatever which manage that joinability in the face of sub-sampling, but it's more likely to "just work" with wide events.
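For what it's worth, the "just work" property is easy to see in a sketch: make the sampling decision a pure function of the wide event's request id, so either the whole event survives or none of it does (the helper name is mine):

    // Keep ~1% of wide events. Because the decision depends only on the
    // request id, every field of a kept event is kept together; sampling
    // individual log lines instead would drop random pieces of a request's story.
    function keepEvent(string $requestId, float $rate = 0.01): bool {
        $h = (crc32($requestId) & 0x7FFFFFFF) / 0x7FFFFFFF; // hash -> [0, 1]
        return $h < $rate;
    }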
The real argument in favor of wide events, IMO, is that they encourage returning information a caller is likely to care about at every level of the stack. You don't just get potentially slightly better logs; you're able to leverage the information in better tests and other hooks into the system in question. Parsing logs for every little one-off task sucks, and systems designed to be treated that way tend to suck and be nearly impossible to modify as desired if you actually have to interact with "logs" programmatically.
It's still just one design choice in a sea of other tradeoffs (and one I'm only half-heartedly pursuing at $WORK, since we definitely have some constraints which are solved by neither wide events nor traditional logging), BUT:
1. My argument against some random person's choice of counter-argument is perfectly sound. Nothing they said depended on wide events in the slightest, which was my core complaint, and I'm very mildly offended that anyone capable of writing something as otherwise sane and structured as your response would think otherwise.
2. Wide events do have a purpose, and your response doesn't seem to recognize that point in the design space. TFA wasn't the most enjoyable thing I've ever read, but I don't think the core ideas were that opaque, and I don't think a moment's consideration of knock-on implications would be out of the question either. I could be very wrong about the requisite background to understand the article or something, but I'm surprised to see responses of any nature which engage with irrelevant minutiae rather than the subset of core benefits TFA chose to highlight (and I'm even more surprised to see anything in favor of or against wide events, given my stated position that I care more about the faulty argument against them than whether they're good or bad).
I wonder if one might solve this by using an accumulator that merges objects as they are emitted, based on some ID (i.e. a request ID, say), and then either emits the object on normal execution or lets a global exception handler emit it on error...?
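Something like this, maybe (a rough PHP sketch; the shape is the point, not the names):

    // Accumulates fields per request id; flushed exactly once, on success or on error.
    final class WideEventAccumulator {
        /** @var array<string, array<string, mixed>> */
        private array $events = [];

        public function attach(string $requestId, array $fields): void {
            $this->events[$requestId] = array_merge(
                $this->events[$requestId] ?? ['request_id' => $requestId],
                $fields
            );
        }

        public function emit(string $requestId): void {
            if (isset($this->events[$requestId])) {
                error_log(json_encode($this->events[$requestId])); // stand-in for a real sink
                unset($this->events[$requestId]);
            }
        }
    }

    // Normal path: call emit() at the end of the request. Error path: a global
    // exception handler calls emit() too, so the partial event still gets out.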
Having logs in the format "connection X:Y accepted at Z ns for http request XXX" and then a "connection X:Y closed at Z ns for http response XXX" is rather nice when debugging issues on slow systems.
Nice discovery and writeup, let alone for a 16-year-old!
I've never heard an XSS vulnerability described as a supply-chain attack before, though; usually that term is reserved for malicious scripts in package managers or companies putting backdoors in hardware.
I think you can view it as supply chain, since a supply-chain attack is about attacking resources used to infiltrate downstream (or is it upstream? I can never keep straight which direction this is supposed to flow).
As an end user you can't really mitigate this, since the attack happens in the supply chain (Mintlify), and by the time it gets to you it is basically opaque. It's like getting a signed malicious binary: it looks good to you, and the trust model (the browser's origin model) seems to indicate all is fine (like the signing on the binary). But because someone earlier in the supply chain made a mistake, you are now at risk. It's basically moving an XSS up a level into the "supply chain".
This makes use of a vulnerability in a dependency. If they had recommended, suggested, or pushed this purposefully vulnerable code to the dependency, then waited for a downstream user (such as Discord) to pull the update and run the vulnerable code, then they would have completed a supply-chain attack.
The whole title is bait. Nobody would have heard of the dependency, so they don't even mention it; they just call it "a supply chain" and drop four big names you have heard of to make it sexy. One of them was actually involved, as far as I can tell from the post, so that one is somewhat defensible. They might as well have written in the title that they'd hacked the Pentagon, if someone in there uses X and X had this vulnerable dependency, without X or the Pentagon ever being contacted, involved, or attacked.
It does attack the supply chain. It attacks the provider of documentation. It's an attack on the documentation supply chain.
It would be like being able to provide a Windows Update link that really went to Windows Update, but instructed Windows Update to retrieve files from some other share the malicious actor controlled. It's the same thing, except rather than a binary it is documentation.
I think he might be misrepresenting it a bit, but from what I've seen, every software company I know of heavily uses agentic AI to create code (except in some highly regulated industries).
It has become a standard tool, in the same way that most developers code with an IDE, most developers use agentic AI to start a task (if not to finish it).
From the article:
"All 24 public study Montessori schools met basic Montessori criteria (SI Appendix, section 3A), but implementation varied widely. "
"The final implementation criteria for school inclusion were thus:
• At least 66% of the lead Primary classroom teachers are trained by one of the two most prominent Montessori
teacher training organizations, the Association Montessori Internationale (AMI) or the American Montessori
Society (AMS). One school was excluded on this basis.
• No more than two adults, the trained teacher and a non-teaching assistant, in the classroom on a regular basis.
No school was excluded on this basis.
• Classrooms are mixed-age, with at least 18 children ranging from 3 to 6 years old. Five schools did not mix
ages so were excluded.
• At least a 2-hour uninterrupted free choice period every day. Five schools were excluded on this basis.
• Each classroom has at least 80% of the complete set of roughly 150 Montessori Primary materials, and fewer
than 5% of the materials available to children in the classroom are not Montessori materials. No school was
excluded for failing to meet this criterion."
So it seems like the criteria for this research are fairly good.
In general, though, it's hard to tell if a school is Montessori or not. The method is not trademarked, and anyone can claim to be a Montessori school, or Montessori-inspired, etc.
There are two organizations that certify: AMI, which was founded by Maria Montessori herself and operates mostly in Europe, and AMS, an American organization founded by people inspired by the Montessori method.
AMI is stricter while AMS is more modern, but most places that identify as Montessori are neither.
I would say the best way to identify whether a school is Montessori is first to check if they have mixed-age classrooms; the standard is a 3-year span per class (so 1-3, 4-6, 7-9...).
If all the kids in a class are the same age, it's not Montessori.
Unless it has a huge memory leak that goes unfixed for years and makes it virtually unusable for anyone, it's probably not the Windows ME of tablet OSes.
If leaking creation time is a concern, can we not just fake the timestamp?
We can do so in a way that keeps most of the performance benefits: for example, start from a base time of 1970 and bump the base time intermittently, or assign random months and days to new records (or derive them from the user's id, so one user's records are temporally consistent with each other but not with other users' records).
I'm sure there might be a middle ground where most of the performance gains remain but the deanonymizing risk is greatly reduced.
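A rough sketch of the per-user variant (the UUIDv7 bit layout follows the RFC; the fake-time derivation is purely illustrative):

    // Build a UUIDv7-shaped id whose 48-bit "timestamp" is derived from the
    // user id plus a per-record counter instead of the wall clock: sortable
    // within one user's records, meaningless across users.
    function uuid7_fake_time(int $userId, int $seq): string {
        $fakeMs = ($userId * 1_000_003 + $seq) & 0xFFFFFFFFFFFF;   // 48 bits
        $hex  = str_pad(dechex($fakeMs), 12, '0', STR_PAD_LEFT);
        $rand = bin2hex(random_bytes(10));                         // 80 random bits
        $hex .= '7' . substr($rand, 0, 3);                         // version 7
        $hex .= dechex(0x8 | (hexdec($rand[3]) & 0x3))             // variant 10xx
              . substr($rand, 4, 15);
        return sprintf('%s-%s-%s-%s-%s',
            substr($hex, 0, 8), substr($hex, 8, 4), substr($hex, 12, 4),
            substr($hex, 16, 4), substr($hex, 20, 12));
    }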
Edit: encrypting the value in transit seems a simpler solution really
In that case, auto-increment ids can also be bumped from time to time, and started from a billion.
They're more performant than UUIDv7, so why would I still use UUIDs? Perhaps because they can be generated client-side, and because they make incorrect JOINs return no rows.
Unit 8200 is the premier software development track in the Israeli military.
Every Israeli tech company likely has multiple developers from Unit 8200 in it. Whether it's building e-commerce shops or making video games.
While 8200 definitely falls under the military intelligence wing, I don't think describing its people as "cyber spies" is anywhere near accurate. And unless that guy was very high-ranking, it is a stretch to imply his background is an indication that Israeli military intelligence is involved in the company.
That is not to say that the military isn't involved with the company - that might very well be true, just that someone being from Unit 8200 isn't an indication of it.
People who don't live in countries with mandatory conscription for all don't really understand: everyone is connected to the military, but it means nothing.
Judging an Israeli citizen on their IDF ties is like judging a US citizen on the fact that they went to public school.
> everyone is connected to the military but it means nothing.
No, people who live in tiny countries with mandatory conscription don't really understand that it means that their entire country is militarized. It's not surprising that fish can't see water.
> is like judging a US citizen on the fact that they went to public school.
It's exactly like that. If public school in the US trained people to kill and spy, it would be entirely safe to assume that the US was full of killer spies. For example, if you know that US public schools taught a view of world history that was distorted in particular ways, and had very little emphasis on foreign languages, it would be safe to assume that Americans have a distorted view of the world and largely don't speak foreign languages.
According to google, 87% of Americans go to a state-funded school, so yes judging an American based on the fact that they could afford to be in the top 13% and go to a public school instead is legitimate. This doesn't seem to match what you're trying to say.
You’re using the British definition of “public school” here, which is a “private school” in the US. US public schools are equivalent to UK state schools, in that both are run by the state.
It doesn't matter if it's accurate or not; such judgements are made by most people every day. Someone who was professionally trained somewhere has a higher probability of ties to that institution later on. With intelligence services this is probably even more true.
In today's political climate where people around the world see Israel judging (and sentencing, and carrying out the punishment) every Palestinian as terrorists, I think this wide brush of judging Israelis on their ties with the IDF is probably widely accepted as "only fair". When it comes to Unit 8200 the implications are even stronger.
But I don't get the US public school system reference. You have to start with a baseline and if you see a private Ivy League school on someone's CV and a random public school on someone else's I'm sure you'll probably make the obvious assumption about which one is better, even if sometimes the obvious is wrong.
An MCP server exposes tools that a model can call during a conversation, returning results according to the tool contracts. Those results can include extra metadata, such as inline HTML, that the Apps SDK uses to render rich UI components (widgets) alongside assistant messages.
If the connector is enabled, by the prompt or via a UI interaction, it calls your MCP server. They have created some meta fields your tool can respond with, one of which signals that a widget should be produced, along with a field for HTML.
In the current implementation, it creates an iframe (or a webview on native) that loads a sandboxed environment, which then gets another iframe with your HTML injected. Your HTML can include remote resources whitelisted via meta fields.
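I haven't pinned down the exact field names, but the shape is roughly a normal MCP tool result with extra metadata hung off it (all the meta keys below are placeholders, not the real SDK names):

    // Hypothetical MCP tool result, as a PHP array before JSON encoding:
    // normal content for the model, plus meta fields the host can use to
    // render a widget. Meta key names are placeholders.
    return [
        'content' => [
            ['type' => 'text', 'text' => 'Here are your results.'],
        ],
        '_meta' => [
            'example/render-widget' => true,                      // "produce a widget"
            'example/widget-html'   => '<div id="app">...</div>', // the HTML field
        ],
    ];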
Load of bull.
Every article linked in this is either wrong or mischaracterized.
Cloudflare does not facilitate phishing; it just made proxying and tunneling easier.
The breaches and bypasses mentioned are anything but: the article links to a successful mitigation of an attack as if the attacker got away with something of value.
This entire article reeks of trying to fit the evidence to an agenda.
Considering they couldn't find actual evidence of problems and had to resort to mischaracterization this is actually a great reason to use Cloudflare.
I've reported blatant phishing attacks targeting seniors dozens of times to Cloudflare (and so far it's always been Cloudflare), and never once have they replied with anything except "we could not determine this was phishing". They absolutely facilitate phishing through inaction.
Not my experience at all. We've reported hundreds if not thousands of sites and with few exceptions they have taken them down swiftly. Definitely one of the best cloud operators when it comes to this.
As recently as August 8th, I reported a phishing site that tricked seniors into installing a pre-configured Atera client (and Atera _also_ failed to respond in a reasonable time) by pretending to be an event invite. It was blatant and obvious phishing. This was the response:
---
Hello,
Cloudflare received your Phishing report regarding: ----
We are unable to process your report for the following reason(s):
We were unable to confirm phishing at the URL(s) provided.
Please be aware Cloudflare offers network service solutions including pass-through security services, a content distribution network (CDN) and registrar services. Due to the pass-through nature of our services, our IP addresses appear in WHOIS and DNS records for websites using Cloudflare. Cloudflare cannot remove material from the Internet that is hosted by others.
Please reply to this message, keeping the report identification number in the subject line intact, with the required information.
This is the typical response for me from Cloudflare - it took 2 more weeks before it was finally taken down. If I had to hazard a guess, your high volume of reports gets you into a very different support bucket than the occasional reporter.
My most recent experience was terrible for two reasons:
1. They didn't take down an obvious banking scam site that was hiding behind their service
2. They forwarded my "report phishing content" submission, including contact information, to the scammer, resulting in a roughly 100x increase in the amount of spam I receive and ensuring that I won't ever use their reporting function again