I really don't think they are, because a native app has far more ability to be invasive toward the user. There's also a stronger financial incentive to do so, since payments are easy.
And that's before we consider the much stronger user control the open web provides. I can run an extension like uMatrix and take back control of my browser. On mobile, I can't even proxy and inspect the network requests that apps are making without resorting to insane hacks.
The more these things evolve, the more I turn against native apps.
Importantly, I think it's much more obvious what you're doing with a web app when you upload data. There's an erroneous belief that when you're using a native app, the data you provide to it never leaves the device. That might be the case, but even when the native app isn't just a shim over a remote service, there's little guarantee it isn't using your data for its own purposes, legally (e.g. Adobe) or not.
This isn't unique to mobile vs. desktop, but in my experience people use those device types with different levels of care. It's possible app stores play into this by giving people a false sense of security about aspects of app usage and updating that the stores don't actually provide.
There is a cost to a centralized app store that I never hear anybody talk about, which is that due to the perception of safety, it becomes a very juicy target for anybody that wants to distribute malware (or even just exploitative apps that e.g. charge $5 a week for a flashlight). If you can get over the wall, then you get access to a very lucrative market.
My personal hypothesis is that this is why app stores are filled with so much trash. The app store provides discoverability that would otherwise never be available to such apps.
And this then leads to what you're talking about, which is that the stores actually feel less safe than the open web.
> The study links evolutionary neuroscience with neurodevelopmental disease, suggesting that the unusually high incidence of autism in humans might be a byproduct of selection shaping our brains.
> It suggests that key neuron types in the human brain are subject to particularly strong evolutionary pressures, especially in their regulatory landscapes.
> If valid, it opens a new lens through which to think about neurodiversity: certain vulnerabilities might be inextricable from the very changes that made human cognition distinctive.
I've never heard of this definition of reverse engineering -- when one has the actual, unobfuscated source code, I'd usually call it reading the code, or maybe summarizing it.
Not trying to be uncharitable -- I found your article informative. Reverse engineering has historically been reserved for cases where there is an adversarial aspect, as with binaries or server APIs. Anyhow, cheers and thank you, sincerely.
Having the source code and understanding how it works are two different things, especially when it's running on state-of-the-art hardware. If I had just read the source, I would not have gained as much knowledge as this article taught me. Where did that extra insight come from? They read the source too, but then they did something more. I wouldn't call it summarization either, since any summary I wrote about the code would pale in comparison.
That is the traditional explanation of why it is called reverse engineering. The term originated in hardware engineering. When it was originally applied to software, it was common to create requirements documents and design documents before coding, even if the actual process did not strictly follow the "waterfall" idea.
Thus it was natural to call the process of producing design documents from undocumented software "reverse engineering". These days coding without any formal design documents is so common that it seems the original meaning of reverse engineering has become obscured.
In what time period and field did you come across this usage? Whenever I saw it used, 'reverse engineering' referred to creating docs from executables or from watching network protocols, not from source.
Back in the 1990s. As an example, back then the Rational Rose design software had a feature to generate UML diagrams from existing source code, and that feature was called "reverse engineering".
You guys are being obtuse. Engineering is turning a spec into a more technical artifact, whether that's source code, machine code, physical hardware, or something else. Reverse engineering is then reversing that process: recovering the semantic artifact from the engineered artifact. Using the term, as the OP does, for recovering the semantic insights from the CUDA kernels is a fine application of the concept.
> In extreme cases, on purely CPU bound benchmarks, we’re seeing a jump from < 1Gbit/s to 4 Gbit/s. Looking at CPU flamegraphs, the majority of CPU time is now spent in I/O system calls and cryptography code.
A more-than-4x jump in throughput, which at a fixed data rate should translate to a proportionate reduction in CPU spent on UDP network activity. That's pretty cool, especially for better power efficiency on portable clients (mobile and notebook).
I found this presentation refreshing. Too often, transitions to "modern" stacks are treated as inherently good without the data to back that up.
4 Gbit/s is on our rather dated benchmark machines. If you run the command below on a modern laptop, you'll likely reach higher throughput. (Consider disabling PMTUD to use a realistic Internet-like MTU; we do the same on our benchmark machines.)
Shared ring buffers for I/O already exist in Linux (io_uring), but I don't think we'll ever see them extend to direct DMA from the NIC, given the security rearchitecture that would require. Though if the NIC is smart enough and the rules are simple, maybe.
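For what it's worth, here's a minimal sketch of what those shared rings look like, using liburing to receive a single UDP datagram (the port number is made up and error handling is stripped):

    /* Minimal io_uring sketch via liburing: the submission and completion
       queues are ring buffers shared between user space and the kernel. */
    #include <liburing.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void) {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = htons(9000);            /* hypothetical port */
        bind(sock, (struct sockaddr *)&addr, sizeof(addr));

        struct io_uring ring;
        io_uring_queue_init(8, &ring, 0);

        char buf[2048];
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_recv(sqe, sock, buf, sizeof(buf), 0);

        io_uring_submit(&ring);                 /* one syscall submits the queued request(s) */

        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);         /* completion comes back on the shared ring */
        printf("received %d bytes\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);

        io_uring_queue_exit(&ring);
        return 0;
    }

The data itself is still copied between kernel and user buffers here; getting the NIC to DMA straight into application memory is where the security story gets hard, as you say.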
There are systems that move NIC control to user space entirely. For example, Snabb has an Intel 10G Ethernet controller driver that appears to use a ring buffer in DMA memory.
RDMA offers that. The NIC can directly access user-space buffers. It does require that the buffers be "registered" first, but applications usually aim to do that once, up front.
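A hedged sketch of that register-once pattern with libibverbs (assumes an RDMA-capable NIC and the verbs library; queue-pair setup and error handling are omitted):

    /* Register (pin) an application buffer once at startup so the NIC can
       DMA to and from it directly for the lifetime of the process. */
    #include <infiniband/verbs.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        size_t len = 1 << 20;
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);

        /* Work requests posted later reference mr->lkey/rkey; the NIC then
           reads and writes this memory without per-message copies. */
        printf("registered %zu bytes, lkey=0x%x\n", len, mr->lkey);

        ibv_dereg_mr(mr);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        free(buf);
        return 0;
    }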
Sure, but what about some kind of generalized cross-context IPC primitive: a zero-copy messaging mechanism for high-performance multiprocessing microkernels?
A wheel is generous. This seems more like inviting the computing equivalent of spilling twenty thousand tons of crude into the sea, which then promptly catches fire.
I agree that most scenarios are not going to be so perilous. However, GOTO FAIL[1] fits your C99 footgun description neatly, with repercussions that come within an order of magnitude of my metaphor's peril.
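For anyone who hasn't seen it, here's a simplified illustration of the bug pattern rather than the verbatim Apple code: an unbraced if plus a duplicated goto means the second goto always runs and the later check is silently skipped:

    #include <stdio.h>

    static int hash_update_ok(void) { return 0; }  /* stand-in: 0 means success */
    static int final_check_ok(void) { return 0; }  /* stand-in for the check that gets skipped */

    static int verify(void) {
        int err = 0;

        if ((err = hash_update_ok()) != 0)
            goto fail;
            goto fail;          /* BUG: not guarded by the if, always executes */

        if ((err = final_check_ok()) != 0)   /* never reached */
            goto fail;

    fail:
        return err;             /* still 0, i.e. "verified", even though the last check never ran */
    }

    int main(void) {
        printf("verify() = %d\n", verify());   /* prints 0 */
        return 0;
    }

Plain C99, no exotic features needed; a compiler warning like -Wmisleading-indentation or -Wunreachable-code is about the only thing standing between you and it.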