Hacker News | metadat's comments

Are Google and Apple's stores safer than the open web? It really doesn't seem like it, in a lot of ways.

I strongly suspect they aren't, because the ability to be invasive toward the user with a native app is much higher. There is also a stronger financial incentive to do so, since payments are easy.

And that's before we consider the much stronger user control offered by the open web. I can run an extension like uMatrix and take back control of my browser. On mobile, I can't even proxy and inspect the network requests that apps are making without resorting to insane hackery.

The more these things evolve, the more I turn against native apps.


Importantly, I think it's much more obvious what you're doing with a web app when you upload data. There's an erroneous belief that when you're using a native app, the data you provide to it never leaves the device. That might be the case, but even when the native app isn't just a shim for a service, there's little guarantee they aren't using your data for their own purposes, legally (e.g. Adobe) or not.

This isn't unique to mobile vs desktop, but from my experience people use those different device types with different levels of care. It's possible app stores play into this by giving people an incorrect sense of security about aspects of application usage and updating that they don't actually provide.


There is a cost to a centralized app store that I never hear anybody talk about, which is that due to the perception of safety, it becomes a very juicy target for anybody that wants to distribute malware (or even just exploitative apps that e.g. charge $5 a week for a flashlight). If you can get over the wall, then you get access to a very lucrative market.

My personal hypothesis is this is the reason that app stores are filled with so much trash. The app store provides a mechanism of discoverability that would otherwise never be available to such apps.

And this then leads to what you're talking about, which is the stores actually feel less safe than the open web.


It is a long paper, so not immediately obvious.

> The study links evolutionary neuroscience with neurodevelopmental disease, suggesting that the unusually high incidence of autism in humans might be a byproduct of selection shaping our brains.

> It suggests that key neuron types in the human brain are subject to particularly strong evolutionary pressures, especially in their regulatory landscapes.

> If valid, it opens a new lens through which to think about neurodiversity: certain vulnerabilities might be inextricable from the very changes that made human cognition distinctive


Thanks! Macro-expanded:

PostgreSQL 18 Released https://news.ycombinator.com/item?id=45372283 - 3 days ago, 21 comments


I've never heard of this definition of reverse engineering -- when one has the actual, unobfuscated source code, I'd usually call it reading the code, or something like summarization.

Not trying to be uncharitable; I found your article informative. Reverse engineering has historically been reserved for cases where there is an adversarial aspect, as with binaries or server APIs. Anyhow, cheers and thank you, sincerely.


Having the source code and understanding how it works are two different things, especially when it runs on state-of-the-art hardware. If I had just read the source, I would not have gained as much knowledge as this article taught me. Where did this extra info come from? They read the source too, but then they did something more. I wouldn't call it summarization either, as any summary I wrote about the code would pale in comparison.

I think "explained" is a reasonable term for this. If I remember correctly, there were books of the form "The Linux Source Code Explained".

Certainly I can't get on board with reverse engineered.


That is the traditional explanation of why it is called reverse engineering. The term originated in hardware engineering. When it was originally applied to software, it was common to create requirements documents and design documents before coding, even if the actual process did not strictly follow the "waterfall" idea.

Thus it was natural to call the process of producing design documents from undocumented software "reverse engineering". These days coding without any formal design documents is so common that it seems the original meaning of reverse engineering has become obscured.


In what time period and area did you come across this usage? Whenever I saw it used, 'reverse engineering' referred to creating docs from executables or from watching network protocols, not from source.

Back in the 1990s. As an example, back then the Rational Rose design software had a feature to generate UML diagrams from existing source code, and it was called "reverse engineering".

https://en.wikipedia.org/wiki/IBM_Rational_Rose


That time when I reverse engineered J.R.R. Tolkien's Lord of the Rings from symbols engraved on dead trees. Took me three summers…

It's more properly just software archaeology: recovering design intent from artifacts. https://en.m.wikipedia.org/wiki/Software_archaeology

You've never had to reverse engineer the thinking and ideas that went behind code written by someone else/you a year ago?

No, because so far you've "engineered" nothing. You just studied it, tried to understand it, and explained or taught it.

If you had reverse engineered it, you would have tried to "recreate something" that does not exist to do the same.

So, if you have a binary, you recreate the source code that in theory could allow you to recreate the binary.

If you have the source code, I guess it would only count when you are missing pieces of info that would allow you to run the code the way others do...


Disagree that reverse engineering necessarily requires something to be recreated.

For example, simple hardware reversing can be just learning what something does, and how and why it works; you don't need to "recreate" anything other than ideas.


You guys are being obtuse. Engineering is turning a spec into a more technical artifact, whether that's source code, machine code, physical hardware, or something else. Reverse engineering is then reversing that process: recovering the semantic artifact from the engineering artifact. That the OP is using the term in the sense of recovering semantic insights from the CUDA kernels is a fine application of the concept.

Thanks! Macro-expanded:

Raspberry Pi 500+ https://news.ycombinator.com/item?id=45370304 - 1 day ago, 285 comments


The key takeaway is hidden in the middle:

> In extreme cases, on purely CPU bound benchmarks, we’re seeing a jump from < 1Gbit/s to 4 Gbit/s. Looking at CPU flamegraphs, the majority of CPU time is now spent in I/O system calls and cryptography code.

That's a more than 4x jump in throughput, which should translate into a proportional reduction in CPU utilization for UDP network activity. That's pretty cool, especially for better power efficiency on portable clients (mobile and notebook).

I found this presentation refreshing. Too often, claims about transitions to "modern" stacks are treated as inherently good and don't come with the data to back them up.


Any guesses on whether they have other cases where they got more than 4 Gbps but weren't CPU bound, or was this the fastest they got?

_Author here_.

4 Gbit/s is on our rather dated benchmark machines. If you run the below command on a modern laptop, you likely reach higher throughput. (Consider disabling PMTUD to use a realistic Internet-like MTU. We do the same on our benchmark machines.)

https://github.com/mozilla/neqo

cargo bench --features bench --bench main -- "Download"


I wonder if we'll ever see hardware-accelerated cross-context message passing for user and system programs.

Shared ring buffers for IO already exist in Linux, but I don't think we'll ever see them extend to DMA for the NIC, given the security rearchitecture that would require. Though if the NIC is smart enough and the rules simple enough, maybe.

There are systems that move NIC control to user space entirely. For example, Snabb has an Intel 10G Ethernet controller driver that appears to use a ring buffer in DMA memory.

https://github.com/snabbco/snabb/blob/master/src/apps/intel/...


"(You could think of it as a [busybox](https://en.wikipedia.org/wiki/BusyBox#Single_binary) for networking.)"

They suggest thinking of busybox.

But if you use busybox, their Makefile will fail.

Using toybox instead will work.


RDMA offers that. The NIC can directly access user space buffers. It does require that the buffers are “registered” first but applications usually aim to do that once up front.

There is AMD's onload https://github.com/Xilinx-CNS/onload. It works with Solarflare and Xilinx NICs, but also supports generic NICs via AF_XDP.

The price of doing that is losing OS control over emitted packets. For servers, fine. For browsers, not so much.

Sure, but what about some kind of generalized cross-context IPC primitive toward a zero-copy messaging mechanism for high-performance multiprocessing microkernels?

Can it be "Sny" or "Slony" (Simpsons nod)? Keep a fragment in, please, for discoverability's sake!

Just list the devices it has been tested with in the readme.

I know a genuine Magnetbox, Panaphonic, and Sorny when I see one.

I guess we will find out!

Is this different from the KiTTY variant of PuTTY?

https://www.9bis.net/kitty/index.html

I'm just confused now, haha. What's in a name, anyways...



Yes, totally useful compared to the default base git commands.

And also - melding the "changed twice" (or thrice...) mutations into a single commit is a brilliant isolation of a subtle common pattern.


git-absorb does exist [1]. It seems to be inspired by a mercurial subcommand of the same name. It's also available in most distro repos.

[1] https://github.com/tummychow/git-absorb
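For anyone without git-absorb installed, the "melding" can be sketched with plain git's fixup/autosquash machinery, which git-absorb essentially automates by guessing the target commit for each hunk. A minimal sketch (the repo name, file, and commit messages are made up for illustration):

```shell
# Fold a follow-up change into an earlier commit with plain git.
git init demo && cd demo
git config user.email "you@example.com" && git config user.name "You"

echo one > file.txt
git add file.txt && git commit -m "add file"
echo two >> file.txt
git commit -am "tweak"                 # the commit we want to amend later

echo three >> file.txt
git add file.txt
git commit --fixup=HEAD                # creates a "fixup! tweak" commit

# Autosquash melds the fixup into "tweak"; true accepts the todo list as-is.
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash HEAD~2
git log --oneline                      # back to two commits
```

git-absorb's value-add over this is precisely the step we did by hand: picking `HEAD` as the fixup target. It inspects which commits last touched the staged hunks and generates the fixup commits for you.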


IMHO it should be a compiler error. This is just so loose... a wheel fell off.

A wheel is generous. This seems more like inviting the computing equivalent of spilling twenty thousand tons of crude into the sea, which then promptly catches fire.

Eh, it's about the same level of footgun you might see in C99. It's not great, but you're being hyperbolic if you ask me.

I agree that most scenarios are not going to be so perilous. However, GOTO FAIL [1] would fit your C99 footgun description neatly, with repercussions approaching the peril of my metaphor to within an order of magnitude.

https://en.wikipedia.org/wiki/Unreachable_code#goto_fail_bug

