Hacker News | pornel's comments

Most likely it does what their other apps do: opens URLs in an in-app "browser" WebView, which is then injected with a ton of trackers that have unlimited access to everything you browse in their app.

iOS apps are allowed to add arbitrary JavaScript to any page on any domain, even HTTPS, as long as it's a WebView and not the standalone Safari app.


This is generally worse UX than just opening Safari. There have been exactly zero times when I was happy that a link opened in an app's WebView instead of in Safari or the appropriate external app.

Why does a seemingly privacy-focused Apple provide this compromisable WebView system for apps? Is there some weird edge case where apps need this, for a non-evil reason?


There is SFSafariViewController, and even Android has the Custom Tabs API for private in-app browsers. It's just very inconvenient for Meta/Facebook.

WebView is very useful for UIs. You're probably using it more than you know in the "native" apps.


I’ve never worked on iOS apps before, but after writing my comment I looked into it. Yes, I absolutely use WebView all the time without knowing it.

Still, it would be cool if I had a per-app setting that forces third-party URLs to open in Safari rather than a WebView, if that is feasible.


They don’t allow third-party browser engines. If they also didn’t allow WebView, they would effectively be banning third-party browsers completely. I can’t imagine that would make their antitrust problems any better.


That makes sense. Thanks.

Although it does seem like they could get more granular in app approval, which I am sure iOS devs would not like, but users would. For example: "If your app's primary use case is navigation of the open web, you may use WebView to handle 3rd party links. However, if that is not the primary purpose of your app, web links must open in Safari."

Either that, or give me a per-app setting whose default the dev can choose: "Open links in Safari."


There’s a permission for Location at least: “In-App Web Browsing” can have that permission disabled. WebViews don’t seem to get similar treatment otherwise, AFAICT. I’d sandbox them aggressively if I could.

I use Adguard which has a Safari integration that appears to apply to Web Views (based on the absence of ads), though I don’t have proof of that.


> I use Adguard which has a Safari integration that appears to apply to Web Views (based on the absence of ads), though I don’t have proof of that.

I have been wondering about this for a couple years now. Do Safari content filters apply to app WebViews? I assumed not.

Can any iOS dev chime in? I don't have a modern Mac or a dev account to test this at the moment.


Well, just off the top of my head, an EPUB is basically HTML, and a reader is simple to implement with a web view. It's nice when the OS has a framework that provides one.


There's a harmless "vulnerability" that some automated scanners keep finding on my website. I've deliberately left it "unfixed", and block everyone who emails me about it.


I asked the 32B model to edit a TypeScript file for a web service, and while "thinking" it decided to write me a word counter in Python instead.


It would also be nice if the update archive weren't 250 MB. The Sparkle framework supports delta updates, which can cut down the traffic considerably.


This is an Electron app.


You can still get delta updates with Sparkle in an Electron app. I am using it, and liking it a lot more than Electron Updater so far: https://www.hydraulic.dev


Which is even better for incremental updates.

If just some JavaScript files change, you don't need to redownload the entire Chromium blob.


Which is their design choice, not an obligation.

Electron really messed up a few things in this world.


I'm shocked that Beat Saber is written in C# & Unity. That's probably the most timing-sensitive game in the world, and they've somehow pulled it off.


GC isn't something to be afraid of, it's a tool like any other tool. It can be used well or poorly. The defaults are just that - defaults. If I was going to write a rhythm game in Unity, I would use some of the options to control when GC happens [0], and play around with the idea of running a GC before and after a song but having it disabled during the actual interactive part (as an example).

[0] https://docs.unity3d.com/6000.0/Documentation/Manual/perform...


There's another highly timing-sensitive game, osu!, which is written in C# too (on top of a custom engine).


Devil May Cry for the PlayStation 5 is written in C#, but not Unity.

Capcom has their own fork of .NET.

"RE:2023 C# 8.0 / .NET Support for Game Code, and the Future"

https://www.youtube.com/watch?v=tDUY90yIC7U


In absolute terms, yes, but relative to CPU speed, memory is ridiculously slow.

Quake struggled with the number of objects even in its day. What you got in the game was already close to the maximum it could handle. Explosions spawning giblets could make it slow down to a crawl, and hit limits of the client<>server protocol.

The hardware got faster, but users' expectations have increased too. Quake 1 updated the world state at 10 ticks per second.


> Quake struggled with the number of objects even in its day.

Because of the memory bandwidth of iterating the entities? No way. Every other part (rendering, culling, network updates, etc.) is far worse.

Let’s restate. In 1998 this got you 1024 entities at 60 FPS. The entire array could now fit in the L2 cache of a modern desktop.

And I already advised a simple change to improve memory layout.

> Quake 1 updated the world state at 10 ticks per second

That’s not a constraint in Quake 3, which has the same architecture, so it’s not relevant.

> users' expectations have increased too

Your game is more complex than Quake 3? In what regard?


> relative to CPU speed, memory is ridiculously slow

Latency from CPU to memory, sure.

Memory frequency has caught up though, so you have more bandwidth than any CPU can deal with.


Indeed, there are people who want to make games, and there are people who think they want to make games but actually want to make game engines (I'm speaking from experience, having both shipped games and kept a junk drawer of unreleased game engines).

Shipping a playable game involves so so many things beyond enjoyable programming bits that it's an entirely different challenge.

I think it's telling that there are more Rust game engines than games written in Rust.


This does not apply just to games, but to most any application designed to be used by human beings, particularly complete strangers.

Typically the “itch is scratched” long before the application is done.


I'm in that camp. After shifting from commercial gamedev I've been itching to build something. I kept thinking "I wanna build a game" but couldn't really figure out what that game is. Then I realised "Actually it's because I want to build an engine", haha.


There are alternative universes where these wouldn't be a problem.

For example, if we didn't settle on executing compiled machine code exactly as-is, and had an instruction-updating pass (less involved than a full VM bytecode compilation), then we could adjust SIMD width for existing binaries instead of waiting decades for a new baseline or multiversioning faff.

Another interesting alternative is SIMT. Instead of having a handful of special-case instructions combined with heavyweight software-switched threads, we could have had every instruction SIMDified. It requires structuring programs differently, but getting max performance out of current CPUs already requires SIMD + multicore + predictable branching, so we're doing it anyway, just in a roundabout way.


> if we didn't settle on executing compiled machine code exactly as-is, and had an instruction-updating pass (less involved than a full VM bytecode compilation)

Apple tried something like this: they collected the LLVM bitcode of apps so that they could recompile and even port to a different architecture. To my knowledge, this was done exactly once (watchOS armv7->AArch64) and deprecated afterwards. Retargeting at this level is inherently difficult (different ABIs, target-specific instructions, intrinsics, etc.). For the same target with a larger feature set, the problems are smaller, but so are the gains -- better SIMD usage would only come from the auto-vectorizer and a better instruction selector that uses different instructions. The expectable gains, however, are low for typical applications and for math-heavy programs, using optimized libraries or simply recompiling is easier.

WebAssembly is a higher-level, more portable bytecode, but performance levels are quite a bit behind natively compiled code.


> Another interesting alternative is SIMT. Instead of having a handful of special-case instructions combined with heavyweight software-switched threads, we could have had every instruction SIMDified. It requires structuring programs differently, but getting max performance out of current CPUs already requires SIMD + multicore + predictable branching, so we're doing it anyway, just in a roundabout way.

Is that not where we're already going with the GPGPU trend? The big catch with GPU programming is that many useful routines are irreducibly very branchy (or at least, to an extent that removing branches slows them down unacceptably), and every divergent branch throws out a huge chunk of the GPU's performance. So you retain a traditional CPU to run all your branchy code, but you run into memory-bandwidth woes between the CPU and GPU.

It's generally the exception instead of the rule when you have a big block of data elements upfront that can all be handled uniformly with no branching. These usually have to do with graphics, physical simulation, etc., which is why the SIMT model was popularized by GPUs.


Fun fact which I'm 50%(?) sure of: a single branch divergence for integer instructions on current Nvidia GPUs won't hurt perf, because there are only 16 int32 lanes anyway.


CPUs are not good at branchy code either. Branch mispredictions cause costly pipeline stalls, so you have to make branches either predictable or use conditional moves. Trivially predictable branches are fast — but so are non-diverging warps on GPUs. Conditional moves and masked SIMD work pretty much exactly like on a GPU.
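
To illustrate (a minimal, made-up sketch; the function names and workload aren't from any real codebase), the branchless select below compiles to a conditional move in scalar code, or to masked lanes when vectorized, which is essentially the same predication a non-diverging GPU warp uses:

    #include <cstddef>
    #include <cstdint>

    // Branchy version: with unpredictable data, the `if` becomes a real
    // branch, and mispredictions stall the pipeline.
    std::int64_t sum_above_branchy(const std::int32_t* v, std::size_t n, std::int32_t t) {
        std::int64_t sum = 0;
        for (std::size_t i = 0; i < n; ++i) {
            if (v[i] > t) sum += v[i];
        }
        return sum;
    }

    // Branchless version: the select is a data dependency, not a branch,
    // so it typically maps to cmov in scalar code, or to a masked add
    // when the loop is auto-vectorized.
    std::int64_t sum_above_select(const std::int32_t* v, std::size_t n, std::int32_t t) {
        std::int64_t sum = 0;
        for (std::size_t i = 0; i < n; ++i) {
            sum += (v[i] > t) ? static_cast<std::int64_t>(v[i]) : 0;
        }
        return sum;
    }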

Even if you have a branchy divide-and-conquer problem ideal for diverging threads, you'll get hit by a relatively high overhead of distributing work across threads, false sharing, and stalls from cache misses.

My hot take is that GPUs will get more features to work better on traditionally-CPU-problems (e.g. AMD Shader Call proposal that helps processing unbalanced tree-structured data), and CPUs will be downgraded to being just a coprocessor for bootstrapping the GPU drivers.


> There are alternative universes where these wouldn't be a problem

Do people that say these things have literally any experience of merit?

> For example, if we didn't settle on executing compiled machine code exactly as-is, and had an instruction-updating pass

You do understand that at the end of the day, hardware is hard (fixed) and software is soft (malleable), right? There will always be friction at some boundary - it doesn't matter where you hide the rigidity of a literal rock, you eventually reach a point where you cannot reconfigure something that you would like to. And the parts of that rock that are useful are extremely expensive (so no one is adding instruction-updating-pass silicon just because it would be convenient). That's just physics - the rock is very small but fully baked.

> we could have had every instruction SIMDified

Tell me you don't program GPUs without telling me. Not only is SIMT a literal lie today (cf. warp-level primitives), there is absolutely no reason to SIMDify all instructions (and you'd better be a wise user of your scalar registers and scalar instructions if you want fast GPU code).

I wish people would just realize there's no grand paradigm shift that's coming that will save them from the difficult work of actually learning how the device works in order to be able to use it efficiently.


The point of updating the instructions isn't to have optimal behavior in all cases, or to reconfigure programs for wildly different hardware, but to be able to easily target contemporary hardware, without having to wait for the oldest hardware to die out first to be able to target a less outdated baseline without conditional dispatch.

Users are much more forgiving about software that runs a bit slower than software that doesn't run at all. ~95% of x86_64 CPUs have AVX2 support, but compiling binaries to unconditionally rely on it makes the remaining users complain. If it was merely slower on potato hardware, it'd be an easier tradeoff to make.
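
For concreteness, the per-function conditional dispatch / multiversioning workaround available today looks something like this (a sketch; GCC/Clang-specific on x86_64, and the function is just a stand-in):

    #include <cstddef>

    // target_clones makes the compiler emit an AVX2 copy and a baseline
    // copy of this one function, plus a resolver that picks between them
    // at load time based on CPUID. It works, but it has to be applied
    // function by function, and it only helps code that was compiled
    // this way in the first place.
    __attribute__((target_clones("avx2", "default")))
    void axpy(float* y, const float* x, float a, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            y[i] += a * x[i];
    }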

This is the norm on GPUs thanks to shader recompilation (shaders are far from optimal for all hardware, but at least get to use the instruction set of the HW they're running on, instead of being limited to the lowest common denominator). On CPUs it's happening in limited cases: Zen 4 added AVX-512 by executing each 512-bit operation as two 256-bit operations, and plenty of less critical instructions are emulated in microcode, but that's done by the hardware, because our software isn't set up for that.

Compilers already need to make assumptions about pipeline widths and instruction latencies, so the code is tuned for specific CPU vendors/generations anyway, and that doesn't get updated. Less explicitly, optimized code also makes assumptions about cache sizes and compute vs memory trade-offs. Code may need L1 cache of certain size to work best, but it still runs on CPUs with a too-small L1 cache, just slower. Imagine how annoying it would be if your code couldn't take advantage of a larger L1 cache without crashing on older CPUs. That's where CPUs are with SIMD.


I have no idea what you're saying - I'm well aware that compilers do lots of things, but this sentence in your original comment

> compiled machine code exactly as-is, and had an instruction-updating pass

implies there should be silicon that implements the instruction updating - what else would be "executing" compiled machine code other than the machine itself...


I was talking about a software pass. Currently, the machine code stored in executables (such as ELF or PE) is only slightly patched by the dynamic linker, and then expected to be directly executable by the CPU. The code in the file has to be already compatible with the target CPU, otherwise you hit illegal instructions. This is a simplistic approach, dating back to when running executables was just a matter of loading them into RAM and jumping to their start (old a.out or DOS COM).

What I'm suggesting is adding a translation/fixup step after loading a binary, before the code is executed, to make it more tolerant to hardware changes. It doesn’t have to be full abstract portable bytecode compilation, and not even as involved as PTX to SASS, but more like a peephole optimizer for the same OS on the same general CPU architecture. For example, on a pre-AVX2 x86_64 CPU, the OS could scan for AVX2 instructions and patch them to do equivalent work using SSE or scalar instructions. There are implementation and compatibility issues that make it tricky, but fundamentally it should be possible. Wilder things like x86_64 to aarch64 translation have been done, so let's do it for x86_64-v4 to x86_64-v1 too.


That's certainly more reasonable, so I'm sorry for being so flippant. But even for this idea, I wager the juice is not worth the squeeze outside of stuff like Rosetta, as you alluded to, where the value was extremely high (retaining x86 customers).


Hm. Doesn't the existence of Vulkan subgroups and CUDA shuffle/ballot poke huge holes in their 'SIMT' model? From where I sit, that looks a lot like SIMD. The only difference seems to be that SIMT professes to hide (or use HW support for) divergence. Apart from that, reductions and shuffles are basically SIMD.


I wonder if we'll get some malicious workarounds for this, like re-releasing the same phone under different SKUs to pretend they were different models on sale for a short time.


Maybe, but the regulation tries to prevent this by separating "models" from "batches" from "individual items", and defaults to "model" when determining compatibility. It's worth noting that each new model requires a separate filing for both EcoDesign and other certifications like CE, which could help reduce workarounds like model-number inflation.


We commonly use hardware like LCDs and printers that render a sharp transition between pixels without the Gibbs phenomenon. CRT scanlines were close to an actual 1D signal (but not directly controlled by the pixels, which the video cards still tried to make square-ish), but AFAIK we've never had a display that reproduces the continuous 2D signal we assume in image processing.

In signal processing you have a finite number of samples of an infinitely precise continuous signal, but in image processing you have a discrete representation mapped to a discrete output. It's continuous only when you choose to model it that way. Discrete → continuous → discrete conversion is a useful tool in some cases, but it's not the whole story.

There are images designed for very specific hardware, like sprites for CRT monitors, or font glyphs rendered for LCD subpixels. More generally, nearly all bitmap graphics assumes that pixel alignment is meaningful (and that has been true even in the CRT era before the pixel grid could be aligned with the display's subpixels). Boxes and line widths, especially in GUIs, tend to be designed for integer multiples of pixels. Fonts have/had hinting for aligning to the pixel grid.

Lack of grid alignment, an equivalent of a phase shift that wouldn't matter in pure signal processing, is visually quite noticeable at resolutions where the hardware pixels are little squares to the naked eye.
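
As a tiny, made-up illustration: resampling a 1-pixel-wide line with a half-pixel phase shift preserves the signal in the abstract sense, but on a square-pixel display the sharp line becomes a two-pixel 50% gray smear:

    #include <array>
    #include <cstddef>
    #include <cstdio>

    int main() {
        // A 1-pixel-wide white line on black.
        std::array<float, 8> row{0, 0, 0, 1, 0, 0, 0, 0};
        // Resample the row at a half-pixel offset with linear interpolation.
        for (std::size_t i = 0; i + 1 < row.size(); ++i) {
            float shifted = 0.5f * row[i] + 0.5f * row[i + 1];
            std::printf("%.2f ", shifted);
        }
        std::printf("\n"); // prints: 0.00 0.00 0.50 0.50 0.00 0.00 0.00
    }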


I think you are saying there are other kinds of displays which are not typical monitors and those displays show different kinds of images - and I don’t disagree.


I'm saying "digital images" are captured by and created for hardware that has the "little squares". This defines what their pixels really are. Pixels in these digital images actually represent discrete units, and not infinitesimal samples of waveforms.

Since the pixels never were a waveform, never were sampled from such a signal (even light in camera sensors isn't sampled along these axes), and don't get displayed as a 2D waveform, the pixels-as-points model from the article at the top of this thread is just an arbitrary abstract model, not an accurate representation of what pixels are.

