
Did a JS polyfill ever go anywhere? There is a comment on https://groups.google.com/a/chromium.org/g/blink-dev/c/zIg2K... which suggests that it might be possible, but a lot has changed. I suspect any effort died once XSLT remained available after the first attempt to kill it.

Sites sometimes want to provide some special formatting on top of the RSS without modifying it. For example, you might point people to available RSS readers (which may not be installed) or provide other directions to end users. RSS feeds are used in places other than reading apps. I've seen people suggest that this transformation could be done server-side, but that would modify the RSS feed, which still needs to be consumed as-is.
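For anyone who hasn't seen it, the whole mechanism is one processing instruction at the top of the feed. A minimal sketch (the stylesheet filename here is made up):

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Browsers with XSLT support render the feed through the
         stylesheet; feed readers ignore the instruction and consume
         the RSS unmodified. "pretty-feed.xsl" is a hypothetical name. -->
    <?xml-stylesheet type="text/xsl" href="pretty-feed.xsl"?>
    <rss version="2.0">
      <channel>
        <title>Example feed</title>
        <link>https://example.com/</link>
      </channel>
    </rss>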

This is probably one of the few things I think works better in an office environment. There was older equipment hanging around, with space to set it up in a corner, so people could sit down and just go. When mobile came along, there would be a sustainable lending program for devices.

With more people being remote, this either doesn't happen or is much more limited. Support teams have to repro issues or walk through scenarios across web, iOS, and Android. Sometimes they only have their own device. Better places will have some kind of program to get them refurb devices. Most of the time, though, people have to move the customer to someone who has an iPhone or whatever.


You can nerf network performance in the browser devtools or underprovision a VM relatively easily on these machines. People sometimes choose not to, and others don't know the option exists. Most of the time, it's just the case that they are dealing with too many vague demands, which makes it difficult to prioritize seemingly less important things.
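If clicking through devtools every time is the blocker, the same throttling can be scripted. A minimal sketch, assuming Puppeteer and using the Chrome DevTools Protocol (the numbers are illustrative, not any standard preset):

    // slow-machine.ts - sketch of automated network/CPU throttling
    import puppeteer from 'puppeteer';

    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    const cdp = await page.createCDPSession();

    await cdp.send('Network.enable');
    // Roughly "bad DSL": 400 ms added latency, ~400 kbit/s each way.
    await cdp.send('Network.emulateNetworkConditions', {
      offline: false,
      latency: 400,                   // ms of added round-trip latency
      downloadThroughput: 50 * 1024,  // bytes/sec
      uploadThroughput: 50 * 1024,    // bytes/sec
    });
    // Simulate a weak machine: run the CPU at one quarter speed.
    await cdp.send('Emulation.setCPUThrottlingRate', { rate: 4 });

    await page.goto('https://example.com');
    await browser.close();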

A number of times I've had to have a framing discussion with a dev about some customer complaint, one that eventually gets to me asking, "What kind of computer do your (grand)parents use? How might X perform there?" Other times, I've heard devs comment negatively after the holidays when they've tried their product on a family computer.


> Other times, I've heard devs comment negatively after the holidays when they've tried their product on a family computer.

I worked for a popular company and went to visit family during the winter holidays. I couldn't believe how many commercials there were for said company's hot consumer product (I haven't had cable or over-the-air television since well before streaming was a thing, so this was a new experience for me after five-plus years).

I concluded that if I had cable and didn't work for the company, I'd hate them due to the bajillion loud ads. My family didn't seem to notice. They tuned out all the commercials, as did a friend when I was at his place around a similar time.

All it takes is a change in perspective to see something in an entirely new light.


I’ve never had TV, and have used ad blockers as long as they’ve been a thing. (Until 1⅓ years ago I even lived in a rural area where the closest billboard of any description was 40km away, and the second-closest 100km away.) On very odd occasions, I would get exposed to a television, and what I find uncomfortable at the best of times (notably: how do they cut so frequently!?) becomes a wretched experience as soon as it gets to ads, which it does with mindboggling frequency. I’m confident that if I tried actually watching that frenetic, oversaturated, noisy mess on someone’s gigantic, far-too-bright TV, I would be sick to the stomach and head within a few minutes.

More to the point: colour and font rendering are typically "perception" questions and very hard to measure in a deployed system without introducing a significant out-of-band element.

Network performance can be trivially measured for your users, and most latency/performance/bandwidth issues can be identified clearly.
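As a sketch of how cheap that measurement is: the Navigation Timing API is in every major browser, and a few lines get you DNS/TCP/TTFB per real user ("/rum" is a hypothetical collection endpoint you'd host yourself):

    // Run after the load event so loadEventEnd is populated.
    const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
    if (nav) {
      navigator.sendBeacon('/rum', JSON.stringify({
        dns: nav.domainLookupEnd - nav.domainLookupStart, // ms
        tcp: nav.connectEnd - nav.connectStart,           // ms
        ttfb: nav.responseStart - nav.requestStart,       // ms
        total: nav.loadEventEnd - nav.startTime,          // ms
      }));
    }

Colour and font rendering have no equivalent: nothing tells you what the user actually perceived.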


Chrome devtools allow you to simulate low network and CPU performance, but I'm not aware of any setting that gives you pixelated text and washed-out colors. Maybe that would make a useful plugin, if you can accurately reproduce what Microsoft ClearType does at 96dpi!

Simulating low-DPI displays is built into Safari's dev tools, but it's not of much use, considering the different font rendering between the platforms.

I'm not a FOSS advocate, but I think that's a bit strong. I think it's more a case that they recognized the need for a good user experience, but that never hit the threshold that would move the needle for change in the most popular FOSS projects. Darktable is probably one of the exceptions here.


I really like Darktable, and it's my go-to photo editor, but the user interface really isn't intuitive at first glance compared to something like Lightroom. The design choice that editing modules should be ordered by their place in the pixel pipeline is logical and sometimes useful, but it ends up with a lot of the controls being in rather weird places. The customisable quick-controls palette would help, if it weren't for the fact that simple things like cropping can't be added to it (at least as of the last time I investigated; perhaps that's changed now?).


I could have been clearer. I wouldn't say it's the paragon of photo editing, but it's further along in terms of usability. I've seen some normal people who don't want to pay the Adobe tax move to it.

An investigation of FOSS development would highlight a bunch of problems that exist to a lesser extent in other software development. When money is on the table and there is no motivation to keep supporting the behaviors that particular contributors favor, feedback shifts things. When you're building stuff for "yourself", that feedback doesn't land the same way, even if the project owner has aspirations for better UX.


Darktable, to me, and multiple YouTubers who have looked at it...

... falls flat on its face at first impression by looking like an unresponsive window, due to the disorientingly light gray color design choices. I also just tried it, and of course it's not notarized, meaning that it's almost impossible for anyone to install on macOS unless they know of the secret button in System Settings. Nope, they aren't there yet.


> I also just tried it and of course it's not notarized, meaning that it's almost impossible for anyone to install on macOS, unless they know of the secret button in System Settings.

I don't understand why you're blaming the Darktable team for that when it's Apple that makes it nearly impossible for anyone to install a program written by someone who doesn't pay them $100/year.
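(For what it's worth, the workaround doesn't even need the Settings button: clearing the quarantine attribute from a terminal does the same thing. The path below assumes the default install location:

    xattr -rd com.apple.quarantine /Applications/darktable.app

That's still an unreasonable thing to expect of a normal user, which is rather the point.)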


What's the use case for this? It seems to be for situations where you might have a SaaS product, but there is some data required from a customer system. You'd expose the customer data using this relay and integrate into the SaaS. Is that the gist of it? Integration would still likely involve you giving the customer some software to expose a limited API and handle auth, logging, etc.


They are an alternative to the Tailscale-operated DERP servers, which are cloud relays.

Even with the much-touted NAT-punching capabilities of Tailscale, there are numerous instances where it cannot establish a true p2p connection. The last fallback is the quite slow DERP relay, and from experience it gets used very often.

If you have a peer in your Tailscale network that has a good connection, and that maybe you can even expose to the internet with a port forward on your router, you now have this relay setting you can enable to avoid using the congested/shared DERP servers. So there is not really a new use case for this. It's the same one, just faster.


What I think wasn't entirely clear in the post is how it actually works and why that's better.

From what I can tell, the situation is this:

1. You have a host behind NAT

2. That NAT will not allow you to open ports via e.g. UPnP (because it's a corporate firewall or something), so other Tailscale nodes cannot connect to it

3. You have another host with the same configuration, so neither host can open ports for the other to connect to

The solution is to run a peer relay, which seems to be another (or an existing) Tailscale node that both of these hosts can reach via UDP. In this circumstance it could be a third node you're already running, or a new one you configure on a separate network.

When the two NAT'ed hosts can't connect to each other, they can both opt to connect instead to this peer node, allowing them to communicate with each other through it.

Previously this was done via Tailscale's hosted DERP nodes; these nodes would help Tailscale nodes find each other, but could also proxy traffic in this hard-NAT circumstance. Now you can use your own node to do so, which means you can position it somewhere that is more efficient for these two nodes to connect to, and where you have control over the network, the bandwidth, the traffic, etc.
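If I'm reading the announcement right, designating a node as a peer relay is a single setting on that node, roughly like this (the flag name is my reading of the announcement, so treat it as an assumption and check `tailscale set --help`; you also still have to permit relay use in your ACLs):

    tailscale set --relay-server-port=40000

The port is arbitrary; it just needs to be a UDP port the other nodes can reach.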


Is there a way to determine if a particular connection is falling back to DERP today?

I have a pretty basic setup, with Tailscale set up on an Apple TV behind a bunch of UniFi devices, and occasionally tunnelled traffic is incredibly slow.

Wondering if it’s worth setting this up on my Plex server, which is behind fewer devices and has a lot of unused network and CPU.


tailscale ping <node IP>

It will tell you how each ping has been answered until a direct connection is established.
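`tailscale status` is also worth knowing: it lists every peer and, if I remember the output right, marks each active connection as direct or relayed, so you can spot DERP fallback across the whole tailnet at a glance rather than pinging nodes one by one (the exact output format may vary by version).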


Tailscale is a few things. It might be fair to say that it is mostly a software platform with a web frontend that allows orgs (and individual users alike) to easily create secure VPNs, so their various systems can have a secure, unfiltered virtual private network on which to communicate with each other even if they're individually scattered across the four corners of the Internet.

The usual (traditional) way to do VPN stuff is/was hub-and-spoke: Each system connected to a central hub, and through that hub each system had access to the other systems.

But the way that Tailscale operates is different than that: Ideally, each connected system forms a direct UDP/IP connection with every other system on the VPN. There is no hub. In this way: If node A has data to send to node F, then it can send it directly there without traversing through a central hub.

And that's pretty cool -- this peer-to-peer arrangement is gloriously efficient compared to hub-and-spoke. (It's efficient enough that a person can get quite a lot done with Tailscale for free, with no payment expected ever.)

But we don't live in an ideal world. We instead often live in a world of NAT and firewalls -- sometimes even implemented by the ISPs themselves -- that can make it impossible for two nodes to directly send UDP packets to each other. This results in unreachable nodes, which is not useful.

So Tailscale's workaround to that Internet problem is to provide Designated Encrypted Relays for Packets (DERP). DERP usually works, and end-to-end encryption is maintained.

DERP is also not at all new. It brings back some aspects of hub-and-spoke, but only for nodes that can't communicate directly; DERP behaves in a way akin to a hub, to help these crippled nodes by relaying traffic between them and the rest of the VPN's nodes.

But DERP is a Tailscale-hosted operation. And it can be pretty slow for some applications. And there was no way, previously, for an individual user to improve the performance of DERP: It simply was whatever it was -- with a collection of DERP servers chewing through bandwidth to provide connectivity for a world of badly-connected VPN nodes.

But today's announcement brings forth Tailscale Peer Relay.

> What's the use case for this?

The primary use case for this is simple: It is an alternative to DERP. A user can now provide their own relay service for their network's badly-connected peers to use. So now, rather than being limited to whatever bandwidth DERP has available, relaying can offer as much bandwidth as a user can afford to pay for and host themselves.

And if a user plans it right, then they can put their Peer Relay somewhere on the network where it can help minimize inter-node latency compared to DERP.

(It's not for everyone. Tailscale isn't for everyone, either -- not everyone needs a VPN at all. I'd never expect a random public customer to use it knowingly and directly.)


Yeah, Tailscale is really cool. The only thing I wish is that they didn't tie auth to either a big tech monopoly (Google, GitHub, etc.) or running your own IdP service. I would love to use Tailscale for some self-hosted stuff I have, but hesitate to start exposing something like an identity management tool because that's a high-value target. And of course, I don't really want to let Google et al. be in control of my VPN setup either.


That's a valid concern.

I've also used ZeroTier with good success.

They're a competitor that offers a VPN with a similar idealized P2P topology. Unlike Tailscale, ZT is not based on WireGuard (ZT predates WireGuard), but they do offer the option to use their own local auth, without reliance on (and potential issues with) yet another party.

ZT also allows a person to create and use their own relay (called a "moon"), if that's something useful: https://rayriffy.com/garden/zerotier-moon

(For my own little purposes I don't really have a preference between ZeroTier and Tailscale.)


Thanks for the tip! I'll check that out and see if it would work for my VPN needs, but it certainly sounds promising.


They support Passkeys. This is exactly how I continue using them after moving away from Google Workspace.


Oh wow, I had totally missed this[0]! Is it possible to migrate an existing SSO account (with associated tailnet) to a passkey one?

[0]: https://tailscale.com/blog/passkeys


The problem is that the data has to go somewhere. If you don't have the compute power locally, you have to send it to a server you control. At a point, this starts to break down because your attention to detail isn't sufficient to protect other operators. I think there are some happier mediums, but I wouldn't be so strident as to say there is no risk even if this is stored locally.


>> feeling of optimism and hope for the future.

I thought I was strange for feeling this when I brought my US-raised kids back to Northern Ireland this spring. Some of the turbines would have been visible from my childhood home had they been built earlier. It made me think that maybe these people can get something right for the future.


For some more hope [1][2].

Times are tumultuous but potential exists all around us.

1. https://www.youtube.com/watch?v=g80av4zlDco

2. https://www.youtube.com/watch?v=jUVoWxvvJ5Y


There are a LOT of wind turbines in the US.


If they have to touch and go, how long would it take to get the plane around for another approach? In fact, you might not even get as far as that touch-and-go and have to go around instead. You need some margin for all of these eventualities. The likelihood of these happening is low, but they have to be accounted for.


Sure, but the flight was a lot longer than planned. How much extra do we need? They declared an emergency, and thus put themselves at the front of the line. They had 6 more minutes to do that touch-and-go or go-around if it happened, and since they were already in a low-fuel emergency they got priority, so there was enough time to do that if they needed it. (Edit: as others have noted, that's 6 minutes with high error bars, so they could have had only 30 seconds left, which is not enough.)

They landed safely; that is what is important. There is great cost to carrying extra fuel on board - you need enough, but it doesn't look to me like more was needed. Unless an investigation determines that this emergency would happen often on that route - and even then, it seems like they should have been told to land in France or someplace long before they got to their intended destination only to discover landing was impossible.


> They had 6 more minutes to do that touch and go around if that happened

6 minutes is way out of the comfort zone. They might not have made it in that case.


Correct - the article says they landed with 220 kg, which is around 6 minutes of average fuel burn over an entire flight (a bit less at cruise, a hell of a lot more at takeoff/climb).
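(Sanity-checking that: 220 kg lasting 6 minutes implies an average burn of 220 / (6/60) = 2,200 kg/h, which is in the right ballpark for a 737 averaged over a whole flight.)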

So I don't think 220 kg is enough to do a go-around in a 737 (well, a go-around would've been initiated with a bit more than 220 kg in the tank - they burned some taxiing to the gate - but you get my point). I've read around 2,300 kg for takeoff and climb on a normal flight in a 737-8. A go-around is going to use close to that: it's a full-power takeoff, but with a much shorter climb phase up to whatever procedure is set for the airport and then whatever ATC tells you.

I've only flown 172s, but even with those little things we were told: your reserve is never to be used.

These people came very, very close to a disaster. Fortunately they had as much luck left as they did fuel.


[flagged]


That’s about as useful as opening a fortune cookie and reading it off as an answer.

Straight from the horse’s mouth: https://web.archive.org/web/20230630013840/http://www.boeing...

In the first table they list 2307-2374 kg of fuel for takeoff and climb.


You’re talking to the wrong horse though.

Isn’t a 737-8 the MAX 8 variant? It uses the newer dual CFM LEAP-1B engines. How does it compare? I can’t really find the data. The spec you’re referring to is for the older 737-800.

Another fortune cookie:

https://www.aircraft-commerce.com/wp-content/uploads/aircraf...

It suggests an overall savings of ~14% over the 737-800 but doesn’t look at specific takeoff/climb comparison.

I wasn’t posting the LLM output as a source of truth. I was just using it to question the uncited value. And I still really don’t know the answer. If you’ve got another data source I’d love to get it.


Why do people keep insisting on pasting LLM output to HN when every time it happens, it gets downvoted to oblivion? The community clearly doesn't want it. If we wanted to know a computer program's opinion about something, we could ask it ourselves.


I was using it to question that exact stated fuel consumption number without a citation. For hard data (like fuel consumption) getting a value from an LLM isn’t absurd.


If not absurd, it's very poor form. You should never use an LLM as input for a discussion; nobody wants to hear that. Use it to search for authoritative sources.


It’s fine if you post an actual citation that you might have found through the LLM. Just posting AI slop is worse than useless, though, and also unpleasantly dystopian.


ok, how do we verify that?


Maybe he should ask Claude next.


That’s the point? I wasn’t suggesting it was correct, just that the value is wildly different from their own uncited number. The next stage was to get a citation from an actual datasheet. Their reasoning was nothing beyond “I’ve read”.


I agree, well out of comfort zones. However, to my reading, multiple different things went wrong to get to this point.


That could be. We just don't know right now, but your intuition may well be correct: even if there is a single root cause, there could very well be multiple contributory causes.


They failed to land at two airports before the third. I can't say if they made the right decisions but that already is two failures.


Go arounds are not failures.


They are expected situations, but still a failure of the original plan.


They are not a failure of the original plan; they are a mandatory component of the original plan that, if everything is nominal, never gets executed. Every pilot on approach is ready for one or even more go-arounds, and they happen quite frequently for a variety of reasons.

They happen a few hundred times per day across ~100k daily flights.


How much extra do you need? Enough that a pilot/crew doing their job properly will never run out of fuel and crash.

So yes they will do an "investigation". It's not a criminal investigation. It's to understand the circumstances, the choices, the procedures, and the execution that ended with a plane dangerously close to running out of fuel.

This will determine if there were mistakes made, or the reserve formula needs to be adjusted, or both.

Don't tell me about cost, just stop. Let MAGA-Air accept some plane deaths to have cheap fares.


With 6 minutes left, everyone could have died if anything went wrong with the final landing; even a gust of wind could have ended everybody's lives.


Could have, but pilots practice no-fuel landings all the time (in simulators). If they can get to ground that is "level enough", nobody dies. It is not something you ever want to see in the real world (and in the real world people often do die when it happens), but it isn't automatic that people die.


I don't think that's all that true for airliners. Pilots definitely practice for engine-out scenarios during all levels of training up to the airlines, but the ability of a plane the size of a 737 to safely land on anything but a runway is...limited. And if you're low, slow, and trying to go around, that's not a lot of time to glide to ground that is "level enough".


I didn't mean to imply landings with no runway. Landing on grass is questionable. They would practice water landings, though.


Those landings are practiced from a reasonable altitude.


Surely the issue is more that they decided to make so many attempts to land locally. There should be a max number of attempts.


There is a lot of pressure on pilots to land locally. But three go-arounds happen - not often, but they do.


Perhaps that decision needs to be removed from the airline and there needs to be an independent decision maker there.


Pilots are ultimately responsible for the aircraft; that's pretty much set in stone. But if ATC told them to divert, they would, unless there already was an emergency.


There is a max level, and it is three.


Well, clearly that number should depend on the distance to the next airport and how much fuel is on board. It isn't sensible to have a set number when other parameters change.


It's far from all they can do, but it seems like they are focused on some important and achievable goals. These then feed into adoption, which will bring other investment because Servo will be viable for more use cases. I see this investment as more at the level of basic science research.

Which VC is going to be interested in implementing accessibility in this situation? The Sovereign Tech Fund is an organization that values this. It's also too long-term and uncertain a project for most entrepreneurs to be involved in.

