indentit's comments

I personally really enjoyed this article - thanks for sharing.

I have worked for companies which use OKRs to help decide what they want to achieve, but it was never clear to me how they decided which features to implement to reach those goals.

Seeing the "impact, confidence, and ease" (ICE) scoring concept and how it should be done is quite an eye-opener for me. Maybe that has been done where I work and just never shared with me - a lowly senior developer, who knows...


Specifications are there for a reason... Why use Bluetooth at all if they don't actually use it properly?


You can still connect AirPods to an Android device using Bluetooth; you just don't get the seamless connection or the Spatial Audio support that relies on the extended protocols.


You can't even change the noise cancellation mode.


It's just on and off; it doesn't let you choose between the different modes (transparency, conversation awareness, etc.).


> Why use Bluetooth at all if they don't actually use it properly?

Because they needed a way to get audio to the AirPods wirelessly and to work with their devices? That’s a pretty good reason to use Bluetooth.

I doubt they got together and tried to scheme up a way to break Bluetooth in this one tiny little way for vendor lock-in. You can use the basic AirPods features with other Bluetooth devices. It's just these extended features that were never developed for other platforms.

HN comments lean heavily conspiratorial, but I think the obvious explanation is that the devs built and tested against iPhone and Mac targets and optimized for those. This minor discrepancy was never worked around because it isn't triggered on Apple platforms, and other platforms aren't a target for them.


It reminds me of the USB keyboard extender that came with old Macs. There's a little notch in the socket so you can only use it with Apple keyboards. At the time I thought it was a petty way of preventing you from using it with any other device, but apparently the reason they didn't want you to use it with other devices is that the cable didn't comply with the USB spec.

Some pictures here: https://www.reddit.com/r/assholedesign/comments/b1u08k/this_...


Yes, USB extension cords are not spec-legal (because the device isn't built expecting to be extended).

But you can have an extension cord whose plug fits a regular USB port on one end, while its socket on the other end won't accept a regular USB plug (that's the notch).

So the keyboard has a superset connector that fits both regular USB sockets and the notched one, because the keyboard is verified to work right through the extension cord.

This design also means you can't plug one extension cord into another to get an even longer distance (which the keyboard wouldn't expect). Pretty clever solution.


Did you even bother reading the comments on your own citation?


Yes, I did actually. I genuinely don't know what you're referring to.


> I doubt they got together and tried to scheme up a way to break Bluetooth in this one tiny little way for vendor lock-in.

No conspiracy needed; surely it would be unilateral? It seems exactly the sort of thing Apple would do to protect its ecosystem.


Perhaps Apple correctly implemented the specification here.


This is Microsoft's playbook from many years ago: embrace, extend, extinguish.


I tried using Mergiraf a year or so ago and ran into so many weird problems (which I eventually tracked down to it) that I disabled and uninstalled it and never looked back. It was more hassle than it was worth.


What kind of problems did you encounter? Could you provide an example?


Having to pipe to a pager - to follow the Unix philosophy - means:

- extra typing each time
- the pager receives static output, with no interactivity. Sure, most pagers can search, but there is no way to have a table with adjustable columns, or to toggle word wrap, line numbers, etc.

I feel that for a tool like bat, it is better to have the pager built in and not follow the composable philosophy, because it is just so much more convenient. Of course, the minus integration in bat is fairly basic at the moment; I guess supporting different code paths for static vs. interactive pagers would increase the maintenance burden quite a lot...


The `less` command can toggle line numbers (`-N`, either as a command line argument, or while running it interactively) and toggle line wrapping (`-S`, again either way). It can also receive streamed output with `+F`, making it a viable alternative to `tail -f`. (I'm not sure if what you meant by static output is that it all has to be available before the command can read it.)
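For anyone who hasn't tried them, here's roughly what those look like in practice (file names made up):

    seq 1 1000 | less -N      # piped input with line numbers; type -N inside less to toggle them
    less -S wide-table.txt    # chop long lines instead of wrapping; type -S inside less to toggle
    less +F app.log           # follow appended output, like `tail -f`; Ctrl-C stops following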

It can't adjust the columns in a table, but then I don't believe that `bat` can either, and I'm not aware of any similar program that can. (If there is one, please let me know.)

In the case of `bat` (or `man` or other programs that use a pager) it often requires no extra typing either, since `bat` will pipe its output to `less` by default (or to whatever you specify with the `PAGER` environment variable).
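For example (file name made up):

    PAGER="less -RS" bat notes.md   # -R preserves the colors, -S chops long lines

(bat also checks `BAT_PAGER` first, if you want a different pager just for bat.)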

The `less` command can be quite a bit more powerful than it initially looks. I'd recommend looking over the man page sometime -- you might be surprised. In fact, looking over the `minus` page, other than being able to be compiled into another program so it doesn't have to be shipped separately on Windows, I'm not sure what it can do that `less` cannot. (If I'm missing some killer feature, please let me know. I'm not trying to bash `minus`, I just don't know what more it offers.)


Your top comment is what happens when shiny hipster programmers refuse to read the less(1) documentation.


Isn't that what branch policies are for? It can get annoying when making a release (from a local machine, as opposed to automatically in CI/CD pipelines), but in other circumstances it serves the purpose very well in my experience.


Ooh, how do those work locally?

I've only encountered those on the server side.


git itself has no concept of branch policies; it is purely a server-side thing. But surely it doesn't really matter what branch you commit to locally? If you can't push it, you haven't done any damage, and you can just create a new branch and push that instead.


Yes, but I'd like to avoid the "create a new branch, switch back to main, reset main back to origin, come back to the new branch" dance. And a git hook can prevent that, but it's not trivial to set up (particularly when there are lots of repos).
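To spell the dance out (branch name made up):

    git switch -c my-fix            # carry the stray commits onto a new branch
    git switch main
    git reset --hard origin/main    # put local main back where the remote has it
    git switch my-fix               # carry on working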


Maybe create a shell alias which would act as a wrapper around git to do just that, when you try to commit on the wrong branch.
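Something like this sketch, say (a shell function rather than a plain alias, since it needs to inspect the arguments; assumes "main" is the protected branch):

    git() {
      if [ "$1" = "commit" ] && [ "$(command git branch --show-current)" = "main" ]; then
        echo "refusing to commit on main; try: git switch -c <branch>" >&2
        return 1
      fi
      command git "$@"   # anything else passes straight through to real git
    }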


The solution could be a pre-push hook. I am also not a fan of pre-commit hooks because I just want to commit my wip changes. Not stash. Commit.

It's fine if the auto formatting tool hasn't been run. If the pre-commit hook changes my files silently, that is a big no-no for me.

I have had tools break things before and it makes it very hard to work out what happened.

Having it fail to push means I get to choose how to fix up my commits - whether I want a new one for formatting changes etc or to go back and edit a commit for a cleaner public history.
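A pre-push hook for this can stay strictly read-only; a minimal sketch, where `make fmt-check` stands in for whatever non-mutating check a project uses:

    #!/bin/sh
    # .git/hooks/pre-push (remember chmod +x) - block the push, never touch the files
    if ! make fmt-check; then
      echo "push blocked: formatting check failed (no files were modified)" >&2
      exit 1
    fi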


I used to think pre-push was better than pre-commit, but at some point I realized I was actually just kicking the can down the road and leaving bigger problems for myself. Pre-commit isn't downside-free, but it's the better compromise for me.

100% agree on hooks being read-only.

Username oddly relevant to context btw.


> Another area where you can lean a lot more heavily on HTML is API responses

Please no - it is so much nicer and easier, when using a website with poor UI/filtering capabilities/whatever, to look at the network requests tab in the browser devtools and get JSON output which you can work with however you want locally, versus getting HTML and having to strip the presentation fluff from it, only to discover it doesn't include the fields you want anyway because it assumes it will only ever be used for that specific table UI... Plus, internet access while out and about isn't necessarily fast these days, and wasting bandwidth on UI markup which could be defined once in JS and cached is annoying.


> only to discover it doesn't include the fields you want anyway because it is assuming it should only be used for the specific table UI

It sounds like you're complaining that a server isn't shipping bits that it knows the client isn't going to use?

> wasting bandwidth for UI which could be defined once in JS and cached is annoying

How much smaller is the data encoded as JSON than the same data encoded as an HTML table? Particularly if compression is enabled?

ETA: And even more so, if the JSON has fields which the client is just throwing away anyway?

What seems wasteful to me is to have the server spend CPU cycles rendering data into JSON, only to have the front-end decode from JSON into internal JS representation, then back into HTML (which is then decoded back into an internal browser representation before being painted on the screen). Seems better to just render it into HTML on the server side, if you know what it's going to look like.

The main advantage of using JSON would be to allow non-HTML-based clients to use the same API endpoints. But with everyone using Electron these days, that's less of an issue.


> What seems wasteful to me is to have the server spend CPU cycles rendering data into JSON, only to have the front-end decode from JSON into internal JS representation, then back into HTML (which is then decoded back into an internal browser representation before being painted on the screen). Seems better to just render it into HTML on the server side, if you know what it's going to look like.

Well put. I think the main issue is that we have a generation of "front end engineers" who have only ever worked on JavaScript apps. They have no experience of how easy it is to write HTML and send it from a server. The same HTML works now, worked 20 years ago, and will work 20 years from now.


This is why many websites (even ones light on interactive functionality) now display a progress bar after the initial page load. It's a big step backward for user experience, and I even see it on blogs.


Your average veteran "front end engineer" implemented, just last month, half a dozen features that are almost impossible to do server-side only, "just send the HTML, dude".

Progressive enhancement, forms with fields depending on other fields, real-time updates, automated retries when the backend is down, advanced date selectors, maps with any kind of interactivity.

Any of the above is an order of magnitude harder to do backend-only vs backend API + any frontend.


Who says you can't use any JS when doing server-side rendering?

And why would you even want progressive enhancement if you can just send the proper full version right away, without waiting for MBs of JS to "hydrate" and "enhance" the page?


You know, you went straight for the "bait" case while ignoring all the others.

Progressive enhancement is often done to mask the fact that fetching the data takes an unacceptable amount of time; otherwise no effort would be made to mask it.

Your plan is to take that same unacceptable time and add the server-side render-to-HTML time on top of it, and that will improve it via...


The server rarely has to render/build the HTML. It will do it once and then cache it. 99% of websites don't have real-time data. They are just boring webpages.


> How much smaller is the data encoded as JSON than the same data encoded as an HTML table? Particularly if compression is enabled?

Well, as with many things in life, it depends. If the cells are just text, there is not much difference. But if the cells are components (for example, popovers or custom buttons that redirect to other parts of the site), the difference from not shipping all those components per cell and instead rendering them on the frontend starts to become noticeable.


How lucky then that Web Components exist, and we don't actually have to ship the whole DOM for each cell...


So a backend-focused team that minimizes frontend is going to start fiddling with the DOM and shipping web components.

Sure, tell me more. I always enjoy a cool story.


Eh, I don't really think the frontend/backend distinction in webdev should exist. Your React components still get served from a backend too (even if they are static HTML).


As someone else who does this, I think we're a pretty small audience. People shouldn't be building web apps with our reverse engineering experience in mind.


And if you come back to this script in a few years' time and it pulls a newer version of numpy with an incompatible API, there is no easy way of knowing which version it was designed to be used with...


Only if you didn't run `uv lock --script` [0] and commit the lockfile. If you don't want that second file sitting next to the script, you could instead specify an `exclude-newer` timestamp, and uv won't resolve any dependency versions newer than that.
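Concretely (script name made up):

    uv lock --script my_script.py   # writes a lockfile alongside the script
    uv run my_script.py             # later runs reuse the locked versions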

It might be cool if uv ignored dependency versions newer than the script's last modified timestamp, but this behavior would probably need to be very explicit to avoid the situation where a working script is mysteriously broken by making an innocuous change which updates its last modified timestamp, causing uv to resolve a newer, incompatible dependency version.

[0] https://docs.astral.sh/uv/guides/scripts/#locking-dependenci...


You can of course absolutely use `"numpy~=1.12"` as a requirement.
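e.g. with uv, which writes the constraint into the script's inline metadata (script name made up):

    uv add --script my_script.py 'numpy~=1.12'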


The syntax highlighting on the code snippet is highly misleading...


I really like how clear and well laid out these rules are. It covers lots of things that I have always thought about when I write instructions in README files etc, so it's very nice to see everything neatly described/reasoned in one place. Thanks for sharing!

