
TL;DR The Unhook browser extension makes YouTube usable.

Searching for something on YouTube does not work as expected at all. At some point they started deleting older videos with few views. I remember once trying to access a video through a bookmark, because it would not appear in the search results, and being asked to state whether the video should be removed or kept.

It didn't get deleted thankfully, however I could not find it easily again via the search results because of all the crap recommendations.

What I have found makes search work again is the Unhook extension [1,2]. By applying a few tweaks, YouTube has become usable again, as Unhook removes most of the unrelated recommendations.

As mentioned in other comments, uBlock Origin and SponsorBlock are a requirement if one values their time. There are more efficient ways to support creators instead of watching random annoying ads and sponsor segments.

1. https://addons.mozilla.org/en-US/firefox/addon/youtube-recom... 2. https://chrome.google.com/webstore/detail/unhook-remove-yout...


I rarely see this mentioned, but you don't have to compress hydrogen that much for transportation/storage. Ongoing research on metal hydrides is promising [1].

1. https://en.wikipedia.org/wiki/Hydrogen_storage#Metal_hydride...


You’re adding yet more weight and inefficiency.


I ended up writing a small wrapper on top of SQLite based on this: https://dgl.cx/2020/06/sqlite-json-support

With proper concurrency control, it can work very well even for multi process applications.
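For anyone curious what such a wrapper can look like, here is a minimal sketch in Python using SQLite's JSON1 functions. This is not the linked wrapper; the table and method names are made up, and WAL mode plus implicit transactions stand in for the concurrency control mentioned above:

```python
import json
import sqlite3

class JSONStore:
    """Hypothetical minimal JSON-document store on top of SQLite's JSON1 functions."""

    def __init__(self, path):
        self.db = sqlite3.connect(path)
        # WAL mode lets concurrent readers coexist with a single writer.
        self.db.execute("PRAGMA journal_mode=WAL")
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS docs (id INTEGER PRIMARY KEY, body TEXT)"
        )

    def put(self, doc):
        # `with self.db:` wraps the insert in a transaction, so concurrent
        # processes see either the whole write or nothing.
        with self.db:
            cur = self.db.execute(
                "INSERT INTO docs (body) VALUES (json(?))", (json.dumps(doc),)
            )
        return cur.lastrowid

    def find(self, key, value):
        rows = self.db.execute(
            "SELECT body FROM docs WHERE json_extract(body, ?) = ?",
            ("$." + key, value),
        )
        return [json.loads(r[0]) for r in rows]

store = JSONStore(":memory:")
store.put({"name": "ada", "age": 36})
store.put({"name": "bob", "age": 41})
print(store.find("name", "ada"))  # → [{'name': 'ada', 'age': 36}]
```

For multi-process use you would open the same database file from each process; SQLite serializes the writers itself.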


According to dosdude1's thread on macrumors.com[1], more devices will eventually become compatible and wifi will get patched.

1. https://forums.macrumors.com/threads/macos-11-big-sur-on-uns...


> I am incredibly happy with every update from Apple.

I am incredibly disappointed with every update from Apple.

I bought my MacBook Pro in 2011. The most basic 13.3" machine that was available for sale. 320GB 5400RPM HDD, i5, 4GB RAM, Intel graphics.

I like backups. The optical drive had to go; I put it in an external USB enclosure. Its bay got the HDD, and in the HDD's place I put a Samsung SSD. The HDD contains my data archives and a macOS installation image.

Whenever macOS would become cluttered, I would wipe the SSD and enjoy a fresh macOS installation. Whenever a new macOS version would be released, I would update the installation image.

This enabled me to work on the go with no fear of data loss and no network dependencies.

SSDs are basically long-lasting consumables. I had to replace the drive twice, painlessly, and each time I got a faster, more robust unit with more capacity. Sure, I can carry an external drive with me. I have no desire at all to do that.

I like the mini jack, so that I can plug in my favorite headphones. The audio-out port also doubles as an optical-out. Sure, I can carry a couple of dongles with me. I have no desire at all to do that.

Sure, I could have bought the top-of-the-line 8GB RAM model back in 2011. I had no desire at all to do that. I could just buy 16GB of RAM and perform an upgrade Apple claims is not supported.

The battery lasts long enough. While coding, I get drained before the battery does. I can let the battery age with no fear of it bulging because it's contained in a shell. Replacement is easy.

I totally get where you are coming from, and agree with you that "a tiny fraction of users" want access to the hardware. However, 100% of users will want access to the hardware when their 16" MacBook Pro SSD goes.

Don't professionals require machines that cater for their needs? If the current Apple offered choices like the ones I mentioned above, would you not pay a premium to have them?

I certainly would. And for that reason I still rely on the same 2011 MacBook Pro and not the latest MacBook GoodEnough.


Mean Piston Speed [1] is a good indicator of engine longevity.

~16 m/s for automobile engines

~25 m/s for Formula one engines

~26.5 m/s for Koenigsegg’s 2.0-Liter

[1] https://en.wikipedia.org/wiki/Mean_piston_speed
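Mean piston speed is easy to compute: the piston covers two strokes per crank revolution, so it is just 2 × stroke × RPM / 60. A quick sketch (the stroke and rpm figures below are illustrative, not any particular engine's specs):

```python
def mean_piston_speed(stroke_m, rpm):
    """Mean piston speed in m/s: two strokes per crank revolution."""
    return 2 * stroke_m * rpm / 60

# Illustrative numbers, not official specs: a 92 mm stroke at 6000 rpm.
print(round(mean_piston_speed(0.092, 6000), 1))  # → 18.4 (m/s)
```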


Interesting that they use the mean of the absolute value instead of root-mean-square as in other sinusoidal applications (63.7% of the peak value vs. 70.7% for RMS).

RMS has all sorts of interesting properties, being directly proportional to effects that result from the square of the quantity being measured such as force on the connecting rods or acceleration of the piston, but mean piston speed is easier to calculate from familiar quantities to an automotive engineer like stroke and RPM. I wonder if engine longevity is actually proportional to mean piston speed or RPM, it would be easy to mistake the 7% difference given all the confounding factors...
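Both figures are easy to sanity-check numerically. For a pure sinusoid, mean-of-absolute-value comes out at 2/π ≈ 63.7% of peak and RMS at 1/√2 ≈ 70.7%, a constant ratio of π/(2√2) ≈ 1.11:

```python
import math

# Compare mean-of-|v| and RMS over one period of v(t) = sin(t), peak = 1.
n = 1_000_000
samples = [math.sin(2 * math.pi * i / n) for i in range(n)]

mean_abs = sum(abs(s) for s in samples) / n       # -> 2/pi       ~ 0.6366
rms = math.sqrt(sum(s * s for s in samples) / n)  # -> 1/sqrt(2)  ~ 0.7071

print(round(mean_abs, 4), round(rms, 4), round(rms / mean_abs, 4))
# → 0.6366 0.7071 1.1107
```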


If they're only different by a constant factor, then both have the same interesting properties and neither is much more difficult to calculate than the other -- at least for sinusoids.

If something is proportional to one, it's naturally proportional to the other.


Almost anyone on this website could answer this better than me, as I was always weak in math, but I believe they differ by a constant factor for a sine; for a more complex waveform they will not (well, the factor would be different for each waveform).


It's true that the factor between RMS and peak-to-peak is different for e.g. a sine vs. a sawtooth wave, but for any given waveform it's still a constant (just a different one), and for this engine it should be just about a sine wave anyway.


Circular motion about a crank produces an approximately sinusoidal waveform at the piston; it is exactly sinusoidal only in the limit of a very long connecting rod.
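Strictly speaking, the crank pin moves sinusoidally but the piston does not, because the connecting rod has finite length. A quick kinematic check, with an illustrative (not engine-specific) rod ratio:

```python
import math

def piston_position(theta, r, l):
    """Piston distance from crank center, for crank radius r and rod length l."""
    return r * math.cos(theta) + math.sqrt(l * l - (r * math.sin(theta)) ** 2)

r, l = 1.0, 3.5  # illustrative rod ratio l/r = 3.5, in the typical road-engine range
# Deviation from the pure cosine you'd get with an infinitely long rod:
worst = max(
    abs(piston_position(t, r, l) - (l + r * math.cos(t)))
    for t in (2 * math.pi * i / 1000 for i in range(1000))
)
print(round(worst, 3))  # → 0.146 (in crank radii): close to a sine, but not exact
```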


And Formula 1 engines are only meant to last hours (yes really, most engines don't even last one season), albeit at ridiculously high stress levels.

If we extrapolate from this, where high performance drag cars typically last minutes (20 years ago they only lasted seconds), that would mean this engine might only be good for a couple of hours of driving around the track. Assuming this is true (I am not saying it is), this engine would be pretty worthless for anything other than being a collector's item or being used for 1 or 2 races before it had to be retired.


And F1 tires only last a few laps. It's all designed in. The F1 engines don't expire in a few races because they can't build them more robust; they expire in a few races because the rules only require them to last that long. They could last all season (yes, with the same performance) if they were required to do so.


Indeed. Back in the day they used to weld the cylinder heads to the engine block before qualifying, so they could run it that bit harder to get that extra bit of performance. Obviously not something that increases the lifespan of the engine...

Similarly, in current F1 they know quite well how much life the engine has left, and how much life a quali lap takes from the engine compared to a calm out-lap.

If the regulations mandated a single engine per season they could do it, though they'd mostly just turn everything down.


I'd be surprised if they could build tires that go on for 22 GPs with the same performance, but who knows. The goal was raw speed when there were multiple manufacturers. The only year with a rule forbidding tyre changes during a race was 2005. Maybe you remember that Indianapolis GP with only 6 cars racing, because the banking destroyed the tires of the other manufacturer (which won all the other GPs).


7 races per engine including Saturday practice and qualifying. It's about 5 hours per weekend times 7. 35 hours, which a commuter car does in about 10 days.


> yes really, most engines don't even last one season

"Even" one season? If they last more than one race it means they didn't push them hard enough, so it makes sense that the engine lasts just marginally longer than the race.


The new rules set the limit at 3 engines per season, which is 21 races plus testing. So it's a balancing act, but you definitely need to reuse the engine for more than 1 race.


And for those not in the know, an F1 race is ~305 km, and they have to do two days of practice plus qualifying in a race weekend using the engines they have (same engine for qualifying as for racing). There's some more detail in this[1] article, where they point out the Mercedes F1 engine did over 3000 miles (~4900 km) during pre-season testing without issues (most in race-like conditions).

That said, from my impression it is usually the turbo or the hybrid systems that break down, it's rare for the actual engine block to be the issue barring specific production issues.

[1]: https://autoweek.com/article/formula-one/mercedes-f1-engine-...


F1 regulates the maximum number of engines a season (to 3 currently). So they have to last ~7 races.

Edit: old numbers updated


The Koenigsegg Gemera uses their direct drive system, with only one gear. So the engine will only see max revs when you're traveling at top speed, which will probably be quite rare since that's 400 km/h (249 mph).


I can't say I fully comprehend the SSR vs SPA war. Both concepts are definitely being misused/overused, one is "dated" and the other "modern", but surely the middle ground where the best usage of both can be attained is preferred. I don't know of any terms that mediate the opposition other than perhaps "hydration"?


This is the key. There are super developed SSR frameworks and super developed SPA frameworks, but getting the two to work together is like pulling teeth. No one tries to write an SSRSPA framework because that would be considered too "opinionated", but maybe that's exactly what we need.


Doesn't nextjs do this already? You write an SPA that nextjs implicitly also knows how to render server side


Yeah, that's essentially the purpose of Next.js and it does it very well.


If you're using node.js and redux, then https://github.com/faceyspacey/redux-first-router pretty easily gets you there. Other than turning route changes into redux actions, I don't find it particularly opinionated.


I made https://www.roast.io/ because of this.

Most of the current SPA frameworks can re-hydrate SSR'd markup without issue, so instead of configuring SSR, just use a headless browser, which is what Roast does.


Ok, I was confused by your comment and the other person's comment. SSR and SPA are not mutually exclusive things.


They aren't mutually exclusive, but they are either a) at odds with each other, because the frontend is in JavaScript and the backend is in Rails or Python or something else, or b) completely in sync, because the backend and frontend are both in JavaScript, but terrible, because the backend and frontend are both in JavaScript. With WebAssembly on the horizon we will soon have Rust/Crystal/Go-driven unified backends+frontends, and then we can finally exit this dark age of JS being the only frontend language.


I have not had the chance yet to play with it, but it's very high on my list: Drab, an extension library for the Phoenix framework "providing an access to the browser's User Interface (DOM objects) from the server side".

A friend did an experimental library way back using server-sent events for the Yii framework. I don't think I fully appreciated the idea back then. Admittedly, Elixir seems a better fit for this pattern than PHP though.

https://tg.pl/drab


As someone who prefers Python, it is unfortunate that the only way to get SSR is to commit to JS, but with WebAssembly I think we will eventually have tools such as React and Redux translated to other languages.

As far as performance goes, JS isn't too bad. It's async by default, and a lot of effort has gone into making it run fast.


Yeah, I actually have few problems with JS itself. It's npm and this tendency to have a million 3-line dependencies. In, for example, the Ruby gems ecosystem, there just isn't that problem.


I am using an even more minimal subset of CoffeeScript 2, basically classless, and love it. I can't find a sane reason to hand-write ESxx.

I complement it with Rust as needed, which is the only language that justifies having a verbose syntax.

Those two couldn't be farther apart, but their combination covers an extremely wide spectrum of programming tasks. From prototyping, to incrementally improving the robustness and performance as needed.

jashkenas did a great job, and CS2 is a very elegant and ongoing effort.


What do you mean by 'first class support' of WASM in C/C++? Is it somehow possible to produce wasm binaries without utilizing emscripten?


Yes, LLVM has had built-in wasm support for a while now.


The plan for emscripten is actually to replace its own wasm compiler with LLVM, and possibly drop its asm.js compiler in favor of running wasm2asm on LLVM's wasm output: https://github.com/kripken/emscripten/issues/5827

Aside from keeping emscripten's code smaller / more maintainable and allowing the team to focus more on its role as high-level tooling, this should improve the size and performance of emscripten output since IIRC it's currently missing out on a lot of optimization opportunity by producing wasm as transpiled asm.js.


> IIRC it's currently missing out on a lot of optimization opportunity by producing wasm as transpiled asm.js.

That's actually not true: the asm.js => wasm path emits better code (smaller, faster) than the wasm backend path currently.

However, the wasm backend path is being improved, and should eventually get to parity.


Ah okay, interesting. Is it because wasm doesn't yet add any new functionality over asm.js that using asm.js as an intermediary step isn't inherently worse?

In that case, it sounds like the LLVM backend will only yield clear user-facing benefits when new features like pthreads are introduced?


Well, the "asm.js to wasm" path actually isn't pure asm.js anymore. We added i64 support and other things a while back, as intrinsics. So the asm2wasm path isn't limited by asm.js. It's weird ;) but it produces good code...

The wasm backend does have other benefits, which is why we'd like to move emscripten to use it by default:

* It uses LLVM's default legalization code, so it can handle LLVM IR from more sources (i.e. not just C and C++ from clang).

* We can stop maintaining the out-of-tree LLVM that asm2wasm depends on.

The LLVM wasm backend isn't ready yet (larger output code, slower compile times, a few missing features) but it's getting there.


Incidentally, that's how this target works in Rust; we also have an emscripten-based target.


I like this approach a lot, especially as implemented in [1].

Perhaps a tutorial, or some examples can be documented in that style. Hyperapp + hyperscript-helpers[2] is my favorite combo at the moment.

1. http://dave.kinkead.com.au/modelling-the-boundary-problem/

2. https://github.com/ohanhi/hyperscript-helpers

