The 787's air quality is better for two reasons: it has higher humidity and fewer VOCs. Both come from the same design choice: the 787 takes cabin air directly from the atmosphere rather than from a compressor bleed valve as in every other aircraft.
This was a radical design decision for Boeing because bleed air usually has three key functions: it is hot enough to decompose toxic ozone, it is at high pressure so it can pressurise the cabin, and it is hot so it can heat the cabin. Without bleed air, Boeing had to do all of this with new components: an ozone filter/catalyst, electric compressors, and a ram-air heat exchanger to heat up the air (slowing the air down raises its temperature). The system is more complex but, according to Boeing, uses less energy [1] and makes servicing the engines easier.
For passengers, outside air contains more moisture than bleed air (air becomes less humid when it is compressed and heated), so the inlet air humidity is around 7% compared to 2% on other planes. That is still less than the humidity in a typical cabin, which is about 10 to 20%; some of that is recirculated residual moisture, but most comes from passenger respiration and transpiration [2]. Even so, the higher inlet humidity can significantly slow the loss of moisture over a long flight. In addition, VOCs and other bleed-air contaminants are minimised [3], which Boeing claims means we feel better after a long flight.
For me personally it is just general (mild) annoyance with a community that somewhat consistently likes to think it is smarter and better than others, and which is then only ever willing to admit it was wrong in roundabout ways like "well this was all fun, anyone who thinks it was a waste of resources or what-have-you doesn't see how much impact it had".
You can see this wild speculation play out _commonly_ for lots of will-be fads like cryptocurrency, metaverse, prompt engineering, vector databases, "autoGPT"/langchain, GPT3/4 performance degradation, GPT4 architecture, and more.
People here dress it all up in well-written prose, citing their past experience in big tech or the Ivy League, but at the end of the day much of it is as misinformed as a viral 4chan post. And then, as I said, there is very little postmortem from those same posters (although, to be fair, I have seen several cryptocurrency people finally admit they were wrong).
edit:
For clarity, I am not encouraging a shame-based "admit you're wrong and I'm right!" attitude. That just results in more of the same, but from the other side. I am merely advocating a healthy amount of humility and acceptance that it is _absolutely_ okay to be wrong, but that it is quite important to _admit_ it (if only to yourself) in fairly clear terms.
My frustrations are largely with social media in general, and the notion that scientists are gatekeeping seems to forget the very real effects of misinformation. None of us like to realize it, but some people really have begun to take the word of internet comments over that of credentialed experts, and it is _ruining_ society in my opinion.
Some questions from someone who is obsessed with getting 0 frames of latency:
>the one frame of added latency with compositing and game VSync does not apply to Wayland
I thought that on X with vsync disabled, a fullscreen game will write pixels directly to the framebuffer on the gpu without going through the x server / compositor, so there will be 0 frames of delay. Is this correct?
> application contents can be put directly on the screen with the display hardware (which is called “direct scanout”. You may have heard the X related term “unredirection” used for the same thing, too).
So on a machine running Wayland, direct scanout would let a fullscreen game render directly to the framebuffer without any copying, compositing, color remapping, etc., just pure pixel data to the screen without any intermediate steps, right?
And how does this work exactly? Does the program that wants to render to the screen have to call some API function to explicitly opt in to direct scanout? Because most games, even the ones designed to run on Linux, are not aware of Wayland.
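For what it's worth, my mental model of how the compositor decides this each frame, as I understand the article (purely illustrative C++, not any real Wayland/KMS API; my understanding is that clients don't opt in explicitly, the compositor promotes eligible fullscreen buffers itself):

```cpp
// Toy model of the "direct scanout" decision. Not a real Wayland/KMS API;
// the compositor makes this call per frame, clients don't explicitly opt in.
struct Buffer { int width, height, format; bool scanout_capable; };
struct Output { int width, height, format; };

// If the fullscreen buffer can be read by the display engine as-is, the
// compositor skips its own composite pass and programs the display
// controller to scan the client's buffer out directly (zero extra copies).
bool can_scanout_directly(const Buffer& b, const Output& o) {
    return b.scanout_capable
        && b.width == o.width && b.height == o.height   // covers the screen
        && b.format == o.format;                        // no conversion pass
}

int main() {
    Buffer game{2560, 1440, /*format=*/0, true};
    Output monitor{2560, 1440, /*format=*/0};
    // true -> flip straight to the game's buffer; false -> composite first
    return can_scanout_directly(game, monitor) ? 0 : 1;
}
```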
Also, I think something is up with the final measurements: on his 120 Hz monitor one frame of latency is about 8 ms (1000 ms / 120 ≈ 8.3 ms), but the lowest he ever measured was 19 ms, so it seems like something is adding at least a frame of latency.
Windows handles this by giving the process that owns the foreground window timeslices that are three times longer than normal (unless you are on a server version or change the setting).
It also boosts the priority of threads that get woken up after waiting for IO, events, etc, as opposed to being CPU bound. Which I guess Linux's fair scheduler also kind of ends up doing, even if via an entirely different mechanism (simply by observing that they used less CPU).
It's interesting how differently operating systems approach their schedulers. Linux tries to build quite sophisticated schedulers, trying out very different concepts, but keeps the scheduler very separate from and "blind" to the rest of the system. Meanwhile, Windows has an incredibly simple scheduler (run the highest-priority runnable thread, interrupt it if its timeslice is over or a more important thread becomes ready, repeat) and puts all the effort into letting the rest of the kernel nudge priorities and timeslice lengths based on what the process or thread is doing.
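To make that concrete, here's a toy sketch of the Windows-style loop plus the IO-wake boost described above (all names and numbers are mine, purely illustrative; real quanta, priority levels, and boost/decay rules are more involved):

```cpp
#include <cstdio>
#include <vector>

// Toy model of the policy described above: pick the highest-priority
// runnable thread each quantum, and temporarily boost threads that wake
// from an IO wait so interactive work preempts CPU-bound work.
struct Thread {
    const char* name;
    int base_priority;    // static priority
    int boost = 0;        // temporary boost, decays as the thread runs
    bool runnable = true;
    int effective() const { return base_priority + boost; }
};

// Called when a thread finishes an IO wait: give it a temporary bump.
void wake_from_io(Thread& t) {
    t.runnable = true;
    t.boost = 2;  // decays by 1 per quantum run, see below
}

Thread* pick_next(std::vector<Thread>& threads) {
    Thread* best = nullptr;
    for (auto& t : threads)
        if (t.runnable && (!best || t.effective() > best->effective()))
            best = &t;
    return best;
}

int main() {
    std::vector<Thread> threads = {
        {"cpu-hog", 8}, {"editor", 8}, {"daemon", 4},
    };
    wake_from_io(threads[1]);  // "editor" just finished reading a file

    for (int quantum = 0; quantum < 4; ++quantum) {
        Thread* t = pick_next(threads);
        std::printf("quantum %d: run %s (prio %d)\n",
                    quantum, t->name, t->effective());
        if (t->boost > 0) --t->boost;  // the boost wears off as it runs
    }
}
```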
I don't work at Reddit, but if they are headed for an IPO, the employees are probably holding onto their stock and crossing their fingers until it is worth something.
I actually read this article for some random reason (it happens sometimes) and HN has a similar problem - there are third party apps that charge money. HN has always taken a friendly approach to third-party projects as long as they make it clear they aren't official and include links to the original HN comments/submissions, but the charging money part doesn't feel right to me. I'd rather that everything HN-related be free and in the spirit of just-because.
Since Google started charging us quite a bit for the Firebase API a year or two ago, we're in a similar position to what the article describes - we're paying for https://github.com/HackerNews/API and thus indirectly funding the commercial apps.
Btw this is not to take a side on any of the Reddit stuff - that would be ultra dumb of me and anyway I'm succeeding at staying ignorant of the details. But I think it's worth saying something publicly about the analogous issue on HN because we might have to do something about it at some point.
The current API isn't particularly usable (you have to request individual items to get an entire thread) and our long term plan is to replace it with a more usable one that just gives a JSON representation of any HN URL and is hosted on our server. I've been posting about that for an embarrassing number of years (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...) but you all know how slow we are. Anyhow, whenever we get there, we'll probably do proper things like have API keys and terms of service and whatnot. If the commercial app thing is still a thing at that point, maybe we can find some decent way to address it.
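To illustrate the "request individual items" problem with the current API, here is a rough sketch of walking a thread via the Firebase item endpoint documented in the repo linked above; it assumes libcurl and nlohmann/json are available, and error handling is trimmed for brevity:

```cpp
#include <curl/curl.h>
#include <nlohmann/json.hpp>
#include <iostream>
#include <string>
#include <vector>

static size_t collect(char* data, size_t size, size_t nmemb, void* out) {
    static_cast<std::string*>(out)->append(data, size * nmemb);
    return size * nmemb;
}

// One HTTP round trip per item against the public v0 endpoint.
nlohmann::json fetch_item(long id) {
    std::string url = "https://hacker-news.firebaseio.com/v0/item/"
                      + std::to_string(id) + ".json";
    std::string body;
    CURL* curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return nlohmann::json::parse(body);
}

// A thread of N comments costs N+1 requests: each item only lists the ids
// of its children, so the client must recurse to reassemble the tree.
void print_thread(long id, int depth = 0) {
    auto item = fetch_item(id);
    std::cout << std::string(depth * 2, ' ')
              << item.value("by", "[deleted]") << "\n";
    for (long kid : item.value("kids", std::vector<long>{}))
        print_thread(kid, depth + 1);
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    print_thread(126809);  // any story/comment id works; this one is arbitrary
    curl_global_cleanup();
}
```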
Reddit may have a user agreement that allows them to use the data forever, but historical versions that were not visible (effectively soft-deleted on edit) put this into a risk area: it removes someone's ability to control their data, transferring ownership far more explicitly to Reddit.
Reddit should be very cautious here... you only get a lot of protections (the EU eCommerce Directive, etc.) by being a mere conduit. Arguably the volunteer moderators took on a lot of the liability by their act of moderating, and Reddit was able to distance itself from a lot of the content on the platform.
But once Reddit starts intervening in the content, choosing to make some old revisions public, it is making editorial decisions that strip away a lot of the things that protect it.
This is an incredibly naive move in the big picture.
HN being endlessly contrarian is really weird sometimes. You are witnessing, in real time, the rebirth of community-hosted and community-run forums: the very thing HN has been lamenting the death of forever.
This isn't some "migrate from walled garden A to walled garden B"; this is the community setting up new-age phpBB, except it's federated and interoperates with anything that speaks ActivityPub.
You can subscribe to /r/startrek from your Twitter account. You can add people on Twitter and blogs to your Reddit feed. If this ends up not succeeding I wouldn't gloat because this is the endgame us tech nerds have asked for forever.
Tape drives have multiple heads for IO, which lets some fail along the way while you can still read and write your data. This sounds good, but it really just means tape drive heads are flaky. What do you do when too many fail? You call IBM or whoever and get them to replace your tape drive, or "reman" it by replacing the failed heads. This is the only way they actually achieve their warranties around lifetime reads/writes: they assume you'll fix the hardware along the way.
These HAMR drives have the same problem. The "heat-assisted" part just means they're using a laser to heat up a piece of gold, and sometimes the gold kinda drips around and the head can be ruined. So their read/write lifetime numbers are pretty loose compared to PMR, and there is an assumption that you'll "reman" these drives if heads start to fail. Worse, the gold can even drip onto the platter, giving you permanent data loss anyway.
Lastly, they use SMR to get this density. SMR is not like PMR. PMR is what you think of in an HDD: many small blocks, either 512 B or 4 KiB, that you can read and write in place. SMR has 256 MiB (or 128 MiB) "zones" that you can only append to or reset. Instead of being able to write randomly across the drive's capacity, you need to plan out your writes across appendable regions, which complicates GC and compaction and reduces your total system IO. Random read performance is still better than tape, but this basically turns your HDD solution into something that looks a lot more like tape. The technology is incredibly unpopular for this reason.
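To make the zone model concrete, here's a toy sketch of the append/reset interface described above (sizes and names are illustrative, not the real ZBC/ZAC command set):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Toy model of an SMR zone as described above: sequential writes land at
// a write pointer; reclaiming space means resetting the *whole* zone.
class Zone {
    static constexpr size_t kZoneSize = 256 * 1024 * 1024;  // 256 MiB
    std::vector<uint8_t> media_;
    size_t write_pointer_ = 0;
public:
    Zone() : media_(kZoneSize) {}

    // Append-only: no overwriting in place anywhere in the zone.
    size_t append(const uint8_t* data, size_t len) {
        if (write_pointer_ + len > kZoneSize)
            throw std::runtime_error("zone full: caller must pick another zone");
        std::copy(data, data + len, media_.begin() + write_pointer_);
        size_t at = write_pointer_;
        write_pointer_ += len;
        return at;  // caller must track where its record landed
    }

    // Reclaiming space = copy live data elsewhere (GC/compaction), then
    // reset the entire zone. This is the planning burden the parent means.
    void reset() { write_pointer_ = 0; }

    // Random reads are still fine, as noted above.
    uint8_t read(size_t offset) const { return media_.at(offset); }
};
```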
The market for these things is much smaller than most HDD vendors would like. A few hyperscalers have figured out how to write out backups efficiently, but they want a lot of read IO, and reducing the spindle-to-byte ratio means you have less total system bandwidth to your data.
This means the price per byte would actually need to be lower than their PMR drives (to say nothing of the wildly better warranty agreements they'd need) for these to have better TCO than current-generation drives.
Yeah. There's a story I heard from the launch of Zelda: Breath of the Wild (the last one). After what I assume was a flurry of work and crazy deadlines, they sent everyone on the development team home for 2 weeks to rest, and maybe just enjoy playing the game they had made with their families before it launched. After coming back into the office, needless to say, everyone had little things they personally wanted to tidy up or fix before the launch. So they did.
This seems like an obvious thing to do, but I’ve never heard of anyone else doing it. If your game is amazing, why isn’t your team taking the time to enjoy playing it?
GPT basically has access to two contexts: its internals, and the prompt that it gets
Then, when you ask ChatGPT something, it takes your prompt and generates a new prompt that includes the previous messages in your session, so that GPT can use them as context.
But, there’s a limit to the size of the prompt. And it’s not that big.
So ChatGPT's magic is figuring out how to craft prompts, within the size limit, to feed GPT so that it has enough context to give a good answer.
Essentially, ChatGPT is some really amazing prompt-engineering system with a great interface.
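A minimal sketch of that "fit the history into the prompt budget" step (the token count here is a crude stand-in, not GPT's real tokenizer, and real systems do smarter selection or summarization than just dropping the oldest turns):

```cpp
#include <deque>
#include <string>

// Crude stand-in for real tokenization: ~4 characters per token is a
// common rule of thumb, not how GPT actually counts tokens.
size_t estimate_tokens(const std::string& s) { return s.size() / 4 + 1; }

// Build a prompt from the newest messages that fit the budget: the naive
// version of what's described above. Older turns simply fall out of the
// window.
std::string build_prompt(const std::deque<std::string>& history,
                         size_t budget_tokens) {
    std::deque<std::string> kept;
    size_t used = 0;
    // Walk backwards from the newest message, keeping what fits.
    for (auto it = history.rbegin(); it != history.rend(); ++it) {
        size_t cost = estimate_tokens(*it);
        if (used + cost > budget_tokens) break;  // out of context window
        kept.push_front(*it);                    // restore chronological order
        used += cost;
    }
    std::string prompt;
    for (const auto& msg : kept) prompt += msg + "\n";
    return prompt;
}
```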
I don't understand why trampolines are needed at all; I would think you could pass the stack reference (or references to the captured variables) as an extra argument to the nested function, and pass them around together with its address as a struct. Isn't this what lambdas do in C++?
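Roughly, yes: a capturing C++ lambda is a compiler-generated struct holding the captured variables plus a call operator, which is exactly the by-hand scheme suggested above. The trampoline exists because a plain C function pointer has no slot for that extra environment argument. A sketch:

```cpp
#include <cstdio>

// The suggestion above, done by hand: environment as a struct, passed
// alongside an ordinary function pointer.
struct Env { int x; };
int add_x(Env* env, int y) { return env->x + y; }

// A "closure" is then just the pair (function pointer, environment).
struct Closure {
    int (*fn)(Env*, int);
    Env* env;
    int operator()(int y) const { return fn(env, y); }
};

int main() {
    int x = 42;

    Env env{x};
    Closure manual{add_x, &env};

    // A capturing C++ lambda is essentially the same thing, generated by
    // the compiler: an unnamed struct holding `x` with an operator().
    auto lambda = [&x](int y) { return x + y; };

    std::printf("%d %d\n", manual(1), lambda(1));  // prints: 43 43

    // The catch: neither fits a plain `int (*)(int)`, because that
    // signature has nowhere to carry the environment. GCC's nested-function
    // trampolines exist to fake exactly that missing slot.
    // int (*plain)(int) = lambda;  // error: capturing lambda won't convert
}
```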
I remember struggling to write essays when I was in high school. Well, I was fine, but my teachers insisted it was long-winded gibberish. Concerned, my grandmother told me: "Ask Sonia." I knew Sonia was her friend, and that they liked to argue a lot, but I was a bit confused by the advice.
It turns out editing was Sonia's job: she was the head reader at a very prestigious publishing house, meaning she gave notes and feedback to very famous authors, including four Nobel Prize laureates. You kind of have to know what you are doing when you send back a manuscript full of red in the margins and the person can respond "I've got a Nobel Prize and you don't." She definitely had the icy stare to match.
Oddly, her advice was incredibly simple, and fit into two very short rules:
* Subject, Verb, Complement –– in that order. If you see two verbs, put a period between them.
* Things are confusing if you don't put them in order: start at the beginning, and lead with the widest piece of context that explains the rest.
I don’t apply her rules every time, but for every technical document, every time I’ve tried, it’s been night and day.
That typographic argument is really resonating with me.
Vetoes are somewhat overrated. Agents may have the right to veto, but might not have the power to deal with the consequences.
Russia has a UN veto and uses it, because they can weather the consequences.
Could Hungary piss off all of its allies, trade partners, basically everyone but Russia? Doubtful. Hungary is playing a skillful, if somewhat devious, game of balancing between Russia and the West. But an outright FU to a (clearly) US-sponsored NATO expansion? I can't see them weathering this.
Signalling a veto is often just a negotiation step.
I'd say it's the recommended book for type systems. For type theory, I'd recommend "Certified Programming with Dependent Types" by Adam Chlipala or "Programming Language Foundations in Agda" by Philip Wadler.
The article's implementation grows the stack unnecessarily through recursively nested closures. A commenter pointed out a flaw in my code. This new implementation is even faster and more concise by doing away with the closures (other than the single inner closure):
I kind of disagree with this. For the post-Beethoven era, sure, but the standard way of playing Beethoven and before tends to be heavily biased towards an uninformed modern idea of what classical music "should" be, which is stodgy, slow, smooth, heavily refined, overcooked with crescendos and decrescendos, and quite inauthentic. Almost as though it's shameful for music to be immediate and exciting at the risk of being unrefined. If you compare Furtwängler's[1] Beethoven symphonies to Christopher Hogwood's... one sounds like music for wealthy old people at a concert hall. The other sounds like music. (Don't get me started on the routine butchering of Bach...)
But that's just my opinion. The point is it really pays to try different conductors, performers and performances before the 1830s or so. I'm not sure why that time changes things, but I'd guess the romantic style that came after more closely aligns with artsy-fartsy performance practices, and that composers also became more careful about marking how their works should be performed. Anyway, trying different things is also a good exercise in developing personal taste, which is a key ingredient of music appreciation.
[1] I originally had von Karajan here, but I went back to have a listen and it wasn't as bad as I remembered. Substituted Furtwängler's inexplicably famous recordings, which sound like the entire orchestra took Valium: https://www.youtube.com/watch?v=3bOxcryX1VE . Compare to Beethoven's recommended tempo and historical style: https://www.youtube.com/watch?v=-Y07M8e5g-Y
There are too many people who think their questions are the important ones that must be addressed before we are allowed to make progress. The more gates/approvers you add before changes can be made or new ideas tried out, the less likely you are to build great things and build them fast. Instead you get slow, bureaucratic, design-by-committee solutions, and the more unique technological directions get shelved. One of my favorite Bezos quotes is about this: "even well-meaning gatekeepers slow innovation".
You'd think the UK grid would have better tech for revenue protection (a.k.a. protection against stealing). There are a lot of companies that specialize in this in the US. It's a combination of data processing and decent-quality hardware on the grid.
My suspicion is they haven't done upgrades on their systems...
I was looking at this the other day to learn OS stuff. Too bad, as it seems like a very nice design.
Does anyone have any OS course / book recommendations?
I've worked my way through the excellent MIT course on xv6 [0], but I'm not sure what to work on next. Something related to Linux or one of the BSDs would be nice to see how things are done in the real world.
I cannot recommend this MIT course enough for those getting started. The projects are set up in a very nice way (i.e. if you can't complete one tricky assignment, you don't need it as a prereq for later ones) and the code is very simple. They've also gone to great lengths to keep the setup simple (e.g. cross-compiling for RISC-V and running on qemu). It's also a great way to really understand an OS, as you'll make mistakes that leave you scratching your head until you realize you messed up your page table and yeeted your kernel stack out of your address space.
I don't fully agree with that.
- Cheaper: Well, most decent banks I have worked with don't charge for USD wires, or they charge something like $20 flat, which happens to be on par with or cheaper than Ethereum blockchain fees nowadays.
- Faster: In my experience (sending hundreds of wires every month), the actual "sending time" of USD wires via Fedwire or SWIFT is literally minutes, so that's on par with blockchains. Most of the time the funds are just stuck somewhere in the compliance department. The same holds if you withdraw or deposit USDT/USDC at an exchange: funds can still get stuck waiting for compliance approval.
- Easier: That's debatable. I guess some banking UIs are better than others. And blockchain is not particularly known to be UI/UX friendly either.
One definite advantage I can think of is that when you hold USDT/USDC in your own wallet you have full control over them, but I bet 99% of USDT/USDC is held on exchanges, and exchanges are more and more starting to act like banks (with all the slowness that that entails).
I really, really don't understand where cryptocurrencies are going. Billions have been poured into this technology for almost no actual technological return.
And I understand why: interests are not aligned. It really seems like nobody actually cares about advancing the state of the art, except actual researchers who work in universities and not in crypto startups...
Cryptocurrencies had a goal and a usefulness when they were at least used to buy drugs online and send a few bucks to your friends. Now you cannot even use them for that. Everybody saw the rise of the Bitcoin valuation, and since then, all that matters is how much you can make by trading crypto. The actual usefulness of a cryptocurrency, what it can do, is utterly irrelevant. This is proven by BTC being one of the most traded and valuable cryptocurrencies even though it is one of those with the fewest features. Most BTC trades do not even go through the Bitcoin network but through third parties, so BTC even fails at its own goal.
And because crypto is seen as an investment, everybody is holding and nobody is using it, and you would be a fool to do so, since tomorrow your 1 shitcoin might be worth 3 times what you bought it for. And no, "stablecoins" are not an answer, since the only thing they have been used for is to trade between cryptocurrencies...
Then, to make matters even worse, the crypto community is its own worst enemy. Since it has been proven that, to make a few bucks, all it takes is "creating" your own cryptocurrency and mining the first few blocks before others get in on it, you get thousands of startups coming out of nowhere proposing new cryptocurrencies that are literally clones of other ones. Tell me, what is the difference between Polkadot and Kusama? Is there even a significant technological difference between Litecoin, Bitcoin, Bitcoin Cash, Dogecoin, etc.?
We are more than 5 years into this crypto mania with an ungodly amount of money being thrown at it, and there has been no significant technological progress, no cryptocurrency useful enough to be used by anybody but crypto-fanatics trying to pump and dump it, and it is now becoming one of the worst climate disasters in a while. All the while, everyone invested in crypto just talks about price, valuation, investment, hodling, etc. Their real purpose is clear.
If you want to see actual progress in distributed computing, look for scientific papers coming from universities.
> IBM Canada has won $1.5M contract to develop new platform for IRCC
This will buy what? A team of 4 interns and new grads for 6 months. At the end they'll have a slide deck with too much WordArt in it containing some half-baked ideas, and a few lines of code that don't do anything useful at all and are further from production readiness than starting from scratch.
The old story of the Chinese general Chen Sheng comes to mind:
Apparently he was running late due to rainstorms, and the penalty for arriving that late under the Qin emperor was execution. Since that was the same penalty as for open rebellion, he decided he might as well try that instead. And that was the beginning of the end of the Qin dynasty.
https://en.m.wikipedia.org/wiki/Chen_Sheng_Wu_Guang_uprising
If the penalty for lying and being caught is in the same league as the penalty for screwing up, people are going to cover up problems.
Basically your argument is, "Bitcoin may not incentivize improvements in energy efficiency, so we should scrap Bitcoin and use something more efficient." Great, let's do it. The technology is already here, in fact, the technology was always there. Have a consortium of banks operate a distributed public ledger. The banks will charge fees to include transactions on the ledger, just like miners charge fees now, and the economic incentives to raise or lower fees will be similar -- but the fees will be lower because the consortium members will not need to cover the electricity cost of mining, only the cost of operating their servers.
The only problem Bitcoin mining solves is the lack of identification among the miners (i.e. the "Sybil attack" problem). It seems pretty obvious to solve that by introducing identity, particularly given that the large miners / mining pools are not really anonymous. Sacrifice the small miners who contribute little to the system, set up a consortium, and enjoy whatever benefits the distributed ledger provides without the waste.
(Yes I know, banks are evil, how dare anyone suggest that we acknowledge any authorities in a system, why should we trust the banks, etc.)
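To make that concrete, a toy sketch of the "identity instead of mining" idea (the member set and 2/3 quorum are illustrative; a real permissioned ledger would verify actual signatures and run a proper BFT consensus protocol):

```cpp
#include <cstddef>
#include <set>
#include <string>
#include <utility>
#include <vector>

// Toy illustration of replacing mining with identity: a block is accepted
// if a quorum of *known* consortium members endorses it. Signature
// verification is elided.
struct Block {
    std::string payload;
    std::set<std::string> endorsers;  // member IDs that signed off
};

class ConsortiumLedger {
    std::set<std::string> members_;
    std::vector<Block> chain_;
public:
    explicit ConsortiumLedger(std::set<std::string> members)
        : members_(std::move(members)) {}

    // No proof-of-work: validity comes from who signed, not electricity
    // spent. Sybil identities are useless here, because endorsements from
    // unknown parties simply don't count toward the quorum.
    bool append(const Block& b) {
        size_t valid = 0;
        for (const auto& id : b.endorsers)
            if (members_.count(id)) ++valid;
        if (valid * 3 < members_.size() * 2) return false;  // need >= 2/3
        chain_.push_back(b);
        return true;
    }
};
```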
To be fair, despite the heavy criticisms, it's not clear whether the EU objectively made any major mistake. The countries ahead of them in the vaccination effort did so by writing a blank check to the pharma companies - definitely not an ideal outcome either.
The EU diversified their vaccine contracts, negotiated fair prices, generously funded the manufacturing rollout, avoided vaccine nationalism between member states, equitably distributed the vaccines amongst their members and, the key point, holds the manufacturers liable for possible damages.
All of those points are entirely reasonable. One could argue that EU should have paid more, but there are two counterarguments:
Firstly, it doesn't really move the needle, as both the US and UK have home-front-first clauses, leaving only Israel and the Arab oil producers skipping the queue. Secondly, said countries could just amend their contracts, bid up the price, and everyone is back at square one.
Date of contract signing doesn't really matter either - a credible commitment to massive orders has been there from the very beginning.
The European Medicines Agency gets a lot of undeserved flak for being slow and bureaucratic. Pfizer was approved a mere 10 days later than in the US, Moderna was approved earlier than in the UK, and now AstraZeneca earlier than in the US. Allegedly, even those slippages were caused by the companies simply prioritizing their US/UK approvals.
Partially, the EU got unlucky - the Sanofi vaccine flopped, the AZ plant has technical problems, CureVac got delayed. In a world where Pfizer and AZ hadn't underdelivered by an order of magnitude, the EU plan would be praised and touted as a gold standard.
Partially, the EU was always fighting an uphill battle - the corporations that cleared the approvals simply care more about their image in the US/UK, don't like facing competent regulators, and really dislike being liable. Afaik, they face no liability in the US and Israel.
That being said, absent the EU policy, I'm quite confident that small countries like Latvia, Slovenia or Finland would end up with the short end of the stick.
"To test for asymptomatic infections, participants in COV002 in the UK were asked to provide a weekly self-administered nose and throat swab for NAAT testing from 1 week after first vaccination using kits provided by the UK Department of Health and Social Care (DHSC)"
Moderna:
https://www.nih.gov/news-events/news-releases/phase-3-clinic...
"Investigators will closely monitor participant safety. They will call participants after each vaccination to discuss any symptoms and will provide participants with a diary to record symptoms and a thermometer for temperature readings. If a participant is suspected to have COVID-19, the participant will be asked to provide a nasal swab for testing within 72 hours."
https://www.nejm.org/doi/10.1056/NEJMoa2035389
"although our trial showed that mRNA-1273 reduces the incidence of symptomatic SARS-CoV-2 infection, the data were not sufficient to assess asymptomatic infection, although our results from a preliminary exploratory analysis suggest that some degree of prevention may be afforded after the first dose"
Pfizer:
https://www.sciencemediacentre.org/expert-reaction-to-phase-...
"The Pfizer study did not report studies of the impact of its vaccine on asymptomatic infection. The Oxford study reported that efficacy for “asymptomatic or symptoms unknown infection” based on weekly self-swabbing was 3·8% (−72·4 to 46·3%) following standard dose and 58·9% (95% CI 1·0 to 82·9) following low dose regimen."
https://www.nejm.org/doi/10.1056/NEJMoa2034577
"These data do not address whether vaccination prevents asymptomatic infection; a serologic end point that can detect a history of infection regardless of whether symptoms were present (SARS-CoV-2 N-binding antibody) will be reported later"
PS: Sorry, it took a little while to dig up the original sources for random news articles I'd read months earlier.
[1] http://www.boeing.com/commercial/aeromagazine/articles/qtr_4...
[2] http://webserver.dmt.upm.es/~isidoro/tc3/Aircraft%20ECS.pdf
[3] https://www.faa.gov/data_research/research/med_humanfacs/oam...