When OpenAPI opened the polymorphism door with 'oneOf' and the like, it seems to have turned into a shitty language written in YAML rather than a good, concise way to communicate API design.
My former company required an OpenAPI spec before any API endpoint could be published. Devs just wanted to push code, so they wrote vague shit specs, because doing it the right way was like pulling teeth (it didn't help that the spec enforcement tooling couldn't fully read YAML documents with references, so copypasta was rampant)...
I guess there's the endless cycle: 1) a format is created, 2) the format evolves to do more, 3) it winds up being overbearing, 4) a new format is created...
I've experienced the mind-numbing frustration of API specs that don't match the implementation, which is why I have embraced the concept of requiring the spec to be part of the implementation. For example, the express-openapi package for Node.js expects the specification to be supplied as a property of the very function implementing the operation. This permits middleware to access the OpenAPI specification directly and utilize it for coercion and validation of requests and responses, so you get several birds with one stone.
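For anyone who hasn't used it, here's a minimal sketch of what that looks like (a hypothetical route; I'm showing the OpenAPI 2 style apiDoc shape, and the exact fields vary a bit between express-openapi versions):

    // paths/users/{id}.js -- an express-openapi path module (hypothetical example)
    function GET(req, res) {
      // By the time this runs, the middleware has already coerced and validated
      // req.params against the apiDoc attached below.
      res.status(200).json({ id: req.params.id, name: 'example' });
    }

    // The spec fragment lives on the handler function itself, so the spec and
    // the implementation can't silently drift apart.
    GET.apiDoc = {
      summary: 'Fetch a user by id.',
      operationId: 'getUser',
      parameters: [{ in: 'path', name: 'id', required: true, type: 'string' }],
      responses: {
        200: { description: 'The requested user.' },
        404: { description: 'No such user.' },
      },
    };

    module.exports = { GET };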
I've also experienced enterprise OpenAPI deployments where the API specifications were owned by a separate enterprise API architecture team and were fed into some central consumer-facing documentation portal, along with whatever infrastructure, unbeknownst to us, sat between the application and the Internet. Developers had access to the spec repository, but API architects reviewed PRs and made recommendations for normalizing interfaces or using existing or canonical types.
Either way works, IMNSHO.
Edit: I should also say that I have likely misrepresented the deficiencies I experienced with API Blueprint. Take it with a grain of salt; I literally do not remember because it's been ... checks code ... nearly five years since I touched that project. The issue I ran into may have been as simple as providing two example POST requests with slightly different payloads, or two example HTTP 200 responses with slightly different response bodies, which is something API documentation might often include to show multiple common use cases. It may not even have been part of the actual spec, or it may only have been a limitation of the specific UI renderer I was using.
Fluoride doesn't bio-accumulate the way PFAS do; PFAS have a strong affinity for the proteins and fats in organisms. Constantly drinking water with 0.5-1 ppm fluoride may cause minor side effects like mild dental fluorosis, but you'll excrete almost all of the excess because it's very water soluble. Drink water with any PFAS, and your body will strongly hold on to it all.
"0.5-1 ppm" covers what's considered the optimum level of fluoride in drinking water, so I doubt you'd get dental fluorosis from that. Coincidentally, if you calculate the equivalent dose that you'd give babies (via dissolved vitamin D + fluoride tablets), you also end up at about 0.3-0.5ppm.
Invent seemingly fantastic new material. Discover it is harmful to humans and wildlife, accumulates in groundwater, etc. Bury that discovery.
Get caught after decades of wild profits and the occasional secret settlement, spend another decade fighting legal action until you finally run out of appeals or the writing is on the wall, then accept it and pay out.
Start selling water filtration systems, thus profiting off people dealing with your pollution.
This is what I find so frustrating about "the fight against cancer." I'm convinced cancer is so prevalent because corporations are poisoning the shit out of our environment, and thus our water supply, our food, our air. Because we're not equipped with timestamping chemical detection systems, it's difficult to identify the exposure that caused a cancer or increased a person's risk, so industry gets a "freebie" death nobody can pin on them. As long as the chemical isn't toxic enough to be obvious, the companies get away scot-free, despite the chemical industry's extensive history of coming up, time and time again, with some major novel chemical that comes to be used all over society and then turns out to be toxic.
Bill Moyers once submitted his blood to a lab and asked them to test for everything they could identify in terms of industrial chemicals, pesticides, etc. The blood was a veritable toxic soup (and some of the control sample containers were contaminated from the supplier, showing how pervasive the toxins are): https://www.pbs.org/tradesecrets/problem/popup_bb_02.html
You don't "fight cancer" doing walks and charity balls and cute-kid-starts-fundraiser-because-friend-dies-from-leukemia. You fight cancer by addressing the toxins being pumped into us in the name of profit and "bettering society", allowed to get away with it because of how difficult it is to show any particular chemical directly caused the cancer.
Is that much of a problem for a catalyst? Presumably you do not need many of these: just at water treatment plants and at the waste streams of manufacturing processes that emit PFAS. You might not be able to justify the expense inside your home water purification system, but it could still be cost effective for large-scale installations.
You would need a lot of catalyst because the water infrastructure to supply several hundred million people in the US is massive, let alone the rest of the world.
The problem with those catalysts is that the latter two are minor components of platinum and copper/nickel ores, and despite how expensive they are, extraction is only economically viable as part of other mining. Their supply can only grow as much as platinum extraction allows, and demand is already pretty significant, with environmental regulations often necessitating their use. Any more demand for them will cause their prices to rise dramatically, and it's a long way before they become profitable enough to mine on their own (flooding the platinum market, which has much higher yields from the ores, in the process).
It depends on the scale and the required amounts. If having a limited amount of catalyst weren't such a big problem, I suspect hydrogen power would have been much more economically viable.
Activated carbon filtering removes up to about 75% of PFAS. Reverse-osmosis removes almost all.
Doesn't get rid of them, to be clear. It would still be better if a way could be found to chemically (and cheaply) convert them to something less harmful.
> Activated carbon filtering removes up to about 75% of PFAS
Common inexpensive non-RO filter systems come with independent test results showing 99% removal of PFOA/PFOS (see e.g. https://www.brondell.com/content/UC300_Coral_PDS.pdf). Do we have reason to believe that other PFAS don't filter as easily?
In the web sphere, I recall Amazon having done something like this in the very early days, when there was a sidebar with categories that you could kinda drill into. Mouse over one, and there was an invisible triangle off to the right; as long as you kept the cursor inside it, the current category wouldn't switch.
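If I remember the trick right, it's roughly a point-in-triangle test on every mousemove; a sketch (all the names are mine, not Amazon's actual code):

    // Keep the current submenu open while the cursor stays inside the triangle
    // spanned by its last position over the menu item and the two near corners
    // of the open submenu; otherwise switch to whatever item is hovered now.
    function sign(p1, p2, p3) {
      return (p1.x - p3.x) * (p2.y - p3.y) - (p2.x - p3.x) * (p1.y - p3.y);
    }

    function insideTriangle(p, a, b, c) {
      const d1 = sign(p, a, b);
      const d2 = sign(p, b, c);
      const d3 = sign(p, c, a);
      const hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
      const hasPos = d1 > 0 || d2 > 0 || d3 > 0;
      return !(hasNeg && hasPos); // all signs agree => point is inside
    }

    function shouldKeepSubmenu(cursor, lastCursor, submenuTopCorner, submenuBottomCorner) {
      return insideTriangle(cursor, lastCursor, submenuTopCorner, submenuBottomCorner);
    }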
I guess my major question would be: does the training data include anything from 2025 which may have included information about the IMO 2025?
Given that AI companies are constantly trying to slurp up any and all data online, if the model's performance was derived from existing work, it's maybe less impressive than it looks at first glance. If a present-day model does well at IMO 2026, that would be nice.
Yeah, the "2% growth forever" feels like a sneaky addition which is extremely controversial in economic theory: if endless growth is required. 1.02 ** 1000 ~= 400,000,000. So if the world population continued to grow at 2% in those same 1000 years, there'd be 2.8 quintillion people. Evenly distributed over the planet (water included), each person would get a square 1.35 centimeters on a side.
Lifecycle analysis is a common and increasingly detailed field that covers the impacts of manufacturing, transporting, installing, running, and cleaning up installations, either cradle-to-grave or cradle-to-cradle (which includes the cost of recycling). I assume that for installations like this, those studies have been done.
There's a whole tirade in "Landman" about wind turbines not being green because of this or that thing[0], ending with the statement: "in its 20-year lifespan, it won't offset the carbon footprint of making it". These are just feelings (of the fictional character, but unfortunately ones adopted by real people) that are unconcerned with the facts that, no, the lifecycle analysis shows that wind turbines break even in 1.8 to 22.5 months, with an average of 5.3 months[1].
And I'm not qualified to say that tidal-based solutions will never beat out Geo/Solar/Wind + Batteries. But in my informed, non-professional opinion, it seems like this avenue will never ever work at scale.
From everything I've seen, we have the answer; we're just stuck under the boot of old-money oil barons. Solar + wind + geo (depending on the geographic area) for the majority of our power generation. Nuclear + batteries to smooth out the duck curve from the bottom, paired with more aggressive demand pricing & thermal regulations to smooth it out from the top. That's the answer. But lobbyists gonna lobby.
Yep, lifecycle analysis is the key lens we should be using when evaluating any energy technology, especially in emotionally charged debates about what’s "green" or not.
Maybe these are the windmills that drive the whales crazy? To paraphrase wind-watch.org (sounds non-partisan):
> The obvious concern that most people might guess will be dangerous and damaging to [swimming] wildlife are the spinning blades themselves. While large white spinning [turbine] blades rotating [below] the horizon or in an advertisement seem bucolic, restive, and like the perfect green energy source, the fact is that the tips of the blades can be spinning at up to 200 miles per hour. Those speeding blades can act like a giant blender for large [fish] such as [tuna] and [whales] which fly around the commercial [water] turbines and chop those [fish] up. Biologists have found that even small species of American [fish] regularly get killed from the spinning turbines of commercial [water] turbines.
My intuition is that these will be moving much more slowly than that. The turbines they refer to are usually high-pressure ones designed for generating energy downstream of a huge body of water like a reservoir.
These turbines have a diameter of 18m and a speed of 8 to 20 rpm. So a tip speed of 7.5 to 19 m/s - about 27 kph to 68 kph. I guess that's enough to hurt a whale. Although interestingly the water speed due to the tide in this channel is up to 5 m/s - so maybe it's too turbulent for whales anyway. Do whales like fast flowing water?
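For reference, the tip speed is just circumference times rotation rate:

    // Tip speed for an 18 m diameter rotor at 8-20 rpm.
    const diameterM = 18;
    for (const rpm of [8, 20]) {
      const tipSpeedMs = Math.PI * diameterM * (rpm / 60);
      console.log(`${rpm} rpm -> ${tipSpeedMs.toFixed(1)} m/s (${(tipSpeedMs * 3.6).toFixed(0)} km/h)`);
    }
    // -> ~7.5 m/s (27 km/h) at 8 rpm, ~18.8 m/s (68 km/h) at 20 rpm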
In non-phoenitic languages, i.e. English, many of these methods are painful, especially "Last is First". See "I", but then it's "In", so you need to mentally backtrack some understanding. See "t", but then it's "that", so if you're subvocalizing to read, you need to reform the phoneme because 't' is a different phoneme from 'th'.
I think in casual speech at this point (at least in my experience) the two are used interchangeably. In professional or legal settings I'm sure the distinction matters more, but I feel like OP's usage here felt pretty natural to me even though it's not technically correct.
Well, the thing is… when you use a borrowed term from a dead language, in writing, it really sounds wrong to cultivated ears. I really had to double-check that sentence to see if I had parsed it wrongly. Not bragging, just saying.
They cannot be completely interchangeable:
“There are white people among us: i.e. me and my father” is totally different from “…: e.g. me and my father”.
Isn't reading more like pattern recognition than parsing letter-for-letter? It seems to work like that for me. There's also the somewhat famous text where each word's letters are jumbled and people can still read it fluently. Maybe that's not the case for everyone, though, and people have different ways of making sense of written text.
I once attended a short workshop where the presenter encouraged us to switch between two modes of reading: away from sub-vocalizing and into pattern recognition. The result was much faster reading without loss of understanding.
He didn't use those terms (I'm adopting them from this thread), but I learned that day that these really are two distinct modes.
English is phonetic? The writing systems aren't regular in that the same letter can represent different sounds. But they still represent sounds. Indeed, your confusion wouldn't even be possible if they didn't represent sounds.
A short word like "that" is read at once, especially because it's common. So no need to backtrack.
A less common word like "phoenitic" or "subvocalizing" is read as you say. However, by the end of the sentence we know how to read "phoneme" because we encountered it three times in one form or another.