Isn't there a sweet spot where solar is too much of your energy mix -- due to its intermittency? I think I read that once you get to like 40%, you need to spend a lot more on storage.
Is the EU also ramping up (battery?) storage? Or are they getting near the max of what they can do with solar? (Or do I have it all wrong :/ )
If you're in the Atacama Desert, I doubt it's 40%, but not really relevant.
This is ALL renewables, not just Solar - the article states that Solar is ~20% now in the EU.
Wind typically accounts for ~15%, and hydro (which may or may not be counted as renewable) accounts for another ~15%.
So most places can pretty easily get to ~40% solar, ~15% wind, ~15% hydro = ~70% renewable.
Throw in ~20% Nuclear (basically all of Europe before Germany sh*t the bed), and you're at ~90% - with limited need for storage - a large portion of which could come from infra that already exists for pumped hydro and regular overnight solar storage.
We're quite a ways away from diminishing returns.
Even if solar continues to grow at ~30% YoY, we're still ~8 years away from ~40% of global electricity coming from solar.
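A rough sanity check of that timeline. The ~7% starting share is my own assumption (round number, not from the article), and demand growth is ignored:

```python
# Assume solar is ~7% of global electricity today (hypothetical round number)
# and generation share compounds at ~30% per year.
share = 0.07
years = 0
while share < 0.40:
    share *= 1.30  # ~30% year-over-year growth
    years += 1
print(years)  # → 7, the same ballpark as the ~8 years claimed
```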
You mean low water levels? Isn’t that caused by agricultural water use? A dam allows using more water (for agriculture), but one can choose not to use more.
>I think I read that once you get to like 40%, you need to spend a lot more on storage.
You can get pretty high before the economics get sketchy. The analysis below concluded that for many sunny places that point is above 90%. Most of the EU will be lower than those sunny places, but the point is it's not 40%. And the sprinkling of wind, nuclear, geo, and hydro means there is a fair bit of room to still push.
Plus both solar and storage tech is still moving rapidly
I don't know of any specific thresholds, but it's worth mentioning that 54% of Q2 was renewable, and solar peaks in Q2. Solar was also only 36.8% of that renewable generation (just under 20% of Q2's total), so there's a long way to go before solar is 40% of the total energy mix.
If there is an important threshold when solar reaches 40% of the full year's production, then solar will need to almost quadruple before that's a concern. For all of 2024, solar was 22.4% of renewables, and renewables were 47% of the total[1], meaning that solar was 10.5% of total electricity over the full year.
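Redoing that arithmetic, using only the 2024 figures quoted above:

```python
# 2024 figures from the comment above
solar_share_of_renewables = 0.224
renewables_share_of_total = 0.47

solar_share_of_total = solar_share_of_renewables * renewables_share_of_total
print(round(solar_share_of_total, 3))          # → 0.105, i.e. ~10.5% of total

# Growth needed to reach a hypothetical 40%-of-total threshold:
print(round(0.40 / solar_share_of_total, 1))   # → 3.8, "almost quadruple"
```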
The issue is not solely about negative prices. It’s about keeping base capacity profitable so the grid doesn’t collapse.
The energy strategy of the EU was hopeless for a long time and is only marginally better now. It’s not as braindead as the monetary union but close. Germany was actively sabotaging France for a long time while having to restart coal power plants and investing in gas fuelled capacity.
Sadly the union is heavily unbalanced since the UK left.
I’m French. Unless something major changes, I hope we will be out before the UK comes back. I don’t see how anyone can be in favour of the EU after the Greek debt crisis.
I’m not too surprised about my original comment being downvoted while being entirely factually true. It was a bit much of me to expect people to understand the downsides of running on too many intermittent energy sources and how this is currently dealt with (the braindead part). I invite the champions of solar to explain to me the current plan of the EU for actually running the whole grid past 2050 while phasing out coal and gas (hint: there is none).
Anyway I invite everyone to take a look at what the EU used to do to nuclear, how it was purposefully omitted from the definition of clean energy for years, how they used to fine France despite its energy being clean, how it forces the French energy operator to sell at a loss, how it impedes France from properly managing its dams, and then look at who actually pushed for these policies while buying Russian gas and burning coal. The whole thing is a complete joke. At least they apparently saw the light on nuclear. That’s a start.
Different situation. France has only itself to blame for the current situation and has plenty of things it can still do to avoid a crisis. Plus the debt holders are very diversified.
The Greek crisis is very different because the debt was mostly held by German banks - the Germans had to do something with all those excess savings, and the Greek economy suffered a lot from the euro. Reforms were needed, but the way the whole thing was handled is a disgrace.
I frequently see this sentiment but it's not really very informed or informative. Greek creditors took huge haircuts on the debt - that never gets mentioned.
Then, Greece had a choice, just like every other country always has a choice when faced with a debt crisis: Accept the terms of people who will bail you out, or default. It is always like this, because no-one is going to bail you out if it is apparent that the situation will just repeat. The Greek people voted to reject the terms of the bailouts, which meant leaving the Euro, printing their own currency, and accepting that the global capital markets would not be buying their bonds for the foreseeable future. The Greek government saw the choice and ignored the people, because they knew the alternative to the bailout was far, far worse. The only reason they ran that referendum was to try use it to bargain for better terms, they never had any intention of defaulting.
Hard to tell. We could devalue. That would help with both the debt situation and our exports. The UK is not doing that badly at all.
That’s a risky bet but I personally prefer that to the current situation. I would honestly be ok with staying in the union if we could exit the euro while staying but I don’t think it’s possible.
> Hard to tell. We could devalue. That would help with both the debt situation and our exports. The UK is not doing that badly at all.
1. It is widely recognised, and irritates the poorer countries, that Germany and France benefit from the relatively weak Euro for their exports.
2. If you left the EU and obviously the Euro and devalued your currency, you end up with wild inflation. If there is anything a population hates, it is high inflation. Given your political situation in France currently, no government would last a year in that scenario.
3. Devaluing obviously causes bond rates to rocket, which means rolling over your current debt becomes extremely expensive.
4. On top of that, mortgage rates rocket, people hate that too.
There is a reason countries don't just print money and devalue all the time. If it was that easy, everyone would do it.
> That would help with both the debt situation and our exports.
But the debt is denominated in euro. If you leave euro and create a new currency, the bonds will still be in euro, while your new currency will be worth much less. It's basically easier and less painful to default on the spot than to go through this.
I honestly can't understand the urge to go bankrupt. France is already famous for the revolution and its eagerness to protest, but this looks like you want to cause chaos just for the sake of it.
It’s me being dramatic for useless flair. I edited it out a minute after posting because it adds nothing to the discussion but you read it before I did.
It’s a monetary union with no common fiscal policies and no mechanism to correct disparity between members. Complete train wreck since it has been put in place.
Germany has been abusing it from the start running huge trade surplus, compressing salaries, using its excess savings to buy foreign debts instead of investing and being shielded from monetary appreciation by the consumption and investments of other countries. The euro is basically Germany robbing blind the other members while pretending to be virtuous and blocking most of what could have improved the situation.
> and no mechanism to correct disparity between members
AFAIK, they created some mechanisms after the 2008 crisis. Every country there now effectively prints money at differing rates, and the EU only regulates some limits.
During winter, France uses ~50% more electricity per day than during summer.
And during cloudy days in winter, solar produces 10%-15% what it produces during summer.
If you don't have month-long battery storage, in order to be fully solar based France would need to produce 20 times more electricity than needed during summer.
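A back-of-envelope check of that sizing claim, using only the figures above (winter demand ~1.5x summer; winter solar output at 10-15% of summer output):

```python
# How much solar you must build relative to summer demand
# if the worst month has to be covered without seasonal storage.
winter_demand_vs_summer = 1.5
for winter_output_fraction in (0.10, 0.15):
    overbuild = winter_demand_vs_summer / winter_output_fraction
    print(round(overbuild))  # → 15, then 10
```

Under those exact numbers the overbuild is ~10-15x; the 20x figure presumably adds margin for extended cloudy stretches.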
> And during cloudy days in winter, solar produces 10%-15% what it produces during summer.
This doesn't matter. If you look at the monthly stats, solar panels in France produce ~3x more in summer than in winter on a month-by-month view. As such, you only need 3x extra overall, plus some day-to-day storage.
Or you use a different technology optimized for long term storage. Batteries are not that technology. Hydrogen (or other e-fuels) or long term thermal storage.
> Or you use a different technology optimized for long term storage. Batteries are not that technology
I've heard this before but can you explain why? A cursory web search tells me batteries hold charge pretty well for 6 months. And the new sodium batteries from CATL are certainly cheap enough.
For long term storage, capex is king, not round trip efficiency. The capex of batteries ($ per kWh of storage) is much too high. There aren't enough charge/discharge cycles to amortize that capex. This is unlike with diurnal storage, where there are many thousands of cycles over which to spread that cost.
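A sketch of why cycle count dominates, with hypothetical round numbers (not vendor quotes), ignoring efficiency losses and financing for simplicity:

```python
capex_per_kwh = 100.0  # assumed $ per kWh of storage capacity

def cost_per_kwh_delivered(cycles):
    # Capex spread over every kWh the system delivers in its lifetime.
    return capex_per_kwh / cycles

# Diurnal storage: daily cycling for ~14 years → ~5000 cycles
print(round(cost_per_kwh_delivered(5000), 2))  # → 0.02, i.e. ~$0.02/kWh
# Seasonal storage: one cycle per year for 15 years → 15 cycles
print(round(cost_per_kwh_delivered(15), 2))    # → 6.67, i.e. ~$6.67/kWh
```

Hence technologies with very cheap capacity (hydrogen in caverns, thermal stores) can win for seasonal storage despite poor round-trip efficiency.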
Not really. At this point, solar is basically free, and having extra free energy has all sorts of benefits. For the EU in particular, it greatly reduces dependence on Russian oil and gas. If all you do with extra solar is replace 2 extra hours a day of natural gas consumption, you effectively give yourself 12% more storage, which decreases Russian leverage.
In the EU, gas and oil are still ~60% of energy usage. You are not going to heat your home in winter with electricity anytime soon, just as we are not all going to drive electric cars this decade.
Heat pumps account for 2/3 of new heating installations in Germany [1].
Modern buildings with effective insulation seem to make them quite viable, but that hinges on the availability of attractive electricity prices.
The second factor is that carbon-based fuels may become more expensive over time, so perhaps electricity costs “just” needs to remain stable to become attractive.
I'm sorry but this thread does not talk about using PV to heat your home in the winter. But it is absolutely possible to use electricity to heat homes, it's widely used in northern countries. And the nice thing about electricity is that it can be generated in one place and used in another.
This thread is talking about reduction of dependence on oil&gas supplied by various nefarious regimes, though. Still quite a challenge in the winter, with barely any sun out there.
"it can be generated in one place and used in another."
It can, but we are far from having such a robust grid all across the continent. I am not even sure if we are getting closer. Both economic and political aspects come into play, which might be harder to address than the purely technical ones.
For example, France really does not want cheap Spanish solar energy to flood the French market, hence the inadequate connection over the Pyrenees.
Everyone knows that, including the European Commission, but France is one of the two really big continental players who can do anything they want and cannot be effectively punished. The "everyone is equal, but some are more equal" principle.
If you mean getting rid of oil and gas on a short timescale, there won't be a majority for that. By 2040 or 2050 maybe, with some significant exceptions (I don't believe in large electric jets; small aircraft maybe).
Solar and wind are extremely reliable, because they are distributed. Unlike large-scale fossil or nuclear, a single plant going offline isn't a big deal.
Solar and wind are quite predictable - it's just weather forecasting. We have a pretty good idea what it is going to produce 7 days from now - we just can't control it.
Solar and wind can provide inertia (or its synthetic equivalent). Existing installations just aren't engineered for it, because grid-following makes more sense in a fossil-heavy grid. If extra inertia is needed, batteries are a perfect source, since they can instantly swing from -100% to +100% to soak up excess or fill in shortages. Or we can just install a bunch of flywheels, no big deal.
> This falls into the endless list of "remember you voted for this".
I think, to be fair, there are only two parties. You can only vote for one "package" of policies. Maybe you are a one-issue voter or maybe you weigh all the different positions of the candidates to find the one who aligns most with you.
I don't think it is accurate to say that all the people who voted for Trump approve of any individual policies -- like this one. So they are allowed to be as upset as anyone else about this stuff.
I'll upvote you for at least laying out a coherent argument.
But you're right. A lot of voters weighed the set of policies and decided that mass deportations, suspension of due process, tariffing imports to raise domestic prices, slashing federal agencies like the EPA and CDC, muzzling free speech at universities and on TV, grifting for personal gain at every opportunity, all of which was explicitly spoken about during the campaign, was ok to get their single issue promoted.
So yeah, lots of people voted for this. For all I know those same people think it's going just fine.
Which brings us back to, "just remember, you voted for this."
I don't know, man. Did Biden voters vote for high inflation and open borders? Did they vote for 50% tariffs on solar panels? I was a Biden voter and I didn't.
Biden did not have magic powers to preempt the ripple effects of the pandemic. (If anything, the actions of his administration helped get inflation under control by the end of his term.) And to claim the borders were “open” is just propaganda.
> So they are allowed to be as upset as anyone else about this stuff.
My issue with this take is the lack of evidence. I don't read about Trump voters calling their reps to try and push back (and let's be real, most Republican members of Congress these days would likely dismiss such constituents as being "paid protestors" or something). I haven't seen Trump voters out protesting against some part of the administration's policy. I _have_ seen anonymous reddit users in a Trump related subreddit say, "yeah, don't agree with this at all". Which leaves me wondering: how upset are they, really?
It's interesting how much "they are a private company, they can do what they want" was the talking point around that time. And then Musk bought Twitter and people accuse him of using it to swing the election or whatever.
Even today, I was listening to NPR talk about the potential TikTok deal and the commenter was wringing their hands about having a "rich guy" like Larry Ellison control the content.
I don't know exactly what the right answer is. But given their reach -- and the fact that a lot of these companies are near monopolies -- I think we should at least do more than just shrug and say, "they can do what they want."
I wouldn't mind a Python flavor that has a syntax for tensors/matrices that is a bit less bolted on in parts vs Matlab. You get used to Python and numpy's quirks, but it is a bit jarring at first.
Octave has a very nice syntax (it extends Matlab's syntax to provide the good parts of numpy broadcasting). I assume Julia uses something very similar. I have wanted to work with Julia, but it's so frustrating to have to build so much of the non-interesting stuff that just exists in Python. And back when I looked into it, there didn't seem to be an easy way to just plug Julia into Python things and incrementally move over. You couldn't, say, swap the numerics and keep the matplotlib things you already had; you had to go learn Julia's ways of plotting and doing everything. It would have been nice if there were an incremental approach.
One thing I am on the fence about is indexing with '()' vs '[]'. In Matlab both function calls and indexing use '()', which is Fortran style; the ambiguity lets you swap functions for matrices to reduce memory use (though that's all possible with '[]' in Python too), which can sometimes be nice. Anyway, with something like Mojo you're wanting to work directly with indices again, and I haven't done that in a long time.
Ultimately I don't think anyone would care if mojo and python just play nicely together with minimal friction. (Think: "hey run this mojo code on these numpy blobs"). If I can build GUIs and interact with the OS and parse files and the interact with web in python to prep data while simultaneously crunching in mojo that seems wonderful.
I just hate that Julia requires immediately learning all the dumb crap that doesn't matter to me. Although it's seeming like LLM seem very good at the dumb crap so some sort of LLM translation for the dumb crap could be another option.
In summary: all mojo actually needs is to be better than numba and cython type things with performance that at least matches C++ and Fortran and the GPU libraries. Once that happens then things like the mojo version of pandas will be developed (and will replace things like polars)
The syntax is based on Python, but its runtime is not. So nothing about the contrast between the Python language and Mojo's use as a massively parallel processing system is inconsistent.
I think the only Zig hype I'm seeing is about its compiler and compatibility. Those might well be the same two reasons why you never hear about modula-2.
Man, if I get to a point where I have a terminal disease and am in pain — or on a path to losing my mind — I sure hope I have an option to decide on when and how I go.
Forcing other people to live hopeless lives in pain for your own morality is evil, in my opinion.
Plus there is another consideration, probably more so in the US.
If you are terminal, do you want all your savings to go to health care, or would you want to ensure you children or spouse get what you had saved over the years.
To me, if you are terminal and need lots of healthcare, you are better off tossing in the towel. Trying to stick it out could put your family into poverty.
I don’t know the reason, nor do I have proof of anything. But: to me this is a great time to consider Occam’s Razor.
Executives seem to (mostly) universally want people to RTO. Why would they?
They obviously have lots of data. If it was bad for productivity, why would they do it?
Answers seem to be things like “power trip” or “need to justify real estate”. I’m pretty sure most companies would save money by giving up their leases. Maybe they are all having power trips, but irrational behavior from leaders won’t win out in the long run.
My observation from my time is that, likely: some people are really good at getting stuff done at home. But most probably get less done. And I suspect the leaders find this in their data.
I'm sympathetic to your reasoning, and I actually think your conclusion about how people wfh is probably true (a minority finding it wildly more productive). But the problem I see is that most execs don't understand what a lot of their employees do on a day-to-day basis, nor do I think they could properly interpret the data even if they had it. And if they did have it, why not show it, if the conclusions are so self-evident?
In reality I think it's much simpler: the work that executives (and some upper-level managers) do relies on a certain amount of theater. They need to be seen (in person) and they need to be seen doing work (in person). That's part of the "deal" with being an exec --- you need to be able to act the part. Then, they just assume that's how it should or needs to be for everyone else.
The interesting thing is that "Darwinism" will sort it out for us in the long run. If the execs are right: more work gets done in the office, those companies will do better; if people are happier and more productive at home, those companies will tend to do better.
The funny thing about bubbles is that they are impossible to predict. My sense is that they always go on longer than is at all rational. (Assuming this is a bubble, who knows.)
I’m still waiting for the crypto “bubble” to burst. Seems like we are years overdue ;)
If I'm doing the math right, you can get ~4000 queries per kWh. A quick Google search gave $0.04 per kWh when bought in bulk. So you can get around 100k queries per dollar of electricity.
That's... a lot cheaper than I would have guessed. Obviously, the data centers cost quite a bit to build, but when you think of $20/mo for a typical subscription, that's not bad?
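Spelling out that arithmetic (the 4000 queries/kWh and $0.04/kWh figures are the rough estimates from the comment above, not measured values):

```python
queries_per_kwh = 4000   # rough estimate from the comment above
dollars_per_kwh = 0.04   # assumed bulk electricity price

queries_per_dollar = queries_per_kwh / dollars_per_kwh
print(round(queries_per_dollar))       # → 100000 queries per dollar

# At $20/month, electricity alone would cover roughly:
print(round(queries_per_dollar * 20))  # → 2000000 queries per month
```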
The fundamental flaw in AI energy/water doomerism has always been that energy costs money, water costs money, real estate costs money, but AI is being given away for free. There is obviously something wrong with the suggestion that AI is using all the energy and water.
It would be interesting to know how many queries they get per "base" model. I honestly have no idea what the scale is. In any case, that's the only way to know how to amortize training cost over inference.
There are a number of superficially compelling proofs of the theorem that are known to be incorrect. It has been conjectured that the reason we don't have Fermat's proof anywhere is that between him writing the margin note and some hypothetical later recording of the supposed proof, he realized his simple proof was incorrect. And, of course, he saw no reason to "correct the historical record" for a simple margin annotation. This seems especially likely to me in light of the fact that he published a proof for the case where n = 4, which means he had time to chew on the matter.
Or, maybe he had a sense of humor, and made his margin annotation knowing full well that this would cause a lot of headscratching. It may well be the first recorded version of nerdsniping.
More likely he decided to leave it in as a nerdsnipe rather than he wrote it in the first place as a nerdsnipe (seems more likely he thought he had it?)
Among well-known mathematicians, Gabriel Lamé claimed a proof in 1847 that assumed unique factorization in cyclotomic fields.
This was not obvious at the time, and in fact, Ernst Kummer had discovered the assumption to be false some years before (unbeknownst to Lamé) and laid down foundations of algebraic number theory to investigate the issue.
Fermat lived for nearly three decades after writing that note about the marvelous proof. It's not as if he never got a chance to write it down. So it sure wasn't his "last theorem" -- later ones include proving the specific case of n=4.
There are many invalid proofs of the theorem, some of whose flaws are not at all obvious. It is practically certain that Fermat had one of those in mind when he scrawled his note. He realized that and abandoned it, never mentioning it again (or correcting the note he scrawled in the margin).
Yeah, I just figured out how to simply reconcile general relativity and quantum mechanics, but I am writing on my phone and it's too tedious to write here.
import FLT

theorem PNat.pow_add_pow_ne_pow
    (x y z : ℕ+)
    (n : ℕ) (hn : n > 2) :
    x^n + y^n ≠ z^n :=
  PNat.pow_add_pow_ne_pow_of_FermatLastTheorem FLT.Wiles_Taylor_Wiles x y z n hn
“Fermat usually did not write down proofs of his claims, and he did not provide a proof of this statement. The first proof was found by Euler after much effort and is based on infinite descent. He announced it in two letters to Goldbach, on May 6, 1747 and on April 12, 1749; he published the detailed proof in two articles (between 1752 and 1755)
[…]
Zagier presented a non-constructive one-sentence proof in 1990.”
It is simply an obvious fault line in the nature of the problem statement: you can crack the problem into 2 parts, the x^4+y^4=z^4 case, and the x^p+y^p=z^p case with p an odd prime. (Every n > 2 is divisible by 4 or by an odd prime, so those two cases suffice.)
Suppose Fermat attacked the proof along this natural fault line - it's just how this cookie crumbles - solved the n=4 case, then smashed his head a thousand times against the problem and finally found the odd-prime proof.
He challenges the community, and since they don't take up the challenge, "encourages" them in a manner that may be described as trollish, by showing how to do the n=4 case. (knowing full well the prime power case proof looks totally different)
That's an interesting take but I think it's unlikely for two reasons:
1. Any way you view it, it's not trivial, which was the statement in the note. If it were, the effort to publish just for n=4 would be silly, because it would take equal effort to just publish the general case. That he withheld the proof just to mess with people is highly unlikely.
2. I definitely do not make private notes in my books just so that maybe somebody later on would pick up that book and wonder whether I had indeed discovered the secrets of the universe. I definitely do not write "challenges to the community" there.
It's possible we never found the one he had, but it's pretty unlikely given how many brilliant people have beaten their head against this. "Wrong or joking" is much more likely.
I feel like there’s an interesting follow-up problem which is: what’s the shortest possible proof of FLT in ZFC (or perhaps even a weaker theory like PA or EFA since it’s a Π^0_1 sentence)?
Would love to know whether (in principle, obviously) the shortest proof of FLT actually could fit in a notebook margin. Since we have an upper bound, there are only finitely many proof candidates to check to find the lower bound :)
Even super simple results like uniqueness of prime factorisation can take pages of foundational mathematics to formalise rigorously. The Principia Mathematica famously takes entire chapters to talk about natural numbers (although it's not ZFC, to be fair). For that reason I think it's pretty unlikely.
Thanks. So if I read this correctly - there is consensus that Wiles' proof can be reduced to ZFC and PA (and maybe even much weaker theories). But as presented Wiles proof relies on Grothendieck's works and Grothendieck could not care less about foundationalism, so no such reduction is known and we don't really have a lower bound even for ZFC.
> In the words of mathematical historian Howard Eves, "Fermat's Last Theorem has the peculiar distinction of being the mathematical problem for which the greatest number of incorrect proofs have been published."
This is actually way false. Rigorous mathematical proof goes back to at least 300 BCE with Euclid's elements.
Fermat lived before the synthesis of calculus. People often talk about the period between the initial synthesis of calculus (around the time Fermat died) and the arrival of epsilon-delta proofs (around 200 years later) as being a kind of rigor gap in calculus.
But the infinitesimal methods used before epsilon-delta have been redeemed by the work on nonstandard analysis. And you occasionally hear other stories that can often be attributed to older mathematicians using a different definition of limit or integral etc than we typically use.
There were some periods and schools where rigor was taken more seriously than others, but the 1600s definitely do not predate the existence of mathematical rigor.
It is possible to discover mathematical relations haphazardly, in the style of a numerologist, just by noticing patterns; there are gradations of rigor.
One could argue that being a lawyer put Fermat in the more rigorous bracket of contemporary mathematicians, at least.
There are ways to reduce attack surface short of tearing out support. Such as, for instance, taking one of those alleged JS polyfills and plugging it into the browser, in place of all the C++. But if attack surface is your sole concern, then one of those options sounds much easier than the other, and also ever-so-slightly superior.
In any case, there's no limit on how far one can disregard compatibility in the name of security. Just look at the situation on Apple OSes, where developers are kept on a constant treadmill to update their programs to the latest APIs. I'd rather not have everything trend in that direction, even if it means keeping shims and polyfills that aren't totally necessary for modern users.
It is a balance (compatibility vs attack surface). The issue with XSLT (which I am still a strong advocate for) is that nobody is maintaining that code. So vulnerabilities sit there undetected. Like the relatively recent discovery of the xsl:document vulnerability.
> It is a balance (compatibility vs attack surface).
What I'm trying to say is that it's a false dichotomy in most cases: implementations could almost eliminate the attack surface while maintaining the same functionality, and without devoting any more ongoing effort. Such as, for instance, JS polyfills, or WASM blobs, which could be subjected to the usual security boundaries no matter how bug-ridden and ill-maintained they are internally.
But removing the functionality is often seen as the more expedient option, and so that's what gets picked.
Sure, but this requires someone sitting down and writing the JS polyfill, and then maintaining it indefinitely. And for something as complicated as XSLT, that will surely be indefinite maintenance, because complicated specs beget complicated implementations.
In the absence of anyone raring to do that, removal seems the more sensible option.
The vendor discussion on removing XSLT is predicated on someone creating a polyfill for users to move to. It is not an unreasonable assumption because a polyfill can be created fairly trivially by compiling the existing XSLT processor to WASM.
And it is also fairly trivial to put that polyfill into the browsers.
The Chrome team has been moaning about XSLT for a decade. If security was really their concern they could have replaced the implementation with asm.js a decade ago, just as they did for pdfs.
> Sure, but this requires [...] maintaining it indefinitely.
Does it, though? Browsers already have existing XSLT stacks, which have somehow gotten by practically unmodified for the last 20 years. The basic XSLT 1.0 functionality never changes, and the links between the XSLT code and the rest of the codebase rarely change, so I find it hard to believe that slapping it into a sandbox would suddenly turn it into a persistent time sink.
Wasn't this whole discussion sparked by a fairly significant bug in the libxslt implementation? There's also a comment from a Chrome developer somewhere in this thread talking about regularly trying to fix things in libxslt, and how difficult that was because of how the library is structured.
So it is currently a persistent time sink, and rewriting it so that it can sit inside the browser sandbox will probably add a significant amount of work in its own right. If that's work that nobody wants to do, then it's difficult to see what your solution actually is.
The current problem is that bugs in libxslt can have big security implications, so putting it or an equivalent XSLT 1.0 processor in a safe sandbox would make active maintenance far less urgent, since the worst-case scenario would just be presentation issues.
As for immediate work, some in this thread have proposed compiling libxslt to WASM and using that, which sounds perfectly viable to me, if inefficient. WASM toolchains have progressed far enough that very few changes are needed to a C/C++ codebase to get it to compile and run properly, so all that's left is to set up the entry points.
(And if there really were no one-for-one replacement short of a massive labor effort, then current XSLT users would be left with no simple alternative at all, which would make this decision all the worse.)