I'm generally pro-nuclear and think it should be a significant part of the energy mix. But an interesting point is that the floor price of nuclear is the cost of the turbines, which are surprisingly expensive and aren't really getting cheaper. Solar can potentially go much cheaper than that. More on this point in Dwarkesh's recent interview of Casey Handmer.
I largely agree. As a counterpoint, today I delivered a significant PR that was accepted easily by the lead dev with the following approach:
1. Create a branch and vibe code a solution until it works (I'm using codex cli)
2. Open a new PR and slowly write the real solution myself, using the vibe code as a reference but cross-referencing against the existing code.
This involved a fair few concepts that were new to me but had precedent in the existing code. Overall I think my solution was delivered faster and was of at least the same quality as if I'd written it all by hand.
I think it's disrespectful to PR a solution you don't understand yourself. But this process feels similar to my previous non-AI-assisted approach, where I would often code spaghetti until the feature worked, and then start again and do it 'properly' once I knew the rough shape of the solution.
The best way I've found to use LLMs for writing anything that matters is, after feeding one the right context, to take its output and retype it in your own words. The LLM helps capture your brain dump and organize it, but by forcing yourself to write it rather than copy and paste... you get to make it your own. This technique has worked quite well in domains I'm not the best at yet, like marketing copy. I want my shit to have my own voice, but I'm not always sure what to cover... so let the LLM work out what to cover, and then I can rewrite its work.
One interesting point made here is that the cost of turbines puts a floor price on any form of generation that uses them, renewable or not, meaning in the long run solar has a big advantage: https://www.dwarkesh.com/p/casey-handmer. I don't know how accurate that is.
100% agree. I've also worked as a data engineer and came to the same conclusion. I wrote up a blog post which goes into a bit more depth on the topic here: https://www.robinlinacre.com/recommend_sql/
I remember when that made the rounds on HN, it is one of the earliest examples of AI-generated classifications/summaries. I used to show braggoscope as an example in many talks... before vector databases, agents, etc.
Comparing to the UK probably isn't the best choice though, since the UK's latitude makes it not especially favourable for solar. It would be better to compare to Southern Europe.
Spain has 40GW of solar and a GDP that's about 1/10th of China's. Even so, dividing the 90GW China added in a month by 10 still means they built a quarter of Spain's entire capacity in a month. Crazy.
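Spelling out the normalization, using the comment's own round numbers (90GW for China's monthly build, 40GW for Spain's fleet, a 10x GDP gap):

```python
china_monthly_gw = 90   # new solar capacity China added in one month (per the comment)
spain_total_gw = 40     # Spain's total installed solar capacity
gdp_ratio = 10          # China's GDP is roughly 10x Spain's

# GDP-adjusted monthly build, compared against Spain's entire fleet
normalized_gw = china_monthly_gw / gdp_ratio
share_of_spain = normalized_gw / spain_total_gw
print(f"{share_of_spain:.1%}")  # about a quarter of Spain's capacity, per month
```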
You know what's really fun: the US-dollar value of China's solar market in 2024 was only about 2.5x the value of the US solar market, despite both total capacity and newly installed capacity in China being about 7x higher.
We have one: it's called MPS (the Material Product System), and it was used by the Soviets and most of the communist countries, including China, until the 1990s. China has still not fully transitioned away from MPS and onto SNA (the System of National Accounts), which is one reason its service-sector share of GDP seems so impossibly low.
I mean, it's a difference of policy; spend on solar rollout isn't a significant part of either country's GDP.
GDP PPP is probably the more appropriate comparison here, by the way (a big part of the cost of solar isn't buying the actual panels), and China's GDP PPP is 10x the UK's.
I think this may be a Databricks thing? There seems to be a gap between data engineers forced to use Databricks and everyone else. At least as it's used in practice, Databricks seems to result in a mess of notebooks with poor dependency and version management.
Interesting, databricks has been my first exposure to DE at scale and it does seem to solve many problems (even though it sounds like it's causing some). So what does everyone else do? Run spark etc. themselves?
tbh I see just as much notebook-hell outside of dbx, it's certainly not contained to just them. There's some folks doing good SDLC with Spark jobs in java/scala, but I've never found it to be overly common, I see "dump it on the shared drive" equally as much lol. IME data has always been a bit behind in this area
personally you couldn't pay me to run Spark myself these days (and I used to work for the biggest Hadoop vendor in the mid 2010s doing a lot of Spark!)
We use AWS Glue for Spark (but are increasingly moving towards duckdb, because it's faster for our workloads and easier to test and deploy).
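The "easier to test" point is worth spelling out: with an in-process engine, a SQL transform is just a function you can call from an ordinary unit test, no cluster or job runner needed. A minimal sketch of the pattern (table and column names are made up; I've used stdlib sqlite3 here purely so it runs anywhere, but with duckdb it's the same shape, just `duckdb.connect()` instead):

```python
import sqlite3

def dedupe_latest(conn):
    """Keep only the latest row per id -- a typical warehouse-style transform."""
    conn.execute("""
        CREATE TABLE latest AS
        SELECT id, MAX(updated_at) AS updated_at
        FROM events
        GROUP BY id
    """)
    return conn.execute("SELECT COUNT(*) FROM latest").fetchone()[0]

# a unit test needs no infrastructure at all: in-memory db, tiny fixture, assert
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, updated_at TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(1, "2024-01-01"), (1, "2024-02-01"), (2, "2024-01-15")],
)
assert dedupe_latest(conn) == 2
```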
For Spark, Glue works quite well. We use it as 'Spark as a service', keeping our code as close to vanilla pyspark as possible. That leaves us free to write our code in normal Python files, write our own (tested) libraries for use in our jobs, use GitHub for version control and CI, and so on.
130,000 car thefts a year. That's over £1bn in losses, probably closer to £4bn. In that context, the total police budget of around £20bn seems remarkably low!
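Rough arithmetic behind that range (the per-vehicle loss figures here are my own assumptions, picked to show what average values would produce the £1bn and £4bn ends):

```python
thefts_per_year = 130_000

# hypothetical average loss per stolen car; the real distribution is unknown
for avg_loss in (8_000, 30_000):
    total_bn = thefts_per_year * avg_loss / 1e9
    print(f"£{avg_loss:,} per car -> £{total_bn:.1f}bn per year")
# £8,000 per car -> £1.0bn per year
# £30,000 per car -> £3.9bn per year
```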
You'd have thought it'd be worth insurance companies paying people to track down the thieves!
> Why bother tracking down thieves when you can just keep jacking up premiums? It's not like customers have a choice.
Insurance companies compete viciously with each other on price. Have you not seen their ads? If one could offer significantly cheaper insurance through some mechanism like that they definitely would.
They can choose not to pay for theft coverage. Different insurers compete, but improving theft prevention would obviously have to be a collaboration, as it wouldn't lower only one insurer's premiums.
A mere six- or seven-figure spend on lobbying and/or wining and dining the legislature will plug that loophole.
The legislature will spew some grandiose bullshit about how, in an effort to reduce costs, theft coverage is now one of the mandatory parts, and half of the population will eat it up.
> would have to be a collaboration as it wouldn’t be able to lower only one
Which will never happen, because reducing costs across the board is bad for insurers. Even if their margins are thin, a thin margin on a big number gets you more money. They don't care how high costs go as long as costs are uniformly distributed and/or predictable.