In a manner of speaking, the grid is already the storage mechanism. In summer you sell the excess to the grid; in winter you buy it back. Obviously you pay more to buy than you get for selling, but that's the premium for using someone else's infrastructure. You'd spend a load more buying a battery the size of a small house.
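As a rough back-of-the-envelope sketch (every number below is a made-up assumption, not a real tariff or battery price):

    # Grid-as-storage premium vs. a home battery. Illustrative numbers only.
    export_kwh = 3000              # summer surplus sold to the grid, kWh/year (assumed)
    sell_price = 0.05              # feed-in rate per kWh (assumed)
    buy_price = 0.30               # winter import rate per kWh (assumed)

    # The "storage fee" is the spread you pay to round-trip energy via the grid.
    annual_premium = export_kwh * (buy_price - sell_price)   # 750/year

    # To shift that surplus yourself you'd need battery capacity on the
    # same order as the surplus, at an assumed cost per kWh of capacity.
    battery_cost = 3000 * 300      # 900,000 (assumed 300 per kWh of capacity)
    battery_life_years = 10

    print(f"Grid round-trip premium: {annual_premium:.0f}/year")
    print(f"Battery amortised cost:  {battery_cost / battery_life_years:.0f}/year")

With these made-up numbers the grid premium is two orders of magnitude cheaper than amortising a seasonal-scale battery.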
Bear in mind this is a right-wing newspaper with an anti-migrant bias, misrepresenting the story for its audience. The police are looking for signs of violence being planned; nothing to do with free speech. An alternative view: https://www.theguardian.com/uk-news/2025/jul/27/police-unit-...
But UK police do routinely arrest and jail people for social media posts that would count as free speech in sane countries, including skepticism of mass migration.
It's nice if you enjoy coding, I agree. I don't enjoy it, so I just have Claude do everything. I have to review all the code it writes, of course, or at the very least the function signatures and general architecture, but I can't be arsed looking up API/library docs myself ever again.
Only at first glance. It can easily render things that would be very hard to implement in an FPS engine.
What AI can dream up in milliseconds could take hundreds of human hours to encode using traditional tech (meshes, shaders, ray tracing, animation, logic scripts, etc.), and it still wouldn't look as natural and smooth as AI renderings; I'm referring to the latest developments in video synthesis, like Google's Veo 3. Imagine that as a game engine running in real time.
Why is it so hard, even for technical people here, to make the inductive leap on this one? Is it that close to magic? The AI is rendering pillars and also handling collision detection for them. As in, no one went in there, selected a bunch of pillars, and marked them as barriers. That means that in the long run, I'll be able to take some video or pictures of the real world and have it become a game level.
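To make the claim concrete, the control loop such a model implies is tiny. A hypothetical sketch (nothing here is a real API; model_step is a stand-in for a trained next-frame predictor):

    import numpy as np

    def model_step(history, action):
        # Stand-in for a learned video model: given recent frames and a
        # control input, predict the next frame. Collision isn't scripted
        # anywhere; if the camera won't pass through a pillar, that
        # behaviour was absorbed from training footage.
        rng = np.random.default_rng(len(history))
        return rng.integers(0, 256, size=history[-1].shape, dtype=np.uint8)

    frames = [np.zeros((720, 1280, 3), dtype=np.uint8)]   # seed frame
    for action in ["forward", "forward", "turn_left"]:
        frames.append(model_step(frames[-8:], action))    # short context window

The whole "engine" is the predictor; geometry, physics, and rendering are entangled in its weights rather than authored as separate systems.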
Because that's been a thing for years already, and it works way better than this research does.
Unreal Engine 5 has been demoing these features for a while now; I heard about it in early 2020, iirc, but techniques like Gaussian splatting predate it.
I have no experience with either of these, but I believe Megascans and RealityCapture are two examples doing this. And the last Nanite demo touched on it, too.
I'm sorry, what's a thing? Unreal Engine 5 does those things with machine learning? Imagine someone shows me Claude generating a full React app, and I say, "Well, you see, React apps have always been a thing." The thing we're talking about is AI, nothing else; that there is no other thing is the whole point of the AI hype.
What they meant is that 3D-scanning real places and translating them into 3D worlds with collision already exists, and it provides much, much better results than the AI videos here do. Additionally, it doesn't need what is likely hours of random footage wandering around the space, just a few minutes of scanning.
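For flavour, the core of that scan-to-level path can be sketched in a few lines with Open3D (the file names are placeholders, and real tools like RealityCapture wrap this plus texturing in a much more robust pipeline):

    import open3d as o3d

    # A point cloud from photogrammetry or a few minutes of LiDAR scanning.
    pcd = o3d.io.read_point_cloud("room_scan.ply")
    pcd.estimate_normals()

    # Poisson reconstruction turns the points into a watertight surface.
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

    # Decimate heavily: a collision mesh needs far fewer triangles than
    # the render mesh the engine actually draws.
    collision = mesh.simplify_quadric_decimation(target_number_of_triangles=20000)
    o3d.io.write_triangle_mesh("room_collision.obj", collision)

The render mesh and collision mesh come out of the same scan, which is why the existing pipeline needs so much less input footage.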
I think an actual 3D engine with AI that can generate new, high-quality 3D models and environments on the fly would be the pinnacle; maybe it could even add new game and control mechanics on the fly.