landryraccoon's comments | Hacker News

Their electricity costs are $10K per month, or about $120K per year. At an interest rate of 7%, that's roughly $1.7M of capital tied up in power bills.
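
To spell out the arithmetic (treating the power bill as a perpetuity is my assumption, and the 7% rate is just a round number):

    # Perpetuity capitalization: the lump sum whose annual return at
    # rate r would cover a recurring annual cost forever.
    annual_cost = 10_000 * 12      # $10K/month -> $120K/year
    rate = 0.07                    # assumed interest rate
    capital_equivalent = annual_cost / rate
    print(round(capital_equivalent))  # 1714286, i.e. roughly $1.7M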

At that rate I wonder if it makes sense to do a massive solar panel and battery installation. They're already hosting all of their compute and storage on prem, so why not bring electricity generation on prem as well?


At $120K per year over the three-year accounting life of the hardware, that's $360K... how do you get to $1.7M?


It seems unlikely to me that they'll never have to retrain their model to account for new data. Is the assumption that their power usage drastically drops after 3 years?

Unless they go out of business in 3 years that seems unlikely to me. Is this a one-off model where they train once and it never needs to be updated?


Let's just say we're not seeing all of these sudden private nuclear reactor investments for no reason.


That's moving the goalposts.

ChatGPT would easily have passed any test in 1995 that programmers / philosophers would have set for AGI at that time. There was definitely no assumption that a computer would need to equal humans in manual dexterity tests to be considered intelligent.

We've basically redefined AGI in a human centric way so that we don't have to say ChatGPT is AGI.


Any test?? It's failing plenty of tests not of intelligence, but of... let's call it not-entirely-dumbness. Like counting letters in words. Frontier models (like Gemini 2.5 pro) are frequently producing answers where one sentence is directly contradicted by another sentence in the same response. Also check out the ARC suite of problems easily solved by most humans but difficult for LLMs.


Yeah, but a lot of those failures stem from underlying architecture issues. This would be like a bee saying "ha ha, a human is not intelligent" because a human would fail to perceive UV patterns on plant petals.


The letter-counting could possibly be excused on this ground. But not the other instances.


That's just not true. Star Trek's Data was understood in the 90s to be a good science fiction example of what an AGI (known as Strong AI back then) could do. HAL was an even older one. Then Skynet with its army of terminators. The thing they all had in common was the ability to manipulate the world as well as or better than humans.

The holodeck also existed as a well known science fiction example, and people did not consider the holodeck computer to be a good example of AGI despite how good it was at generating 3D worlds for the Star Trek crew.


I think it would be hard to argue that ChatGPT is not at least as intelligent as the Enterprise computer (TNG).


I was around in 1995 and have always thought of AGI as matching human intelligence in all areas. ChatGPT doesn't do that.


Many human beings don’t match “human intelligence” in all areas. I think any definition of AGI has to be a test that 95% of humans pass (or you admit your definition is biased and isn’t based on an objective standard).


Can you be more specific? Which politician is using this fund in a corrupt fashion, to help which friend? Please provide names.

Or are you simply expressing the same meaningless general cynicism that is so predictably and boringly parroted whenever any government tries to do anything?


If you assume that the previous poster is right and do a casual web search for corruption with EU funds, will you find anything?

It is so prevalent that your question makes as much sense as demanding the names of soldiers killed in the Ukraine war to believe the war is real.


Only names?

Is it even valid unless it's full names, dates of birth, addresses, bank account numbers, receipts and notarized video evidence where they explicitly admit to being corrupt?


> Can you be more specific?

See the last wave of arrests in Brussels. (some defence lobby - they wouldn't touch the politicians).


Sounds like a very strong incentive for us to find a way to leave the planet and settle elsewhere.


And those settlements will not be full of exactly the same human beings?


Nah, Jesus says that's a sin; we can only stay here with our heavenly chosen barons, or go to heaven or hell.


Just to clarify, it’s good for cryptocurrency when a portion of that finite currency is irreversibly removed from circulation forever?

In that case I assume it’s best if all ETH were to cease to exist completely. I can’t really argue against that.


There's a wide spectrum between "some removed from circulation" and "the entire thing ceasing to exist". Think of this as a 400k token burn.


As long as it’s not your eth though. In that way, I guess yes it is good if you’re holding onto a ton of Furbies and there’s a massive Furby culling elsewhere.


For physical materials, like gold, this applies because gold is useful other than as a medium of exchange.

But ETH is a mathematical construct. If the argument holds, it should hold in the limit, right up until the very last measurable quantum of ETH is erased and that last bit holds the entire value of the cryptocurrency.

To put it another way, when does "deleting ETH is good for ETH" cease to be a valid argument?


When I really think about ETH (and others), I question whether market cap is really meaningful. Is the value actually in the coins in active circulation, with some lowering factor applied to big reserves? In that sense market cap is a lie, and the number of circulating tokens is what really indicates value; removing tokens from circulation would therefore remove value. It's just that calculating a true market cap is really hard, unlike, say, redeemable shares of an ETF.
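
A minimal sketch of that distinction, with entirely made-up numbers (the haircut applied to reserve tokens is purely illustrative):

    # Naive market cap vs. a circulating-supply-weighted value.
    price = 2_000.0          # hypothetical price per token, USD
    total_supply = 120e6     # hypothetical total tokens
    reserves = 40e6          # hypothetical tokens in big, inactive reserves
    reserve_discount = 0.5   # made-up "lowering factor" for reserves

    naive_cap = price * total_supply
    adjusted = price * (total_supply - reserves + reserves * reserve_discount)
    print(f"naive: ${naive_cap:,.0f}  adjusted: ${adjusted:,.0f}")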


The founder of Bitcoin literally said lost BTC is a gift to everyone else. There is likely some optimal amount of burning that increases the value of other tokens without destroying the network.


Japan has said AI can train on copyrighted materials.

https://www.privacyworld.blog/2024/03/japans-new-draft-guide...

I imagine if copyright is a big issue for AI, Japanese startups will have an advantage.


Does China need to say anything or can you guess their policy?


> no sane nation ever will buy a US weapon

I don’t think the US defense industry is worried. Do you know any sane nations around here?

In general it’s not like war is about who’s right or moral. It’s more about who can bring more and better weapons to the fight.


Canada will not buy any more F-35s. Their replacement parts can be shut off at any moment, and the current US President has stated publicly at least six times that they should become a US state.

A change of administration in the next few years will not undo the taboo that has been broken.

The US sells 120 billion dollars a year of weapons: https://www.state.gov/fiscal-year-2024-u-s-arms-transfers-an...


Baseless catastrophising. Turn off the news a bit.


> In general it’s not like war is about who’s right or moral. It’s more about who can bring more and better weapons to the fight.

I suspect, though I'm not sure, that the point was more that a nation that rejects science in favor of a different kind of ideological purity (lack of virtue signaling as its own kind of virtue signaling) is a nation whose weapons are no longer likely to be the better ones.


I think the point is that the US is no longer a reliable ally, so people will no longer want to buy weapons systems for which the US might withdraw support at any moment.


That seems to be a premature assessment. Trump paused a weapons shipment to Ukraine, then allowed it to continue. No other shipments have been held up at all. Our allies continue to receive the output of the American arms industry just as they did six months ago.

In fact, the latest news is that Trump is expanding our circle of friends when it comes to advanced weapons, offering India the opportunity to purchase F-35s.


It's a long-term shift. European militaries are built on the assumption of US support via NATO in the event of an attack on a member state. That assumption is looking very shaky now, and it will affect future planning.


They are built on the assumption that the US will subsidise their defence almost entirely. They are rapidly realising that the US defending them will be conditional on them defending themselves. This is not a serious threat; it is just pressure. But it has resulted in increased investment in defence in Europe, which is really important for global security.


>They are built on the assumption that the US will subsidise their defence almost entirely.

This is an exaggeration. The larger European military powers spent enough on defence to be able to operate without US assistance in a range of circumstances (e.g. the Falklands war). But Europe did not anticipate having to defend itself against Soviet aggression without US assistance.

In fairness, it would have seemed crazy, not so long ago, to think that the POTUS would be on friendlier terms with the leader of a Russian dictatorship than with the democratically elected leaders of France and Germany. But here we are.


Key word - "spent".

The Falklands conflict was almost half a century ago. It is doubtful that the UK could respond today like they did in 1982.

The video "The Navy With More Admirals Than Warships"[1] addresses that exact scenario, and talks about the current woeful state of the British Navy today.

You are right to question who your allies are.

[1]: https://www.youtube.com/watch?v=po9duwvipB0


I think that's somewhat unclear. In principle, the new carriers with F-35s should have a much bigger advantage over the Argentinian navy and air force than the tiny carriers with Harrier jump jets that we sent to the Falklands did. But of course it is impossible to know until such a thing actually happens. (Even in the case of a far more antecedently plausible conflict, such as Russia/Ukraine, expert predictions were all over the place, and mostly wrong.)


Given the rejection of science and extreme virtue signaling the US has had for the last 20 years, I’m a bit skeptical that a slight turning back from that would make any difference to our allies.


>> In general it’s not like war is about who’s right or moral. It’s more about who can bring more and better weapons to the fight.

"You have the watches. We have the time." There are many ways to win a war.


This is a general objection to AI responding to real-world events: "What if something unexpected happens?" It comes up in self-driving as well, with things like "What if something suddenly appears in the middle of the road?" or "Can it drive in snow conditions with zero visibility?"

My question is: how do we know that human beings in general respond better to unexpected or very complex, difficult situations than an automated system would? Yes, human beings can improvise, but automated systems can have reaction times more than an order of magnitude faster than those of even the quickest humans.

I'd like to see some statistics on the opposing hypothesis : How good are humans, really, when encountering unexpected situations? Do they compare better with automated systems in general?

Here's a competing hypothesis: An automated system can incorporate training data based on every recorded incident that has ever happened. Unless a situation is so unexpected that it has literally never happened in the history of aviation, an AI system can have an example of how to handle that scenario. Is it really true that the average human operator would beat this system in safety and reliability? How many humans know how to respond to every rare situation that has ever happened? It's at least possible that the AI does better on average.


In theory, everything works. In practice, we can't even master automated driving on two-dimensional streets, with painted lanes, relatively slow speeds, and cars that can just stop in case a decision can't be made. If we can't make that happen, how do you expect the same with higher speeds, an additional dimension, planes with radio only (no additional telemetry), and pilots with heavy accents?


>Unless a situation is so unexpected that it has literally never happened in the history of aviation,

I would say this is actually the most likely scenario for an edge case. The sheer number of variables make it unlikely that the same unexpected event would happen twice.

In an emergency situation, the combination of the emergency, ground conditions, weather, visibility, instrumentation functionality, and surrounding aircraft is most frequently going to be unique.


> I'd like to see some statistics on the opposing hypothesis : How good are humans, really, when encountering unexpected situations? Do they compare better with automated systems in general?

This is already out there. You can go research how Airbus and their automation works in practice.

You can also listen to air traffic control recordings to get an idea of what types of emergencies exist and how often they happen. I'm sure the FAA has records you can look at. :)

Now apply that to something three orders of magnitude more complex.


Fwiw I agree with you.

I feel that in general people obsess over assigning blame to the detriment of actually correcting the situation.

Take the example of punishing crimes. If we don’t punish theft, we’ll get more theft right? But what do you do when you have harsh penalties for crime, but crime keeps happening? Do you accept crime as immutable, or actually begin to address root causes to try to reduce crime systemically?

Punishment is only one tool in a toolbox for correcting bad behavior. I am dismayed that people are fearful enough of the loss of this single tool as to want to architect our entire society around making sure it is available.

With AI we have a chance to chart a different course. If a machine makes a mistake, the priority can and should be fixing the error in that machine so the same mistake can never happen again. In this way, fixing an AI can be more reliable than trying to punish human beings ever could.


I'm always hesitant to enter "punishing crimes" discussions on this one. Those, by definition, establish intent to commit the crime in a majority of cases. As such, they would almost certainly hit some "accountability" even if they were in a company. Heck, even qualified immunity for government actors typically falls on that.

That said, I do think we are in alignment here. Punitive actions are but a tool. I don't think it should be tossed out. But I also suspect it is one of the lesser effective tools we have.


Can you explain why? That doesn’t seem like sound reasoning to me.

If you believe the reason is corruption, what personal incentive would the courts have to rule this way? Judges can easily be the victim of government overreach as well.


And having a log of every conversation and keystroke that any outspoken judge has ever made gives you all sorts of ways to align their opinions with the overriding importance of "National Security".


They were being sarcastic with no reasoning or explanation and there's no logical reason it ought to be the top comment in this thread.


Data is wealth and power, and today's Supreme Court will always side with wealth and power.


I'm not sure what this means. The Supreme Court will side with data because it is wealth and power? What side is an inanimate object on?


Data in this case is on the side of wealth and power. You're also wrong about it being an inanimate object for two reasons. Reason A: Data is not an object but a pattern of objects or attributes of object(s) wherein the arrangement forms symbols according to a standard or protocol. Reason B: Data is quite animated when moving e.g. in response to searches or otherwise (re)-transmitted.


We have more than one wealthy and powerful person. They're rarely all on the same side.

So regardless of how courts rule on most issues, one could almost always argue they're taking the side of wealth and power and against the side of wealth and power.

And data is inanimate in the dictionary definition sense of it not being alive. That other things can move it doesn't make it animate.

