
They are changing it for publicity reasons. It has nothing to do with whether or not it works.

This whole thing is being done as a reaction to this video:

https://youtu.be/YeABJbvcJ_k?t=1540

The article completely skipped over this. This video was released literally a week ago and is completely mocking the FAA. Floppy disks are a big joke in this video.


Luck is a huge factor for many forms of success. It is also the least attributed factor.

Partly because it’s the least controllable factor.


I have a question. How come the mathematical modeling and simulations haven't yet yielded us the perfect design that will get things right?

How come we have to build it and test it to know if it works?

Do we lack a mathematical model?


Same question got asked of Bob Bussard when he visited Google to talk about his whiffle-ball design. It's not that we lack models, it's that they're effectively incomputable at the scale we'd need them to be.

In a fluid, effects are local: a particle can only directly affect what it is in direct contact with.

In a plasma, every particle interacts with every other. One definition of a plasma is that the motion is dominated by electromagnetic effects rather than thermodynamic: by definition, if you have a plasma, the range of interactions isn't bounded by proximity.

This doesn't apply quite so much to (e.g.) laser ignition plasmas, partly because they're comparatively tiny, and partly because the timescales you're interested in are very short. So they do get simulated.

But bulk simulations the size of a practical reactor are simply impractical.
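To put rough numbers on that, here is a toy sketch (my own illustration, not a physics code): a fluid-style update with a fixed local stencil grows linearly with particle count, while a plasma-style all-to-all interaction grows quadratically, which is what makes reactor-scale runs impractical.

```haskell
module Main where

-- Toy cost comparison, not a simulation: count interactions per timestep.
localInteractions :: Integer -> Integer
localInteractions n = n * 26          -- each particle only touches a fixed set of neighbours

allPairInteractions :: Integer -> Integer
allPairInteractions n = n * (n - 1)   -- every particle interacts with every other

main :: IO ()
main = mapM_ report [10 ^ 3, 10 ^ 6, 10 ^ 9]
  where
    report n = putStrLn $ show n ++ " particles: local " ++ show (localInteractions n)
                        ++ " vs all-pairs " ++ show (allPairInteractions n)
```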


Putting a bunch of much more viscous radioactive material in proximity to each other is simpler than squishing plasma and maintaining its confinement under extreme conditions.

Fission reactors are relatively "easier" to simulate: giant finite-element / Monte Carlo simulations over roughly voxel-sized regions of space, tracking thermal conductivity, heat capacity, etc. I happened to have been involved with one that was 50+ years old and worked just fine, because of all the physicists and engineers who carefully crafted model data and code to reflect what would be likely to happen in reality when testing new, conventional reactor designs.

The problems with fusion are many orders of magnitude more involved and complex, with wear, losses, and "fickleness", compared to fission.

Thus, experimental physics meeting engineering and manufacturing in new domains is expensive and hard work.

Maybe in 200 years there will be an open-source, 3D-printable fusion reactor. :D


The difficulty is in the details. Small differences lead to bigger differences, like in chaos theory. [0] What if the model says this coil needs to be 23.1212722 centimeters? Or two coils need to be 37.1441129 centimeters apart? How do you build that? Mathematics is always much more precise than engineering.

[0] https://en.wikipedia.org/wiki/Edward_Norton_Lorenz#Chaos_the...
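As a toy illustration of that sensitivity (my own sketch using the logistic map, nothing to do with any real coil model): two starting values that differ only in the seventh decimal place end up nowhere near each other after a few dozen iterations.

```haskell
module Main where

-- Logistic map in a chaotic regime: tiny input differences blow up.
step :: Double -> Double
step x = 3.9 * x * (1 - x)

main :: IO ()
main = do
  let a = iterate step 0.1212722 !! 40   -- the "as designed" value
      b = iterate step 0.1212723 !! 40   -- off by 0.0000001
  putStrLn $ "from 0.1212722: " ++ show a
  putStrLn $ "from 0.1212723: " ++ show b
```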


You need to think of what happens if it's 0.001 cm too big, small, etc. Manufacturing always involves errors and engineering requires tolerances.

Doesn’t our engineering account for that? We never build things that need that level of detail in our tolerances. Like for example an airplane.

OP "How come the mathematical modeling and simulations haven't yet yielded us the perfect design that will get things right?"

Right, how come we don't have models that allow us to simulate designs with bigger tolerances? That's the perfect design.

We still have wind tunnels and aerodynamics is a pretty simple problem compared to fusion.

The epiphany I had with Haskell was understanding why it is so modular. It goes beyond just understanding Haskell, though, and towards understanding the fundamental nature of modularity.

Think about it. What is the smallest most primitive computation that is modular?

If you have a method that mutates things, then that method is tied to the thing it mutates and everything else that mutates it. You can't move a setter away from the entity it mutates without moving the entity with it. It's not modular.

What about an IO function? Well, for an IO function to work, it must be tied to the IO. Is it a SQL database? A chat client? If you want to move the IO function, you have to move the IO it is intrinsically tied to.

So what is the fundamental unit of computation? The pure function. A function that takes an input and returns an output deterministically. You can move that thing anywhere and it is also the smallest computation possible in a computer.

The reason why Haskell is so modular is that it forces you to segregate IO from purity. The type system automatically forces you to write code using the most modular primitives possible. It's almost like how Rust forces you to code in a way that prevents certain errors.

Your code in Haskell will be made entirely of "pure" functions, with things touching IO segregated away via the IO monad. There is no greater form of modularity and decoupling.

In fact, any other computing primitive you use in your program will inevitably be LESS modular than a pure function. That's why when you're programming with OOP or imperative or any other style of programming, you'll always hit more points where modularity is broken than you would in, say, Haskell. That is not to say perfect modularity is the end goal, but if it were, the path there is functional programming.
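A minimal sketch of what that looks like in practice (my own toy example, not from any particular codebase): the logic is a pure function you can move to any module or project untouched, and only a thin shell in main knows anything about where the data comes from or goes.

```haskell
module Main where

import Data.List (group, sort)

-- Pure: deterministic, no IO, freely movable anywhere.
wordFrequency :: String -> [(String, Int)]
wordFrequency = map (\ws -> (head ws, length ws)) . group . sort . words

-- Impure shell: the only part tied to a particular source and sink of IO.
main :: IO ()
main = do
  contents <- getContents
  mapM_ print (wordFrequency contents)
```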


The way I like to picture the IO monad is that you don't have direct access to the values coming from IO. You can only give the monad some functions, which it applies to the values. Among these functions there are some that tell the monad to send values back out (print to screen, send a request, etc.). Basically, all these functions make up your program.
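A hedged sketch of that picture (again just a toy of mine): you never hold the String from getLine directly; you hand the monad a function to apply to it, and putStrLn is one of the functions that sends a value back out.

```haskell
module Main where

import Data.Char (toUpper)

main :: IO ()
main =
  getLine                               -- a value we never touch directly
    >>= \line ->                        -- the monad applies our function to it
          putStrLn (map toUpper line)   -- ...and sends a value back out
```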

What's wrong with bikini models? I want the product and the bikini model.

I feel Rust promotes functional programming. I created a parser that would change its own state on an advance, but the mutability and borrowing rules kind of made it hard to do it that way, so I changed it so that the parser was stateless and returned an index rather than adjusting an internal one. Is it common for people to hit issues like this, where the traditional pattern just doesn't work and you have to do it a completely different way in Rust?

I've experienced this. I would say it depends a lot on the complexity at hand, so for instance if it's something rather simple, you can keep track and avoid pitfalls even when doing it with a lot of mutation and imperative style. On the other hand, when complexity grows, I find myself going more functional and avoiding mutation at all cost, as obviously it avoids a lot of problems.

I would say Rust's borrow checker and lifetimes make it harder to use traditional patterns and favor a more functional approach. Obviously, doing it the functional way might sometimes be hard for people not so used to it, but it keeps the compiler happier.


No, it's a realistic possibility. Not just marketing.

It may not replace us, and it also may. Given the progress of the last decade in AI, it's not far off to say that we will come up with something in the next decade.

I hope nothing will come of it, but it's unrealistic to say that something definitively will not replace us.


You can’t project trends out endlessly. If you could, FB would have 20B users right now based on early growth (just a random guess, you get the point). The planet would have 15B people on it based on growth rate up until the 90s. Google would be bigger than the world GDP. Etc.

One of the more bullish AI people (Sam Altman) has said that model performance scales with the log of compute. Do you know how hard it will be to move that number? We are already well into diminishing returns with current methodologies, and there is no one pointing the way to a breakthrough that will get us to expert-level performance. RLHF is currently underinvested in, but it will likely be the path to get us from junior contributor to mid-level in specific domains; that still leaves a lot of room for humanity.

The most likely reason for my PoV to be wrong is that AI labs are investing a lot of training time into programming, hoping the model can self improve. I’m willing to believe that will have some payoffs in terms of cheaper, faster models and perhaps some improvements in scaling for RLHF (a huge priority for research IMO). Unsupervised RL would also be interesting, albeit with alignment concerns.

What I find unlikely with current models is that they will show truly innovative thinking, as opposed to the remixed ideas presented as “intelligence” today.

Finally, I am absolutely convinced today’s AI is already powerful enough to affect every business on the planet (yes even the plumbers). I just don’t believe they will replace us wholesale.


>You can’t project trends out endlessly.

But this is not just an endless projection. In one sense we can't have economic growth and energy consumption grow endlessly, as that would eat up all the available resources on Earth; there is a physical hard line.

However, for AI this is not the case. There is literally an example of human-level intelligence existing in the real world. You're it. We know we haven't even scratched the limit.

It can be done, because an example of the finished product is humanity itself. The question is: do we have the capability to do it? And for this we don't know. Given the trend, and the fact that a finished product already exists, it is totally realistic to say AI will replace our jobs.


There's no evidence we're even on the right track to human-level intelligence, so no, I don't think it's realistic to say that.

Counterpoint: our brains use about 20 watts of power. How much does AI use again? Does this not suggest that it's absolutely nothing like what our brains do?


There is evidence we're on the right track. Are you blind? The evidence is not definitive, but it's evidence that makes it a possibility.

Evidence: ChatGPT and all LLMs.

You cannot realistically say that this isn't evidence. Neither of these things guarantees that AI will take over our jobs but they are datapoints that lend credence to the possibility that it will.

On the other side of the coin, it is utterly unrealistic to say that AI will never take over our jobs when there is also no definitive evidence on this front.


> unrealistic to say that AI will never take over our jobs

That's not my position. I'm agnostic. I have no idea where it'll end up, but there's no reason to have a strong belief either way.

The comment you originally replied to is I think the sanest thing in here. You can't just project out endlessly unless you have a technological basis for it. The current methodologies are getting into diminishing returns and we'll need another breakthrough to push it much further

This is turning into a religious debate.


Then we're in agreement. It's clearly not a religious debate; you're just mischaracterizing it that way.

The original comment I replied to is categorically wrong. It's not sane at all when it's rationally and factually not true. We are not projecting endlessly. We are hitting the one-year mark of a bumpy upward trendline that's been going for over 15 years. This one-year mark is characterized by a bump of slightly diminishing returns in LLM technology that is being over-exaggerated as an absolute limit of AI.

Clearly we've had all kinds of models developed in the last 15 years, so one blip is not evidence of anything.

Again, we already have a datapoint here. You are a human brain; we know that an intelligence up to the human level can be physically realized, because the human brain is ALREADY a physical realization. It is not insane to draw a projection in that direction, and it is certainly not an endless growth trendline. That's false.

Given the information we have, you gave it an "agnostic" outlook, which is 50/50. If you had asked me 10 years ago whether we would hit AGI, I would've given it a 5 percent chance, and now both of us are at 50/50. So your stance actually contradicts the "sane" statement you said you agree with.

We are not projecting infinite growth, and you disagree with that, because in your own statement you believe there is a 50 percent possibility we will hit AGI.


Agnostic, at least as I was using it, was intended to mean 'who knows'. That's very different from a 50% possibility.

"You are a human brain, we know that an intelligence up to human intelligence can be physically realized" - not evidence that LLMs will lead to AGI

"trendline that's been going for over 15 years" - not evidence LLMs will continue to AGI, even more so now given we're running into the limits of scaling it

AI winter is a common term for a reason. We make huge progress in a short amount of time, everyone goes crazy with hype, then it dies down for years or decades

The only evidence that justifies a specific probability is going to be technical explanations of how LLMs are going to scale to AGI. No one has that

1. LLMs are good at specific, well defined tasks with clear outcomes. The thing that got them there is hitting its limit

2. ???

3. AGI

What's the 2?

It matters, because everyone's hyped up and saying we're all going to be replaced, but they can't fill in the 2. It's a religious debate because it's blind faith without evidence.


>Agnostic, at least as I was using it, was intending to mean 'who knows'. That's very different from a 50% possibility

I take "don't know" to mean the outcome is 50/50 either way because that's the default probability of "don't know"

> not evidence LLMs will continue to AGI, even more so now given we're running into the limits of scaling it

Never said it was. The human brain is evidence of what can be physically realized and that is compelling evidence that it can be built by us. It's not definitive evidence but it's compelling evidence. Fusion is less compelling because we don't have any evidence of it existing on earth.

>AI winter is a common term for a reason. We make huge progress in a short amount of time, everyone goes crazy with hype, then it dies down for years or decades

AI winter refers to a singular event in the entire history of AI. It is not a term for a common occurrence, as you seem to imply. We had one winter, and that is not enough to establish a pattern that it is going to repeat.

>1. LLMs are good at specific, well defined tasks with clear outcomes. The thing that got them there is hitting its limit

What's the thing that got them there? Training data?

>It matters.. because everyone's hyped up and saying we're all going to be replaced but they can't fill in the 2. It's a religious debate because it's blind faith without evidence

The hype is in the other direction. On HN everyone is overwhelmingly against AI and making claims that it will never happen. Also, artists have already been replaced. I worked at a company where artists did in fact get replaced by AI.


My hope is that AI never improves enough to take over my job, and that the next generation of programmers is so used to learning programming with AI that they become mirrors of hallucinating AI. That eliminates ageism and AI taking my job.

Realistically, though, I think AI will come to a point where it can take over my job. But if not, this is my hope.


Maybe we can learn some lessons from digital artists, who naturally fret over the use of their skills and how they will be replaced by Stable Diffusion and friends.

In one way, yes, this massively shifts power into the hands of the less skilled. On the other hand, if you need some proper, and I mean proper, marketing materials, who are you going to hire? A professional artist using AI or some dipshit with AI?

There will be slop, of course, but after a while everyone has slop, and the only differentiating factor will be quality, or at least some gate-kept, arbitrary level of complexity. Like how rich people want fancy handmade stuff.

Edit: my point is mainly that the level will rise to a point where you'd need to be a scientist to create a, by then, fancy app again. You see this with the web. It was easy; we made it ridiculously, and I mean ridiculously, complicated, to where you need to study computer science to debug React rendering for your marketing pamphlet.


I'm tired of using AI in cloud services. I want user-friendly, locally owned AI hardware.

Right now nothing is consumer-friendly. I can't get a packaged deal of some locally running ChatGPT-quality UI or voice-command system in an all-in-one package. Like what the Mac did for PCs, I want the same for AI.


Your local computer is not powerful enough, and that's why you must welcome those brand new mainframes... I mean, "cloud services."

It is funny how using a web IDE and a cloud shell is such déjà vu from when I used to do development on a common UNIX server shared by the whole team.

Telnet from a Wyse terminal.

My first experience with such a setup was connecting to DG/UX via the terminal application on Windows for Workgroups, or some thin-client terminals in a mix of green or amber phosphor, spread around the campus.

The only time I used a Pascal compiler in ISO Pascal mode, it had the usual extensions inspired by UCSD, but we weren't allowed to use them on the assignments.


My local computer is not powerful enough to run training, but it can certainly run an LLM. How do I know? Many other people and I have already done it. DeepSeek, for example, can be run locally, but it's not a user-friendly setup.

I want an Amazon Echo-style agent running my home with a locally running LLM.


Oracle just announced they are spending $40 billion on GPU hardware. All cloud providers have an AI offering, and there are AI-specific cloud providers. I don't think retail is invited.

From the most unexpected place (but maybe expected if you believed they were paying attention)

Maxsun is releasing a 48GB dual Intel Arc Pro B60 GPU. It's expected to cost ~$1000.

So for around $4k you should be able to build an 8-core, 192GB local AI system, which would allow you to run some decent models locally.

This also assumes the community builds an Intel workflow, but given how greedy Nvidia is with VRAM, it seems poised to be a hit.


The price of that system is unfortunately going to end up being a lot more than $4k: you'd need a CPU that has at least 64 lanes of PCIe. That's going to be either a Xeon W or a Threadripper CPU; with the motherboard, RAM, etc., you're probably looking at at least another $2k.

Also kind of a nitpick, but I'd call that an 8-GPU system; each BMG-G21 die has 20 Xe2 cores. Even though it would be 4 PCIe cards, it is probably best to think of it as 8 GPUs (that's how it will show up in stuff like PyTorch), especially because there is no high-speed interconnect between the GPU dies colocated on the card. Also, if you're going to do this, make sure you get a motherboard with good PCIe bifurcation support.


I made something[0] last year to have something very consumer-friendly. Unbox -> connect -> run. The first iteration is purely to test out the concept and is pretty low-power; I'm currently working on a GPU version for bigger models, launching Q4 this year.

[0] https://persys.ai


Hoping the DGX Spark will deliver on this

It will not. 273 GB/s of memory bandwidth is not enough.

I love the Go philosophy of errors being silent by default. It forces us developers to handle things gracefully. Additionally, there are no stack traces along with the errors, which is good because devs are too reliant on stack traces nowadays.

This sounds like it needs an /s.

The tone of the complaint about developers "being reliant on stack traces" sounds an awful lot like: "real programmers just use punch cards and ignore any tooling we've come up with in the last 70 years; just use your brain."


It’s not sarcasm. This is just my personal philosophy. I feel attacked.

I speak in hyperbole here. I only point out that the giants whose shoulders we stand on left us fantastic tools to use.

There is something to the philosophy of keeping your interface 'dark' such that you learn to 'shine'. But that's something I ascribe to a personal development journey.

