Hacker News | evanevan's comments

The important bit (which seems unclear from this article): what exactly is the relationship between the for-profit and the nonprofit?

Before, profits were capped, with the remainder going to the nonprofit to distribute benefits equally across the world in the event of AGI / massive economic progress from AGI. Which was nice: at least on paper, a plan for an “exit to humanity”.

This reads to me like the new structure might offer uncapped returns to investors, with a small fraction reserved to benefit the wider public via this nonprofit. So dropping the “exit to humanity”, which seemed like a big part of OpenAI’s original vision.

Early on they did some good research on this too: thinking about the investor model and its benefits for raising money, accountability, etc. in today’s world, vs. what the right structure could be post-SI, and taking that conversation pretty seriously. So it’s sad to see OpenAI seemingly drifting away from some of that body of work.


I hadn’t considered that this might be their sly way to remove the 100x cap on returns. Lame.


Who gets to decide seems important. All of this is, at the end of the day, controlled by OpenAI, right?

I wonder: is there a way the AGI could be instructed to weight all of our preferences equally?

Seems vital, so that we end up with something in the interest of all of us, rather than just having to hope that the few entities controlling AGI do the right thing - especially given the historical precedent of the risks of entrusting systems that are supposed to benefit everyone to small groups.


The concept you are describing is called Coherent Extrapolated Volition.

> In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.


Thanks - yeah, I think CEV hits the mark. Would be awesome if OpenAI (and others) committed to CEV then (with, I'd add, the qualification of equal weighting on each person).
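
To make the equal-weighting part concrete, here's a toy sketch (entirely my own illustration - the names, options, and scores are made up, and real CEV would be vastly harder than averaging stated preferences): every person's preference counts for exactly one vote, and the aggregate picks whichever option has the highest mean score.

    # Toy sketch of equal-weight preference aggregation (hypothetical data;
    # not anything OpenAI has committed to, and nothing like full CEV).
    from statistics import mean

    def aggregate(prefs: dict[str, dict[str, float]]) -> str:
        """prefs maps person -> {option: score}. Each person weighs equally."""
        options = {o for scores in prefs.values() for o in scores}
        return max(options, key=lambda o: mean(p.get(o, 0.0) for p in prefs.values()))

    votes = {
        "alice": {"restrictive": 0.2, "permissive": 0.8},
        "bob":   {"restrictive": 0.6, "permissive": 0.4},
        "carol": {"restrictive": 0.1, "permissive": 0.9},
    }
    print(aggregate(votes))  # -> permissive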

Would be curious how this would play out for LLMs too. What do people actually want? I would guess most people think OpenAI is being too restrictive on what ChatGPT does, but it would be interesting to actually see that play out and see how people are thinking about it.


No entities control AGI because it doesn't exist.


Short bet: five years.

Even LLM weaknesses are just snafus. We've already "replaced", or made inroads into automating, the jobs of artists, voice actors, copy editors, Go/chess/video game players, and more. I'm failing to communicate the breadth here.

Real actors, singers, manufacturing workers, software developers, and a whole host of other roles are shortly to follow. As capital pours in, there will be rapid maturation.

Each of these advances shows a comprehensive understanding of the complexity and nature of the task AI is applied to. If we keep piling on these victories, we'll have an approach to agency and consciousness in short order. Everyone is trying to solve this now, and we've made so much progress.

The pace of innovation is staggering and should shake the foundations of the models through which you understand and predict the world and the future. We're looking at a fundamentally different set of possible outcomes for 2030. It's almost impossible to predict, as our entire economic system may lurch forward into the next "industrial revolution".


Not yet... but one day soon, no? Seems good to start planning now.


One thing you could do is have the camera hardware sign the raw image, then produce a zero-knowledge proof that says the original photo was signed by real hardware, and that the only edits applied to it were simple things like cropping, color changes, etc.
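
For the signing half alone, a minimal sketch (Python, using the real `cryptography` package's Ed25519 API; the zero-knowledge proof that the edits were benign is the hard part and is omitted here):

    # Sketch of the hardware-signing step only. Assumes the camera holds a
    # private key in a secure element and the manufacturer publishes the
    # matching public key; the ZK "only benign edits" proof is not shown.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    camera_key = Ed25519PrivateKey.generate()   # in reality: burned into the camera
    public_key = camera_key.public_key()        # in reality: published by the vendor

    raw_image = b"...raw sensor bytes..."
    signature = camera_key.sign(raw_image)      # produced at capture time

    # Later, anyone can check the photo came from real hardware;
    # verify() raises InvalidSignature on a forgery.
    public_key.verify(signature, raw_image)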

Seems reasonable to coordinate the updates with a blockchain (though not necessary for use cases where you trust the platform / coordinating entity).

Disclaimer: I work at a zkSNARK / blockchain company :)


So like most solutions involving blockchains, no blockchain is actually required.


Hi, I'm one of the founders of Coda -

In Coda we use zero-knowledge proofs to stand in for downloading and checking the blockchain. This means you get computation identical to normal blockchain syncing, but in only the size of the zero-knowledge proof and your account, which ends up being constant at ~20kb. That makes a big difference if you want to use Coda from a phone or browser.
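
Roughly, the trick is recursive composition: each proof attests both that the previous proof verified and that the newest block is valid, so a client only ever needs the latest one. A toy sketch of just that recursion shape (plain hashes standing in for SNARKs, so there's no actual zero knowledge or succinctness here):

    # Toy illustration of recursive chain verification -- NOT real SNARKs.
    # In Coda, attest() would emit a zkSNARK saying "the previous proof
    # verified, and applying `block` to its state gives the new state",
    # checkable in constant time.
    import hashlib

    def h(s: str) -> str:
        return hashlib.sha256(s.encode()).hexdigest()

    def attest(prev_proof: str, block: str) -> str:
        # Each "proof" commits to the previous proof, so checking the
        # newest one transitively covers all of history.
        return h(prev_proof + ":" + block)

    proof = h("genesis")
    for block in ["b1", "b2", "b3"]:
        proof = attest(proof, block)

    # A light client keeps only `proof` (constant size), never the blocks.
    print(proof)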

Check out this video if you want to understand more of the tech behind it: https://www.youtube.com/watch?v=eWVGATxEB6M

And feel free to ask any other questions as well!


What's the difference with Vault?


Biggest difference is Coda's constant ~20kb vs. at least a few hundred megabytes still for Vault. This matters because it helps make cryptocurrency and cryptocurrency apps usable from phones and browsers - at 20kb with Coda, that's actually possible.

I haven't looked closely into whether any consensus / security assumptions differ with Vault, but that could be another way for them to get down to a few hundred MBs without zero-knowledge proofs.

