Hacker News

We never pitched him on the Singularity Institute (http://singularity.org/research/), but I doubt he's ever been seriously pitched on anything more ambitious than "build a recursively self-improving Artificial Intelligence with a stable, specifiable goal system so that it can improve into a superintelligence and do world optimization." If he's been pitched on anything more frightening than that, I'd really like to know what.


Honest question: Does SI actually do anything other than hold conferences and take donations so smart people don't have to get real jobs?


A cynic would say that the main activity of SI's Research Fellows is to type up their Science Fiction and Philosophy musings and upload them as PDFs to their website. Someone less cynical would say that these musings are fundamental research in the area of self-replicating AIs that will enable other people to build said AIs in the future.

I tend to lean toward the cynical interpretation.


But the appropriate area for fundamental research is in explaining consciousness and intelligence so that in the future someone trying to build an AI can actually define what they are trying to build.

That kind of science takes large scale organization and specialization by each person in a narrow field. It carries enormous organizational costs. It's also frequently quite a lot of boring work.

Easier to just speculate on a wide area of interesting topics while you wait for people to do the real work.

Looks like I leaned so far toward the cynical interpretation that I fell over.


My hope is that if the spun-off Center for Applied Rationality takes off, we can all get out of the business of "Trying to explain to people why this is an urgent problem that needs funding" and get back to "Actually researching the problem." It does take more than 2 people to actually research the problem (we know, we tried it with two people), and up until just recently there wasn't a very obvious path to how to find and fund 3 or more. Of course we've also produced various think-tank-ish analytic papers along that trajectory, whose value is not zero, but actual serious end-of-the-world basic-math research has always required more funding and more attention than we had.

All I can say is - it's starting to look like it did work (the "spend years convincing people to fund it" path) and it's not at all obvious even in retrospect what else we could've done.


If that's an honest question: http://singularity.org/what-we-do/

All institutions lie on an axis: on the left-hand side, (i) those that do direct business and support short-term incremental innovation; on the right-hand side, (ii) abstract research institutes supporting long-term paradigm shifts in innovation.

Both serve the same ultimate purpose.

Sorry for the grammar; I'm tired.


Totally agree. Ambitious is not the same as pompous; people forget that.

For example, compare "We are building a social platform that will change the way the upper class communicates: a chat site exclusively for the rich" with "We are building technology that removes the need for cars and allows people to get around faster and more safely."


I think "build a recursively self-improving Artificial Intelligence WITHOUT a stable, specifiable goal system so that it can improve into a superintelligence and do world optimization" is more frightening, even if less ambitious :)


After PG's response, chances are you'd be filed under "unconvincingly pitched frighteningly ambitious ideas".



