I think the basic premise is correct, that suffering is by far the most important determinant in where utilitarianism takes us in the long term. Adding the good has far fewer world-changing requirements and consequences than removing the bad, which might say something about the raw deal that evolution hands to all individual entities. A short sketch is here, the pertinent part reproduced below:
Suffering is not only human, however. The natural world from which we evolved continues to be as bloody, terrible, and rife with disease as it ever was. Higher animal species are certainly just as capable of experiencing anguish and pain as are we humans, and the same is true far further down into the lower orders of life than we'd like to think is the case. We ourselves are responsible for inflicting great suffering upon animals as we harvest them for protein - an industry that is now entirely unnecessary given the technologies that exist today. We do not need to farm animals to live: the engineering of agriculture has seen to that. The future of paradise engineering could, were we so minded, start very soon with an end to the farming and harvesting of animals. That would be followed by a growing control over all wild animal populations, starting with the lesser numbers of larger species, in order to provide them with the same absolute control of health and aging that will emerge in human medicine. Taken to its conclusions, this also means stepping in to remove the normal course of predator-prey relationships, as well as manage population size by controlling births in the absence of aging, disease, and predation.
Removing suffering from the animal world is a project of massive scope: where is the line drawn? At what point is a lower species determined to be a form of biological machinery without the capacity to suffer? Ants, perhaps? Even with ants as a dividing line, consider the types of technology required, and the level of effort needed to distribute the net of medicine and control across every living thing in every ecosystem. Or consider for a moment the level of technological intervention required to ensure a sea full of fish that do not prey upon one another, and that are all individually maintained in good health indefinitely, able to have fulfilling lives insofar as that is possible for fish. General artificial intelligences and robust molecular manufacturing technologies, creating self-replicating machinery to live alongside and inside every living individual in a vast network of oversight and enhancement, might be the least of what is required.
At some point, and especially in the control of predators, the animal world will become so very managed that we will in essence be curating a park, creating animals for the sake of creating animals, simply because they existed in the past - the conservative impulse in human nature that sees us trying to turn back any number of tides in the changing world. It seems clear that the terrible and largely hidden suffering of the animal world must be addressed, but why should we follow this path of maintaining what is? What good comes from creating limited beings for our own amusement, when that same impulse could go towards creating intelligences with a capacity equal to or greater than our own? Creating animals, lesser and limited entities that will be entirely dependent on us, to be used as little more than scenery, seems a form of evil in a world in which better can be done.
Given this, my suspicion is that when it comes to the animal kingdom, the distant future of paradise engineering will have much in common with the goals of past religious movements and today's environmentalist nihilists, those who preach ethical extinction as the best way to end suffering. Animals will slowly vanish, their patterns recorded, but no longer used. If animals are needed as a part of the world in order to make the human descendants of the era feel better, then that need can be filled through simulations, unfeeling machinery that plays the role well enough for our needs. The resources presently used by that part of a living biosphere will instead be directed to other projects.
Utopia. Our own bodies have more bacteria than cells in a constant state of symbiosis and infighting. It seems like an optimization algorithm to keep life going. Suffering as a needed reinforcement feedback is deeply woven into the fabric of nature; without suffering, living beings wouldn't survive a day. We can dream of something better, isolate ourselves, but that won't change the ongoing optimization algorithm that is deep inside us and nature.
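As a toy illustration of that framing - my own construction, with made-up hazard rates, nothing biological about it - here is a sketch in which 'pain' is just a withdrawal reflex, and organisms lacking it do not last long:

```python
import random

# Toy model: 'pain' as a reflex that makes an organism withdraw from
# damage. All rates are invented for illustration.
def lifespan_days(feels_pain, hazard_rate=0.3, max_days=365):
    health = 3
    for day in range(1, max_days + 1):
        if random.random() < hazard_rate:              # encounters a hazard
            if feels_pain and random.random() < 0.9:   # reflex: withdraws in time
                continue
            health -= 1                                # no signal: takes the damage
            if health == 0:
                return day
    return max_days

for trait in (True, False):
    avg = sum(lifespan_days(trait) for _ in range(1000)) / 1000
    print(f"feels_pain={trait}: average lifespan {avg:.0f} days")
```

Selection would obviously favor the first trait; the discomfort is the mechanism, not a side effect.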
Lots of words, and we don't even need to eat animals. We'd be better off breeding fruits to grow in a wider range of climates. Veggies are second best, but fruitarianism is lit. I've been 100% plant-based for 18 months and 80%+ fruit-based for 6+, 100% when income allows.
If you look at me now I'm skinny, but that's because I'm unable to earn income for calories of energy, so I've taken to fasting between infrequent meals. I still work out 30+ min daily with heavy dumbbells and body-weight barbell squats + deadlifts. But as soon as I can gain employment I'll jump on a bulking routine and document it... Jeff Berwick has done so, to a small and informal extent.
The mental benefits of plant-based eating are indescribable and immensely unprofitable for all but the self.
It is interesting to compare this interpretation of Aristotle on happiness with the transhumanist take on happiness, pleasure, and suffering known as the Hedonistic Imperative. When doing so, Aristotle starts to look a lot closer to Stoicism, in the sense of accepting the human condition as it is and planning a strategy within established limits that cannot be changed. The spectrum of philosophy expands, with most of the ancients sitting quite close to one another in a much larger state space than they had access to.
A lens to look at this through is the struggle for control over the population. The competing factions in the Middle Ages were very different from those now - so the manifestations of authoritarianism naturally differed in the Middle Ages. Large numbers of holy days (holidays) went hand in hand with simony and temporal power emanating from Rome. The church was just as rapacious and self-interested as the lords who claimed ownership over the peasantry.
So: then, a fractious feudal nobility, the ruler, and the church; now, fractious corporate powers and a more unified state, with the church faded to irrelevance in temporal matters. The only constant is the undiminished desire to order the lives of others in order to farm them for profit, to be the stationary bandit.
Even if matters were the same, however, the march of technology would still make the present a far better place to live than the past. It is technology, not politics, that is the greatest driver of quality of life.
This manages to get lost in its own trees. From a reductionist perspective:
- Intelligence greater than human is possible
- Intelligence is the operation of a machine; it can be reverse engineered
- Intelligence can be built
- Better intelligences will be better at improving the state of the art in building better, more cost-effective intelligences
An intelligence explosion on some timescale will result the moment you can emulate a human brain, given a continued increase in processing power per unit cost. Massive parallelism to start with, followed by some process to produce smarter intelligences.
All arguments against this sound somewhat silly, as they have to refute one of the hard-to-refute points above. Do we live in a universe in which, somehow, we can't emulate humans in silico, or we can't advance any further towards the limits of computation, or N intelligences of capacity X at building better intelligences cannot reverse engineer and tinker their way to an intelligence of capacity X+1 at that same task? All of these seem pretty unlikely on the face of it.
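For concreteness, here is a toy numerical model of that loop - entirely my own assumptions, not anything established: each generation doubles capability, works at a rate proportional to its capability, and pays a design cost of capability^k to produce the next generation. Everything hinges on the exponent k:

```python
# Hypothetical toy model of recursive self-improvement; nothing here
# is empirical. A generation of capability C works at rate C, and
# designing the next (2x more capable) generation costs C**k effort,
# so each step takes C**k / C = C**(k-1) time units. k is the crux.

def generations_within(time_budget, k, max_gens=500):
    capability, elapsed, gens = 1.0, 0.0, 0
    while elapsed < time_budget and gens < max_gens:
        elapsed += capability ** (k - 1)  # time to design the next generation
        capability *= 2                   # assumed per-generation gain
        gens += 1
    return gens, elapsed

for k in (0.5, 1.0, 1.5):
    gens, t = generations_within(100.0, k)
    print(f"k={k}: {gens} generations after {t:.1f} time units")
```

With k < 1 the generations arrive faster and faster (only the max_gens cap stops the run: an explosion); at k = 1 you get steady doublings; with k > 1 each generation takes longer than the last, which is the slowdown scenario raised in the replies below.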
Part of the problem with trying to formalize this argument is that intelligence is woefully underdefined. There are plenty of initially reasonable-sounding definitions that don't necessarily lead to the ability to improve the state of the art w.r.t. 'better' intelligence.
For instance, much of modern machine learning produces things that, from a black-box perspective, are indistinguishable from an intelligent agent; however, it would be absurd to task AlphaGo with developing a better Go-playing bot.
There are plenty of scenarios that result in no intelligence explosion, i.e. ones where the difficulty of creating the next generation increases faster than the gains in intelligence. Some components of intelligence trade off against one another: speed and quality are the prototypical examples. There are points where assumptions must be made and backtracking is very costly. The space of different intelligences is non-concave and has many non-linearities; exploring it and communicating the results starts to hit the limits of the speed of light.
I'm sure there are other potential limitations, they aren't hard to come up with.
Why isn't it possible (or likely, even) that the difficulty of constructing capacity X+1 grows faster than the +1 capacity? Self-improvement would slow exponentially when it takes three times the resources/computation/whatever to construct something that's twice as good at self-improving, for example.
You're arguing that it's not an exponential that continues to the right indefinitely.
But what if it follows a sigmoid instead, but the plateau is much higher than the current level?
This is what punctuated equilibria look like -- even if the 'new thing' isn't actually a singularity, it may be enormously disruptive and completely displace whatever came before.
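For concreteness, the standard logistic curve makes the point, with generic parameters: L the plateau, k the growth rate, t_0 the midpoint. While capability sits far below L, the curve is indistinguishable from an exponential:

```latex
C(t) = \frac{L}{1 + e^{-k(t - t_0)}},
\qquad
C(t) \approx L\, e^{k(t - t_0)} \quad \text{while } C(t) \ll L .
```

So every observation consistent with "exponential takeoff" is equally consistent with a sigmoid whose plateau merely sits far above us.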
> This is what punctuated equilibria look like -- even if the 'new thing' isn't actually a singularity, it may be enormously disruptive and completely displace whatever came before.
Right, and this is a much more plausible claim than one of "singularity".
> You're arguing that it's not an exponential that continues to the right indefinitely.
Kalminer's post neither assumes that nor claims it to be so.
The argument is not that an exponential growth is impossible, it is that it cannot be assumed from what we currently know, because we don't know how the problem of bootstrapping intelligence scales. In fact, the idea that exponential growth would occur over any period of time, let alone indefinitely, is speculative - which is not a claim that it is impossible.
I think the point he is trying to make is that there are boundaries to intelligence. I think of it this way - no matter how smart an AI is, it still would take 4.3 years to reach Alpha Centauri going at the speed of light. An AI still needs to run experiments, collect evidence, conjure hypotheses, reach consensus, etc. Is this really that far more efficient than what humans do today?
But we, humans, aren't going at anything like the speed of light. What if we tweaked our DNA to produce human beings with the working memory capacity of 50 items instead of the normal 7-ish [1]? One such researcher would be able to work faster, on more problems at once, and to consider more evidence and facts. The next bottleneck for that person, of course, would be the input/output capacity (reading, writing, typing, communicating), but even with those limitations, I bet they would be a lot more efficient than the average "normal" human. The question is - would you call such a person more "intelligent"?
Or we get more humans, and then it's a coordination problem, right? I mean, there is a point in comparing individual vs collective intelligence. This is a bit like communist systems: they work in theory because you get to plan the economy centrally, but in practice more chaotic (unplanned) systems do better (compare the growth of capitalist countries vs communist ones).
Sure there are boundaries. But the limit of these boundaries may be way above what humans are doing. Computers, unlike humans, aren't limited to the domain of the physical. An AI may well be able to meaningfully organize (read: hack) all of the world's computers because it can self-replicate, increase computing power, communicate very complex information very fast, etc. We're limited by the output of fingers and vocal cords, by the size of our brains, by imprecise and slow memory formation and recall, by the input we can get from mostly eyes and ears; computers aren't.
An AI may well be able to reach consensus on millions of hypotheses per second.
> Is this really that far more efficient than what humans do today?
A major managerial problem with humans is sorting out our irrational emotional biases and keeping everyone working on something resembling the appointed task. Can you imagine the productivity gain if that problem suddenly went away?
Also, something that is very overlooked IMO is that the engineering process does not need to happen in silico: even though I am not an advocate of using it, bioengineering is a possibility.
Essentially, the question asked is just the setup to watch someone approach a novel experience. If the candidate is not in a natural environment, i.e. working with a comfortable IDE and with access to Stack Overflow, it is kind of pointless. You want to see what they do in a situation fairly close to the mundane daily example of "how do I do this new thing."
So don't ask standard questions related to business challenges that have answers online: find novel exercises. Write a sort function that sorts everything except for at least two items. Write a linked list node that sabotages the user in the most subtle way possible. And so on. Fortunately these are easy to create, as there is an infinite space of engineering exercises that no one would carry out in reality because they are self-defeating, or pointlessly destructive, or otherwise non-productive. But they still serve to illustrate how someone thinks their way around a new concept, and that is the important thing.
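For instance, one possible take on the first exercise, sketched in Python; the function name and the tie-handling details are my own choices, and "deliberately useless" is the point:

```python
import random

def almost_sort(items):
    """Return a copy of items sorted everywhere except for at least
    two elements (one swapped pair of unequal neighbours).

    If all elements are equal there is nothing to de-sort, so the
    (trivially sorted) list comes back untouched.
    """
    result = sorted(items)
    # Positions where neighbours differ; swapping any such pair
    # breaks the ordering without disturbing anything else.
    candidates = [i for i in range(len(result) - 1)
                  if result[i] != result[i + 1]]
    if candidates:
        i = random.choice(candidates)
        result[i], result[i + 1] = result[i + 1], result[i]
    return result

print(almost_sort([5, 3, 8, 1, 9, 2]))  # e.g. [1, 2, 3, 5, 9, 8]
```

The interesting part is watching a candidate notice the edge cases: lists shorter than two items, and lists where every element is equal.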
I feel that anyone who argues that some approach other than running human brain emulations, and then reverse engineering or speculatively modifying them, is the most likely way to get to AGI has a pretty steep hill to climb in order to justify that point of view.
Nothing else that is going on now, or even on the agenda, or even foreseeable, offers a plausible, definitive plan to get to AGI. Whereas brain emulation is clearly going to achieve that goal fairly shortly after the maps are good enough and the computational capacity large enough, and the experimentation that follows is a far more reliable way to determine the underpinnings of intelligence than present efforts at de novo construction.
I disagree. It's too expensive to run a low-level brain sim. In the meantime, deep-learning-based AI has achieved superhuman or near-human results in many tasks, such as image recognition, voice recognition, translation, car driving, and Go.
The AGI will be a reinforcement learning agent, as it will need to be able to perceive and act in the physical world. Thus the path to AGI is the path of RL. The most essential piece in RL will be the development of environment simulators. AlphaGo's environment was a trivial simulator - simple rules in a simple world - but we need real-world simulators in order for AI agents to learn to act. Fortunately, simulation is almost the same as gaming, and there is huge interest in it both for humans and AI, so it will be developed fast.
So instead of simulating the brain, simulate the world (imperfectly) and run deep neural net based RL to learn to act on top of it.
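A minimal sketch of that shape, with hypothetical names throughout: the simulator exposes a reset/step interface (the convention popularized by OpenAI Gym), and a tabular Q-learner stands in for the deep-neural-net agent. The agent only ever experiences the simulated world:

```python
import random

class ToyWorldSim:
    """Stand-in for an (imperfect) world simulator. The noise term
    models the simulator only approximating real dynamics."""
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        noise = random.choice((-1, 0, 0, 1))       # imperfect dynamics
        self.state = max(0, min(10, self.state + action + noise))
        done = self.state == 10
        return self.state, -1.0, done              # -1 per step until the goal

def train(episodes=500, epsilon=0.1, alpha=0.5, gamma=0.95):
    # Tabular Q-learning as a tiny stand-in for deep RL.
    env, actions = ToyWorldSim(), (-1, +1)
    q = {(s, a): 0.0 for s in range(11) for a in actions}
    for _ in range(episodes):
        s = env.reset()
        for _ in range(200):                       # cap episode length
            a = (random.choice(actions) if random.random() < epsilon
                 else max(actions, key=lambda b: q[(s, b)]))
            s2, r, done = env.step(a)
            target = r + (0.0 if done else gamma * max(q[(s2, b)] for b in actions))
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
            if done:
                break
    return {s: max(actions, key=lambda b: q[(s, b)]) for s in range(10)}

print(train())  # learned policy: +1 (move toward the goal) in every state
```

Swapping ToyWorldSim for a richer simulator changes nothing about the agent's side of the loop, which is the appeal of the approach.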
The brain has on the order of 10^14 (100 trillion) synapses. Current-day neural nets barely reach a hundred million connections, with very few exceptions. Then, besides compute, there is data movement - currently the bottleneck in AI is moving data around, not computing. Imagine the interconnect for a brain-sized neural net.
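Some back-of-the-envelope arithmetic makes the point; the byte width, update rate, and interconnect bandwidth are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope only; every constant here is an assumption.
synapses = 1e14                    # ~10^14 synapses in a human brain
bytes_per_weight = 4               # assume one fp32 weight per synapse
weight_bytes = synapses * bytes_per_weight
print(f"weights alone: {weight_bytes / 1e12:.0f} TB")        # 400 TB

# Suppose every weight is touched ~100 times per second (a crude
# stand-in for neural update rates):
traffic = weight_bytes * 100                                  # bytes/s
print(f"required traffic: {traffic / 1e12:.0f} TB/s")         # 40,000 TB/s
print(f"over a 1 TB/s interconnect, one second of 'brain time' "
      f"takes {traffic / 1e12:.0f} seconds")
```

Even granting generous sparsity and compression, the orders-of-magnitude gap sits in the data movement, not the arithmetic.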
I'm aiming for General Intelligence Augmentation, rather than AGI, but it could be adapted.
I think the mistake we keep making is always developing AI systems that need external programmers/maintainers. If we get away from that mindset, I think we will be more successful, even if it is not via my particular vision.
Some humans are likely to become more powerful than other humans some day. Most such humans will by default develop instrumental subgoals that conflict with other human interests. This could have catastrophic consequences. If we don't actively work on control mechanisms and safety of human behavior, this will most likely pose an existential risk to humanity.
Compare and contrast.
I think people make too much of the wrong things in the matter of general artificial intelligence.
> Some humans are likely to become more powerful than other humans some day. Most such humans will by default develop instrumental subgoals that conflict with other human interests.
This has been true multiple times throughout history. Every time, some negative feedback kicks in, changing the rules of the game enough for a certain balance to exist. If anything, it seems this balance tends to improve every time - although it might be too early to tell. However, it does seem to me that the difference in power among humans (no matter how power is defined) is never big enough to make "other human interests" irrelevant.
Indigenous peoples have always suffered this problem. The difference with superintelligent AI is that (unless the problems are addressed) it will make the global elite suffer, too.
Worrying about a lack of variability in the human condition, as the author does here, seems like a strange fear in the face of the future ability to edit, copy, and amend the function of the mind. Once human minds are emulated, those capabilities follow, and emulation should happen fairly rapidly after the computing power to do it emerges, since there are already well-established research groups working on the predecessor simulations.
"The pace of progress today bumps up against the limits imposed by organization of efforts, in that it takes a few years for humans to digest new information, talk to one another about it, decide on a course of action, gather together a group, raise funds, and start working. There is no necessary reason for any of these parts of the process to take more than a few seconds, however: consider a world in which human minds run far faster, because they run on something other than biological neurons, because they run hundreds of distinct streams of consciousness simultaneously, and because they are augmented by forms of artificial intelligence that take on some of the cognitive load for task assignment and decision making.
"It is, frankly, hard to even speculate about the potential forms taken by society in such an environment. Technology clearly drives human organizational strategies and struggles, for all that the minds of prehistory, of Ur, and of our modern times are all the same. In the past, evolution of society was largely shaped by the ability to communicate over distances and by the size of the population. In the future it will be shaped to a far greater extent by the way in which intelligences think and feel, and the way in which their minds depart from the present standard for human nature. We struggle to model human action in the broadest sense of economic studies, and I suspect that this will be true for any society of minds, no matter how capable they are. The complexity of the group always exceeds the capabilities of any individual or research effort within that group. We can do little more than point out incentives and suggest trends that are likely to emerge from those incentives."
For my money, the greater fears lie in what factions of humanity and its descendants might choose to do about pleasure and suffering, when handed the controls over the box.
"Altering the operation of our brains to induce pleasure without the need to undertake as much work was a fairly early innovation - see alchohol, etc. The point of much of technological progress is to achieve better results with less effort. The logical end of that line is wireheading or a life science equivalent yet to be designed: an augmentation in the brain, a button that you push, and the system causes you to feel pleasure whenever you want. There are numerous other alternatives in the same technological genre that seem plausible, such as always-on happiness, regardless of circumstances. This sort of thing makes many people nervous, and, sadly, rarely for useful reasons. That said, I suspect that even the most self-controlled of individuals has sufficient self-doubt to be wary of the advent of implementations of wireheading that might be, say, a hundred times better, cheaper, and safer than today's most influential mood-altering drugs.
"To my eyes this is actually the less interesting and less consequential of the two sides of the hedonistic imperative. It is the elimination of suffering, not the gaining of pleasure, that, when taken to its conclusions, will lead to a world and a humanity changed so radically as to be near unrecognizable."
" a world in which human minds run far faster, because they run on something other than biological neurons, because they run hundreds of distinct streams of consciousness simultaneously, and because they are augmented by forms of artificial intelligence that take on some of the cognitive load for task assignment and decision making."
Isn't that what is happening right now with the internet?
"human minds" would then be social circles or any other groups of entities.
Powered by biological neurons, by regulations, by software, by hardware, by all of them together. Running "hundreds of distinct streams of consciousness simultaneously" inside the human participants' skulls and, let's define it broadly, other kinds of 'consciousness' inside of software and hardware.
It's just a zoomed out view.
For myself, I suspect that raw 'happiness' is ultimately not sufficient, and that it instead shares a common (and much more complex) cause with 'satisfaction', which will be much harder to hack because it's counterinductive (i.e. the awareness that you're hacking your 'satisfaction' criterion is itself dissatisfying).
Run it in AWS. They keep their IP ranges comparatively clean.
As for other hosting services, you may as well not bother. I haven't found another one yet that is reliable enough at keeping deliverability from their IP ranges in good order.
Also make sure you set up SPF and DKIM right from the outset.
I ran into problems with an IP address I had owned for 15 years. Clean IPs will help, but they don't solve the problem completely. The real nightmare is the emails that just go missing - they don't even end up in the spam folder. If you are running some sort of mailing list it doesn't really matter, but if you are sending important transactional emails then it really does.
Apart from SPF and DKIM, make sure you also set your reverse DNS name and set up DMARC.
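A sketch of what checking that setup might look like, assuming the third-party dnspython package and a hypothetical example.com domain; the record values in the comments show typical shapes, not exact prescriptions:

```python
# Assumes dnspython (pip install dnspython); the domain and DKIM
# selector below are hypothetical placeholders.
import dns.resolver

DOMAIN = "example.com"
DKIM_SELECTOR = "mail"   # whatever selector your signer is configured with

def txt_records(name):
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

# SPF lives in a TXT record on the domain itself, e.g.:
#   "v=spf1 include:amazonses.com ~all"
print("SPF:  ", [r for r in txt_records(DOMAIN) if r.startswith("v=spf1")])

# DKIM public key lives at <selector>._domainkey.<domain>, e.g.:
#   "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."
print("DKIM: ", txt_records(f"{DKIM_SELECTOR}._domainkey.{DOMAIN}"))

# DMARC policy lives at _dmarc.<domain>, e.g.:
#   "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
print("DMARC:", txt_records(f"_dmarc.{DOMAIN}"))

# Reverse DNS (PTR) for the sending IP is set with your host, not in
# your zone: it should resolve to a hostname that resolves back.
```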
https://www.exratione.com/2016/06/the-hedonistic-imperative-...