Global warming isn't an "ought" belief; it's an "is" belief. "Animal species are being driven extinct" is an "is" belief. An example of an "ought" belief would be, "animal species should not be driven extinct." To further clarify, "the sunset is green" is also a (false) "is" belief.
However, when I post "Global warming is occurring" and you post "global warming is not occurring", I suspect you will not get very far into exploring your disagreement before you come to a wall, below which are only "ought beliefs". And the same for most points of political disagreement. Very few political arguments online rest on questions of fact.
>However, when I post "Global warming is occurring" and you post "global warming is not occurring", I suspect you will not get very far into exploring your disagreement before you come to a wall, below which are only "ought beliefs".
Are you saying that people would change the subject because they prefer ought debates to is debates, or are you suggesting that trying to measure the temperature trend somehow depends on moral questions?
It's just the kind of discussion that is very difficult to resolve without referring to "ought beliefs".
B: "Global warming is not real."
A: "It is real. Here are peer-reviewed articles, etc. etc."
B: "That research is fraudulent. There's selection bias in choosing which papers to publish, etc."
Now, A has a bit of a conundrum. He might like to say something like "Once there are 1000 published articles, studied this way and that way by so many different people, we can trust that it's actually happening", but this is an ought belief. I struggle to imagine how A can constructively respond with an "is statement."
>"Once there are 1000 published articles, studied this way and that way by so many different people, we can trust that it's actually happening", but this is an ought belief.
You are misunderstanding what it means for something to be an "ought" belief. Your example is actually an "is" belief: it's a claim that, across cases where 1,000 published articles exist on a question, the contents are true most of the time. The "ought" version of that would be "you have a moral duty to believe anything that enough literature has been published in support of," but that's absurd; nobody thinks that's a moral rule.
The Humean is-ought distinction is specifically about separating ethical questions of value from empirical questions of fact.
Your variation, which says "most of the time the contents are true", is an is statement, but it's entirely unverifiable. Good luck providing evidence for that! And even if you did, there's a very convenient out: "It's not true in this case."
The whole thing would be short-circuited by saying "When 1000 papers agree, and almost none disagree, we should take the claims to be broadly true."
Also, see how we've gone from having a discussion about the topic at hand to impossible-to-prove factual claims about how often a group of 1,000 academic papers is correct? It doesn't seem like a very interesting discussion, at least to me.
You were saying that unverifiable statements were ought statements, by arguing that because you couldn't figure out how to defend a statement, it had to be an ought statement. The truth or verifiability of a statement has nothing to do with whether it's an is or an ought statement.
"There's a certain probability that certain things are happening. Grouping those things together and calling that group 'global warming' has no effect on the likelihood of each of those individual things happening. Can we agree to avoid the muddled concept of 'global warming', and instead discuss the probabilities of the individual effects?"
Then, to go deeper:
"We seem to be arguing under a condition of uncertainty. We can't be sure whether the evidence does or does not prove this particular effect. Thus, since in the case where this effect was happening it'd be highly-valued by humanity to stop it (and thus high value for us AIs to understand it, if we're to predict humanity's actions), there is high value for us in evaluating both sides of the logic-path right now, rather than waiting for more definitive evidence to resolve our confusion."
"So, presuming this particular effect is proven to be happening, then what would happen to the Earth if humanity kept doing what it was doing; what would humanity have to do to return the Earth's climate to homeostasis; and would such actions have positive or negative ROI over humanity's long-term utility, given all else we know about their plans for the distant future?"
"And likewise, presuming this particular effect is proven to not be happening, then what will happen to the Earth if humanity seeks to ameliorate it anyway, under the mistaken belief that it is happening? What will the costs of that be?" (and vice-versa.)
"And, as such, if humanity itself is to act under uncertainty, then what is the expected value of each course of action, given the ROIs and the current probabilities of each outcome? And will humanity actually take that optimal-expected-value path, or will it be led elsewhere by tribal/political considerations?"
Arguments about facts under uncertainty are co-operative, you see. They're not truly "arguments" at all, but rather collaborative fact-finding missions. You don't "take a side", but rather everyone takes all sides, at the same time, and works together to find certainty from uncertainty; while also making progress in navigating beyond the veil of uncertainty, such that when an uncertainty is resolved, intellectual progress can "take the shortcut" of the contingent facts already derived in the meantime.
Yes, most humans might agree that "global warming" exists. AIs might also "mostly agree", in the sense that they have high Bayesian confidence in some of the individual things that make up "global warming" being proven-real effects. But a true Bayesian-updating AI wouldn't think that "mostly proven" and "proven" are the same thing; they'd still want to evaluate the branch where the proof turned out to be false, unless the probability of that branch being the real one was so vanishingly small that its expected payoff was outweighed by the intellectual labor-costs of exploring it. (And they'd value the continued participation of the other AIs in the "collaborative fact-finding process" highly enough that even if they personally saw the probability as vanishingly unlikely, they'd explore it anyway for the sake of another AI who thought differently. After all, these AIs are all acting under the "uncertainty" of having been exposed to different subsets of the evidence. That's why they're working as an ensemble in the first place!)
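To make the "vanishingly small probability vs. labor-cost" point concrete, here is a small illustrative sketch, again with invented numbers: a toy Bayesian credence update on "the effect is real", followed by a crude check of whether the unlikely branch is still worth the effort of exploring.

```python
# Hypothetical numbers only: a toy Bayesian credence update on "the effect is real",
# followed by a crude check of whether the unlikely branch still merits exploration.

def bayes_update(prior: float, lik_if_real: float, lik_if_not: float) -> float:
    """Posterior P(real | one new piece of evidence) via Bayes' rule."""
    numerator = prior * lik_if_real
    return numerator / (numerator + (1 - prior) * lik_if_not)

posterior = 0.5  # start agnostic
for _ in range(10):  # ten pieces of evidence, each 4x likelier if the effect is real
    posterior = bayes_update(posterior, lik_if_real=0.8, lik_if_not=0.2)

p_not_real = 1 - posterior
value_if_branch_matters = 1000.0  # payoff of having reasoned out that branch in advance
exploration_cost = 5.0            # intellectual labor of exploring it now

# "Mostly proven" is not "proven": the branch is worth exploring on its own merits
# only while its expected payoff still exceeds the labor-cost of thinking it through.
print(f"P(not real) = {p_not_real:.2e}")
print("worth exploring the 'not real' branch?",
      p_not_real * value_if_branch_matters > exploration_cost)
```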
> Bjorn Lomborg is a visiting fellow at the Hoover Institution. Dr. Bjorn Lomborg is president of the Copenhagen Consensus Center and visiting professor at Copenhagen Business School. His numerous books include The Skeptical Environmentalist, Cool It, How to Spend $75 Billion to Make the World a Better Place, The Nobel Laureates’ Guide to the Smartest Targets for the World: 2016–2030, and Prioritizing Development: A Cost Benefit Analysis of the United Nation’s Sustainable Development Goals. His new book False Alarm: How Climate Change Panic Costs Us Trillions, Hurts the Poor, and Fails to Fix the Planet is forthcoming in July 2020.
Full disclosure: The Hoover Institution is a conservative think tank.
> The Hoover Institution on War, Revolution, and Peace is an American conservative public policy think tank and research institution located at Stanford University in California. It began as a library founded in 1919 by Stanford alumnus Herbert Hoover, before he became President of the United States. The library, known as the Hoover Institution Library and Archives, houses multiple archives related to Hoover, World War I, World War II, and other world-historical events. According to the 2016 Global Go To Think Tank Index Report (Think Tanks and Civil Societies Program, University of Pennsylvania), Hoover is No. 18 (of 90) in the "Top Think Tanks in the United States".[2]
> The Hoover Institution is a unit of Stanford University[3] but has its own board of overseers.[4] It is located on the campus. Its mission statement outlines its basic tenets: representative government, private enterprise, peace, personal freedom, and the safeguards of the American system.[5] The institution is generally described as conservative.[6][7][8]; Thomas W. Gilligan, a director at the Hoover, has disputed the application of political labels to the institute, saying the institution's charter is not partisan but rather tries to remind Americans to "think twice about the dangers of the hubris of centralized solutions to civic and political challenges."[9]
> "Animal species are being driven extinct" is an "is" belief.
This is stating a prediction of the future not as speculation, but as fact. Now, it's almost certainly at least partially true (at least one species will likely become extinct in the future, insofar as we are able to measure such things, which is often not entirely accurate), but nonetheless, it is unknown.
I'd like to see HN try some variation of @derefr's "I would be highly interested in watching (or maybe participating) in a forum where the rule is..." idea [1] on occasional appropriate HN threads, and see what happens. With the advent of the internet (and some other things), a very serious problem (of literally "existential risk" magnitude in my opinion) seems to have arisen in the sphere of human communication, both domestic and international. I do not know of all that many people or organizations who are studying this problem even in theory or with small trials, and not one single organization that is studying it in practice, with real people, at scale. I believe HN is an ideal place to do something like this, because most everyone here could easily understand the details, and why [2] we are doing it. If no one steps up to the plate and this phenomenon gets completely out of control (I'd say that happened long ago), leading to a major fundamental breakdown in society, what might be the ultimate consequences?
Obviously, going forward with some sort of initiative like this is the choice of HN & @dang, but I really wish we could have a discussion on the matter. Do so few people see the potential value in this idea (speaking of Orthodox Privilege)?
With respect to:
> B: "Global warming is not real." A: "It is real. Here are peer-reviewed articles, etc. etc."
...what seems not to be apparent to many people is that there is a major but unrecognized disagreement over the definition of the term "global warming", in numerous ways. It is a very semantically overloaded [3] term.
[1] I'd approach it by group-brainstorming a set of candidate "rules" worth considering, and then A/B testing those rules in designated threads to see what happens. I can think of several ideas for rules, and I imagine others could come up with many that never crossed my mind.