>>Most scientists aren’t trying to mislead anyone, but because they face strong incentives to present favorable results, there’s still a risk that you’ll be misled.
>>We also found evidence, once again, that researchers tend not to report negative results, an effect known as reporting bias.
>>But unfortunately, the scientific literature is not a reliable source for evaluating the success of AI in science.
>>One issue is survivorship bias. Because AI research, in the words of one researcher, has “nearly complete non-publication of negative results,” we usually only see the successes of AI in science and not the failures. But without negative results, our attempts to evaluate the impacts of AI in science typically get distorted.
While these biases will absolutely create overconfidence and wasted effort, the rapid pace of progress and the clear successes so far, such as protein folding, drug discovery, and weather forecasting, lead me to expect more significant advances, in no small part because of the massive investment of funds and time being poured into AI-based research.
Even this researcher's negative results produced learning, which is exactly why he spent his time and funds on the work, and the combined effect of millions of people searching and developing in this way will mean more genuinely valuable advances get found or built.
Whether those advances are worth the total financial and human capital being spent is another question, but I expect the answer there to be positive as well.