
Skimming it quickly (I hadn't seen the paper before), this looks like a very good analysis of how results are reported, specifically for medical imaging benchmarks.

As is often the case with statistics, selecting a single number to report (whatever that number is) hides a lot of different behaviours. Here, they show that reporting just the mean is a poor way to present results: the confidence intervals (which the paper reconstructs in most cases) show that the models can't really be distinguished by their means.
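
To make that concrete, here is a minimal sketch (Python/NumPy, not from the paper) of attaching a bootstrap confidence interval to a mean benchmark score. The per-case Dice scores and model names below are made up for illustration; the paper's own reconstruction method may differ.

    # Minimal sketch: percentile-bootstrap 95% CIs for the mean of a
    # per-case benchmark metric (e.g. Dice scores). Illustrative data only.
    import numpy as np

    rng = np.random.default_rng(0)

    def bootstrap_ci(scores, n_boot=10_000, alpha=0.05):
        """Percentile bootstrap CI for the mean of `scores`."""
        scores = np.asarray(scores)
        idx = rng.integers(0, len(scores), size=(n_boot, len(scores)))
        means = scores[idx].mean(axis=1)
        return np.quantile(means, [alpha / 2, 1 - alpha / 2])

    # Hypothetical per-case Dice scores for two segmentation models.
    model_a = rng.normal(0.84, 0.06, size=100).clip(0, 1)
    model_b = rng.normal(0.85, 0.06, size=100).clip(0, 1)

    for name, scores in [("A", model_a), ("B", model_b)]:
        lo, hi = bootstrap_ci(scores)
        print(f"model {name}: mean={scores.mean():.3f}  95% CI=({lo:.3f}, {hi:.3f})")

    # If the intervals overlap heavily, the two means alone don't justify
    # ranking one model above the other.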




Hell, I was asked to report confidence intervals alongside average values for my BSc thesis when doing ML benchmarks, and scientists publishing results in medical fields aren't doing it...

How can something like that happen? I mean, I had a supervisor tell me "add the confidence interval to the results as well" and explain why. I guess nobody ever told them? Or they didn't care? Or it's just an honest mistake.


Is it because it's word-of-mouth and not written down in some NSF (or other organization) guidance? This seems to be the issue.


That might be, but couldn't a paper be required to include that in order to be published? It seems like important information.



