
One reason blogs are of lower scientific quality -- anyone can make a series of large, unresearched claims, cherry-picking a few examples to support a broad point.

Good journals do have peer review, and they wouldn't have accepted this.

Also, one big disadvantage of blogs -- depending on where they're hosted, they often don't last long, and point 4 (easy to edit) is a blessing and a curse, particularly if people edit things without telling you they did it.

Of course, blogs have their place, and I keep one myself; they're great for breaking news, explaining things in greater depth, sharing, and understanding. And journals could learn from blogs / arXiv (make it easier to issue corrections, allow discussions).



Getting a research paper published sounds so meritorious, no matter how idiotic or trivial it is. Take, for example, the Princeton 'research' from 2014 that predicted Facebook would lose 80% of its users by 2017 [1]. The evidence? Google Trends data, which has near-zero correlation with Facebook's DAUs. The research not only got published, but also reached a wider audience after major media websites covered it.

Had it been a blog post, I don't think anyone would have taken more than a cursory look at the analysis. Yes, anyone can fit data points and reach a conclusion with blogs, but isn't academic research fraught with similar problems already?

[1]: https://www.theguardian.com/technology/2014/jan/22/facebook-...
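
For what it's worth, the kind of mismatch being described is easy to check for yourself. A minimal sketch in Python, using made-up placeholder numbers since the actual Trends and DAU series aren't reproduced here:

    # Hypothetical sanity check: does Google Trends search interest actually
    # track Facebook's daily active users? The numbers below are placeholders,
    # not the real series.
    import numpy as np

    trends_index = [100, 85, 70, 60, 52]      # "facebook" search interest by year (illustrative)
    dau_millions = [400, 500, 620, 750, 890]  # reported DAUs over the same years (illustrative)

    # Pearson correlation between the two series
    r = np.corrcoef(trends_index, dau_millions)[0, 1]
    print(f"correlation: {r:.2f}")
    # Search interest falling while DAUs rise gives a correlation near -1,
    # i.e. search volume is a poor proxy for actual usage.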


You picked a bad example. The actual source article is published at... arxiv. From an academic perspective it is worth exactly as much as a blog.

Academic literature has many problems but it is way above the blogosphere in that regard. And media coverage is going to suck no matter where stuff is published...


> From an academic perspective it is worth exactly as much as a blog.

From the perspective of peer review, yes. But per Google Scholar that paper has been cited 66 times, including in many articles that are in at least somewhat reputable journals (published via Elsevier, IEEE, Springer, etc.) and apparently two books [0]. So I think it's pretty clear that being a "scientific paper from Princeton" was significant reputation-wise, regardless of whether or not it was peer reviewed.

[0] https://scholar.google.com/scholar?q=Epidemiological%20model...

I was also skeptical about the citations, so I checked the two with freely available PDFs to confirm they actually cite this paper; they do.

[1] https://pdfs.semanticscholar.org/4ebb/b654c59aff454ad97301e6...

[2] https://arxiv.org/pdf/1509.07805.pdf


Right, but it's not a scientific paper. It's a problem when people trust something just because it came out of Princeton, but the problem you're identifying isn't one with scientific papers (and the citations you've identified are places where they're using the Cannarella paper as an example of people being interested in applying epidemiology to social networks, not citing the research they did as accurate or important.)


The majority of the papers citing it that I checked (including the two you linked yourself) make the citation for the idea of the method (using epidemiological models in the context of social media), not for the results.


several of my blog posts have had high google scholar citation counts; doesn't make them scientific articles, though. still just my personal opinions.


> Had it been a blog post, I don't think anyone would have taken more than a cursory look at the analysis. Yes, anyone can fit data points and reach a conclusion with blogs, but isn't academic research fraught with similar problems already?

Some papers are cited not because they are well written, but simply because they covered a topic which might not have been covered by other researchers. Literature review is a fundamental part of a paper.


I have no idea why it's so meritorious. I know people who are published and don't even know what the paper their name is on is about, both at the outset and at the twilight of their academic careers.

I also know a guy who published preliminary results and musings on his blog, and was then refused publication because it was "unoriginal".

Journal editors seem to not be familiar with the subject matter either, given what does and doesn't get published - why only ever positive results?

Unless scientific publication changes completely, science as we know it will die. It is already becoming vanity, egoism, orthodoxy. Why publish? So you can get better pay later in the private sector.


Here are some of my thoughts after a quick scan:

1. George Box has a very famous quote: "All models are wrong, but some are useful."

2. Even if the model applies correctly to MySpace, that does not mean it is correct for Facebook. The common term for this is survivorship/selection bias. The only way to validate the model is to run it through a large number of cases. There is more than one active social network: can the authors predict outcomes for all of them, and for the defunct ones too? Here is a possible list: https://en.wikipedia.org/wiki/Social_networking_service (a rough sketch of this kind of model fit follows after this list).

3. Citing a paper does not mean agreeing with it. Some papers even criticize each other. The citing authors may accept the idea of Facebook cooling down, but not within a couple of years (i.e. with different parameters). If you want to know whether they all agree or not, survey them. Like I said, you (and probably I) are likely subject to survivorship bias.

4. There is at least one mistake here: "discussed in Section ?? [3],". So, was it well proofread?
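
To make point 2 concrete, here is a rough sketch of the kind of SIR-style adoption model being fit, where joining spreads by contact with users and abandonment spreads by contact with ex-users. This is my own simplified version with made-up parameters, not the authors' actual code:

    # Simplified SIR-style "infectious recovery" model of social network adoption.
    # All parameters and initial conditions are made up for illustration.
    import numpy as np

    def simulate(beta=0.4, nu=0.3, days=300, dt=1.0, N=1000.0):
        S, I, R = N - 2.0, 1.0, 1.0       # susceptible, active users, ex-users
        active = []
        for _ in range(int(days / dt)):
            join  = beta * S * I / N      # adoption through contact with active users
            leave = nu * I * R / N        # abandonment through contact with ex-users
            S += -join * dt
            I += (join - leave) * dt
            R += leave * dt
            active.append(I)
        return np.array(active)

    curve = simulate()
    print("peak active users:", round(curve.max()), "on day", int(curve.argmax()))
    # The catch, per point 2: a curve like this can be fit to MySpace's rise and
    # fall after the fact, but that alone doesn't validate extrapolating it to
    # Facebook or any other network.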


You missed a point. A research paper ensures that the right methodology was followed to arrive at a result; it provides a sufficient, but not a necessary, guarantee on the result itself. Besides, a research paper also seeks to state under what conditions this particular result was achieved. The peer review process - as flawed as it may be - ensures that at least three other people who understand the topic have gone over it and given their vote of approval.


This seems a bit idealistic, or perhaps I'm too cynical. Reproduction rates on soft-science papers are worse than should be expected of a junior high chemistry class.

Peer review, looking at it from the outside as a layman, appears to just be an enormous rubber stamp. I cannot believe that academics have time to rigorously go over the mountains of pages of dreck that make up most research papers, or even read the whole thing thoroughly, much less double-check the numbers. Not when they have a stack of papers to review a foot high, another two feet of grants to write, and their own research to do, not to mention what little teaching they haven't farmed out to grad students.


>Good journals do have peer review, and they wouldn't have accepted this.

Most peer reviewers never get to see the data behind the article.


When I submitted a paper to AAAI, there were dire warnings that the paper would be rejected if I included links to any supplemental materials (such as code or data), because this would compromise blind review.

The conference software they were using had a sketchy form for uploading supplementary data, but it appeared that they expected it to be used for small data tables or something. It certainly wasn't going to be an option to upload 12 GB of data into their conference software; they mentioned nothing about code, particularly how to specify its dependencies and the computing environment it needed to run in; and it also certainly wasn't going to appear in any form that was convenient to review if I did.

How could you submit code to blind review anyway? Do you make alternate versions of all the dependencies that don't credit any authors that overlap with the authors of the paper?

In short, because of blind review, I was forbidden from doing anything that would make my paper reproducible at review time.


You are conflating reproducibility and repeatability. Repeatability is the ability to repeat your exact experiment under the same conditions, whereas reproducibility refers to other people being able to reproduce your experiment under similar conditions and obtain congruent results.

Of course, both are needed for great science. Nonetheless, your paper alone should provide a good enough description of the conditions and methods you used for the paper to be reproducible.

In fact, it can easily be argued that the ability to just run your code, instead of re-implementing it according to your description, is actually detrimental. To see how, consider that a bug in your code, re-used by other researchers, can easily lead to multiple derivative works finding entirely wrong results. In contrast, if those other researchers re-implemented your method, there would be a much lower probability of them making the same mistake you did, leading to incongruent results and hence raising alarms that would probably lead to the discovery of your bug. Although re-implementing your algorithms is significantly more work, the overall quality of our research would benefit from doing so...


> Nonetheless, your paper alone should provide a good enough description of the conditions and methods you used for the paper to be reproducible.

Too bad there's an 8-page limit, then.


8 pages now? It used to be 6. Anyway, 8 pages can fit a lot of explanation/discussion. Also, keep in mind that AAAI is a conference, where you are supposed to present promising research that you are still edging out. Finished works are supposed to be presented in journals, where space constraints are typically more relaxed.

On a personal note: I'm sick of smart-ass reviewers, unprofessional researchers, over-selling of results (to put it mildly), and so on. I think the whole system is so corrupted that I just quit research in despair. Unfortunately, I don't see how blogs and/or just "publish the code" would magically solve all those problems.

Anyway, publishing your code is certainly a good thing to do, so I applaud and encourage you to keep doing it!


> Also, keep in mind that AAAI is a conference, where you are supposed to present promising research that you are still edging out. Finished works are supposed to be presented in journals, where space constraints are typically more relaxed.

This is a dated view of AI research.

Promising research appears in workshops, blogs, and/or arXiv. Finished research appears in conferences. Nobody's sure what journals are for.


Talk to your advisor about that. Everybody is pretty sure that journals are there to support your academic CV: good luck getting postdoc positions without Q1s under your belt.

Of course, if you are at one of the Ivy League universities this may be different. Otherwise... yeah, you may feel research moves too fast for journals, but curricular evaluation practices move even slower.


>Talk to your advisor about that. Everybody is pretty sure that journals are there to support your academic CV: good luck getting postdoc positions without Q1s under your belt.

I agree with GP. The view that conference publications are less important than journal papers is inaccurate; it depends on your field.

In my field, it's as you describe. Conference papers are not polished, and journal publications are what matters for your academic career.

For many of my peers in some disciplines in CS, it was the opposite. Getting into a highly regarded conference was much more valued than publishing in a journal.

Then I found this was not limited to CS.

It really just depends on your discipline's culture.


> depending on where you host it, they often don't last long

I wrote my bachelor's thesis last semester. You wouldn't believe how many papers I needed archive.org for. By the end of it, I made a small donation and recommended that the company do the same.

Blog posts might not be better, but hosted papers aren't that stable either.


> anyone

I think this is the key point here, which the OP seems to ignore completely: I sort of agree with everything the OP states, but he also forgets some key aspects of blogging. For instance, anyone can post whatever they like, even on subjects they really don't know a lot about. Also, not every blog post gets reviews from all sides; it might only get 'yes, good' replies originating solely from confirmation bias, and the lack of any criticism doesn't make everything right -- it may, for instance, just indicate that the critics didn't find their way to the blog post.


There's a flip side to that "anyone": professional scientists are under pressure to publish long scientific-looking articles, and their fellow reviewers have almost no motivation to spend any significant time reviewing (no pay, no nothing, just a small non-quantifiable reputation penalty for declining to review).

Whereas most scientific bloggers post about things they are interested in and therefore feel they have something to say about.


> professional scientists are under pressure to publish long scientific-looking articles, and their fellow reviewers have almost no motivation to spend any significant time reviewing

You have a point, but of all the scientists I know, there's not one to whom this applies. It might depend on the field, though, and I'm not denying there's a bunch of impossible-to-reproduce crap published and rotten apples everywhere. But what I see (in fundamental research, which is usually not as publicly visible as other kinds) doesn't come near what you describe, even though it's anecdotal of course. Yes, there is pressure to publish, but their papers aren't 'scientific-looking', they're properly scientific, and they are just the length needed to describe the findings on the subject. And there is an abundance of motivation for reviewing, mostly inspired by wanting to make sure everything is as correct as possible, for the sake of research.


Thanks for the balancing point. My experience describes applied research in such "hot topics" as computer vision and machine learning, where many people, myself included, are more interested in getting the technology to work and applying it in real commercial projects, and where paper publishing may seem like a price to pay for the research grants subsidizing your R&D.

These grants are aimed at building working technology and strengthening the national economy, so there's nothing wrong with the approach per se, but when your reviewers share your values they have little motivation to find every possible mistake in your paper, at least at second- or third-rate journals (which are still indexed in WoS, so are perfectly sufficient for the funding agency).


The value of content depends on the collective intelligence of the network around it.

Good networks concentrate intelligence by providing feedback and support that helps the network converge on useful, original insight.

Bad/malicious networks destroy it with noise and false information, creating a feedback network that suppresses reality-based original insight.

The process of blogging is irrelevant. So is the process of peer review.

It's the quality of the networks around each area of interest that defines the real value, not the process.

E.g. peer review works well when it's part of a high quality network. When it isn't, it's no better than random posting.


Do not underestimate the effect of tooling on the shape of a community.

Free and gated idea exchange create completely different communities, with completely different problems and advantages.



