- Coding fRNA Comment (Georgia Tech project, Published While at Princeton).
- It is important to remember that anything you publish has a certain level of permanence (and you can be contacted about a publication 10+ years later).
- On the positive side, I think it is worth mentioning that your own desire to help people and to be the best possible scientist is important: peer review can help if you do everything you can before submission, but I recently provided additional public data and a re-analysis (on my own initiative) that I think improves the overall message of this paper. For example, please see this GitHub page, which includes PDFs for the comment text and figures.
- So, I truly believe peer review can help, but I think personal responsibility and transparency are at least as important.
- Corrigendum for 2-Word Typo in BAC Review (UMDNJ, Post-Princeton, Pre-COHBIC).
- More important than the correction itself, I think it should be emphasized that 6 months of working in a lab (especially without ever performing the experimental protocol being reviewed) is not enough time to justify writing a review.
- As 1st and corresponding author, I have issued two comments (and a corrigendum) for the COHCAP paper describing a method for analysis of DNA methylation data (City of Hope Bioinformatics Core).
- There was also a third comment that NAR decided not to publish (regarding an error with the machine used in GEO, which is described in more detail on my Google Sites page and briefly mentioned in the related blog post).
- As a 2nd author with equal contribution, I was involved in a correction regarding the 2nd table, as well as 2 comments related to imprecise use of the term "silhouette plot" (City of Hope Bioinformatics Core).
- [initiated and completed corrections to middle-author papers and deposited datasets] (City of Hope Integrative Genomics Core, Post-Michigan).
- You can see one example in this PubPeer comment (which I provide because the formal correction did not list the references with the correct methods).
I trimmed down the details above because I think the formatting on my Google Sites page is a little better, and I listed specifics for the City of Hope papers in a separate post. So, given that only two other papers were pre-COH, I thought it might be better to shift this post more towards the higher-level discussion.
We all have other factors that affect how much time we can spend on a project (such as setting aside time for personal life), and I would usually expect work responsibilities to increase over time. For example, if you are scrambling to complete your work as a graduate student, you may want to be cautious about setting goals for jobs that carry an even greater amount of responsibility.
Some people may be afraid of pointing out similar issues in previous papers. While possibly somewhat counter-intuitive, I think this can help build trust in the associated papers / researchers: if researchers are not transparent in their actions and overall experience with a set of results, that can contribute to public distrust (and development of bad habits that can become worse over time). Plus, if research is an on-going process of small steps towards an eventual solution, readers should expect each paper to acknowledge some limitations and/or unsolved problems (and a fair representation of results should help in identifying the most important areas for future research).
One relatively well-known example of the impossibility of being 100% accurate in all predictions is that Nobel Laureate Linus Pauling had a PNAS paper proposing a triple-helix structure for DNA. There was even a Retraction Watch blog post that raised the question of whether this paper should be retracted. I don't believe anyone currently cites that paper with the belief that DNA has a triple-helix (rather than a double-helix) structure. However, the time needed to correct and address mistakes should be taken into consideration for project management, and my point is that I am trying to encourage more self-regulation of corrections (since there are papers in relatively high impact journals whose main conclusion is wrong and that have not been retracted or corrected).
While it is harder to pass judgment on other people's work, I hope that I can be a good example for other people to identify issues with their own previous work. For example, one counter-argument to the claim that most published scientific findings are false is the Jager and Leek 2014 paper, where Figure 4 shows a science-wise FDR closer to 15%. In one sense, this is good (15% is certainly better than 50% or 95%), but I think the correction / retraction rate is probably noticeably less than 15% (so I think more scientists need to be correcting previous papers). From my own record (of 1st author or equivalent papers), my correction rate is currently higher than the Jager and Leek estimate (3/8, or 37.5%, versus ~15%), but my retraction rate is currently lower (0%). I am not saying I will never have a retraction (or additional corrections). In fact, I think there probably will be at least a couple of additional corrections (among my total publication record). However, that is my own personal estimate, and I would like to contribute to discussions aimed at reducing this correction rate for future studies.
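Because that 3/8 estimate comes from a very small number of papers, a minimal sketch may help put the comparison with the ~15% science-wise FDR into perspective. This is only an illustration (in Python, assuming scipy is available, and using the 3-out-of-8 count given above); it computes an exact binomial confidence interval for the correction rate, which turns out to be very wide.

```python
# Minimal sketch: uncertainty around a correction rate estimated from 8 papers.
# The 3/8 count is from this post; ~15% is the Jager and Leek (2014) Figure 4
# science-wise FDR estimate mentioned above. scipy is assumed to be installed.
from scipy.stats import beta


def clopper_pearson(successes, trials, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    lower = 0.0 if successes == 0 else beta.ppf(alpha / 2, successes, trials - successes + 1)
    upper = 1.0 if successes == trials else beta.ppf(1 - alpha / 2, successes + 1, trials - successes)
    return lower, upper


corrected, total = 3, 8  # corrections among 1st-author (or equivalent) papers
low, high = clopper_pearson(corrected, total)
print(f"Correction rate: {corrected / total:.1%} (95% CI: {low:.1%} to {high:.1%})")
# Prints roughly 37.5% with a 95% CI of about 8.5% to 75.5%, so one author's
# record is too small to compare precisely against the ~15% estimate.
```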
I believe being open to discussion can cause you to temporarily lean towards agreement, as you try to see things from the other person's perspective (even if you eventually become more confident in your earlier claim). So, even if a reviewer/editor considers a paper acceptable to publish or a grant OK to fund (within a relatively short review period), post-publication review is very important (and I think making funded grants and/or grant submissions public and available for comment may also have value).
I also think that having less formal discussions (on Biostars, blogs, pre-prints, etc.) can help the peer-reviewed version of an article to be more accurate (if people actively comment in a location that is easy to check, reviewers take public comments into consideration, and/or journals use public comments to select reviewers that will provide the fairest assessment). Across multiple platforms, the Disqus comment system provides a centralized way to look for commentary on at least one peer-reviewed journal and at least one pre-print server. While not linked directly from the paper, PubPeer also provides independent commentary on journal articles. I also have some examples of comments on both of those platforms in this blog post.
While not its primary purpose, I think Twitter can also be useful for peer review. For example, consider the contribution of a Twitter discussion to this Disqus comment. Likewise, I found out about this article on the limits of peer review through Twitter.
I also have a set of blog posts summarizing experiences that describe the need to correct or qualify results provided to the public for genomic products (although I think catching errors in papers, or better yet pre-prints, is really the preferable solution). While having something like the Disqus system for individual results (similar to ClinVar, GET-Evidence, SNPedia, etc.) may have some advantages, people can currently give feedback through venues like PatientsLikeMe (where I have described my experiences here) and the FDA MedWatch.
Update Log:
2/2019 - I would like to thank John Storey for a tweet reminding me of Jeff Leek's SWFDR publication (in a draft, prior to the public post)
7/26/2019 - public post date
8/1/2019 - remove middle author link after realizing that there will be additional middle author corrections; add COHCAP corrigendum link; add PubPeer link based upon this tweet.
8/3/2019 - add Twitter links
8/6/2019 - switch link to updated genomics summary
1/16/2020 - add link to Disqus / PubPeer comment list
4/4/2020 - minor changes
7/11/2020 - minor changes
3/20/2022 - add Oncotarget RNA-Seq comment