Tuesday, July 30, 2019

Personal Thoughts on Collaboration and Long-Term Project Planning: Long-Term Maintenance / Support

I decided to go ahead and post this because of an article by Adam Siepel that I read today, describing the broader need to take maintenance / support into consideration for projects (and I already had most of this content in a draft).  For example, I thought it was interesting that he brought up the R50 Research Specialist Grant.

In terms of my own experience, needing to provide support for COHCAP outside of working hours (back in 2018) was one factor that made it clear to me that the "templates" would have support issues if I continued to expand my research topics at that rate (and that I needed to focus on fewer projects in more depth).

That said, my personal opinion is that it would be inappropriate to convert COHCAP to a fee-based license, because having a variety of free programs to test for each project has been very helpful to me (and I recommended testing both COHCAP and methylKit, since I couldn't guarantee any strategy would work out for a particular project).  My impression of DNA.land was somewhat similar: I think it is good as a free option, but I don't think it would be appropriate to charge for the results that I saw (so, I hope I have misunderstood something about their transition plans).  However, that leaves open the question of what should be done for support (to avoid an accumulation of overtime hours as more algorithms are developed).

One thought is that suggesting a donation to City of Hope of $10 per project, whenever users find the software helpful, might be appropriate (or some possibly larger amount, from non-scientists who want to keep open-source software free but maintained).  However, I would guess that I have already raised a larger amount through (general) matching funds, so I don't think this is especially urgent.

Otherwise, in terms of alternative funding strategies (instead of patents / licenses), these are some ideas that I had:

a) Charge for in-person training on open-source programs / databases?  Allow free public (and possibly delayed) support, but charge for private support via e-mail?

b) Maybe have small grants for software training / support? Perhaps target a $50-70k salary for a bioinformatician in the lab?  If that is not enough, consider a $100-150k grant for 2 support staff for 1 program (and also encourage users to participate in discussions with other analysts, such as on Biostars, to get a variety of opinions).  This was also briefly discussed in this article on how to support open-source software.  More recently, I think this would also be like the Chan Zuckerberg "Essential Open Source Software for Science" grant.

I hope this doesn't become an issue (like for KEGG or RepBase, as I understand it), but I noticed that there is an NCBI link to OMIM as well as the OMIM.org link that suggests a donation.  So, perhaps a mix of donations and grant funding covers their needs?

Even in terms of the above options, I tested out the "Developer" support for AWS, but I didn't actually get a response within 24 hours (I ended up solving the problem on my own after that, and reverted back to the free "Basic" plan).  So, if you do charge for support, you have to be capable of prompt, daily discussions to work toward solving user difficulties in a variety of contexts.

As another example, I recently canceled my subscription to the New York Times.  At $4/month, the cost was reasonable.  However, due to the extra effort to view the articles on public computers, I essentially stopped reading the articles.  When I thought about it, I already donate $3/month to Wikipedia.  So, the cost isn't really the limitation: the barriers added by the license / subscription (and the relatively good content that I can get for free) are the reason that I stopped supporting the New York Times.  If they did something similar to Wikipedia (at least for some articles), then I would support them (and, likewise, perhaps I should increase my donation to Wikipedia).

I also have these ideas about limits / suggestions for the use of commercial bioinformatics software, which is kind of an extension of this post (this was also discussed in the Genome Biology paper that I read when I first made the post public).

Change Log:
7/30/2019 - public post date
7/31/2019 - trim 1st and 2nd paragraph; add NYT example
8/5/2019 - add matching link and de-emphasize grant
8/6/2019 - minor change
8/12/2019 - add link to RepBase subscription
9/11/2019 - change assumption that donation requires non-profit model
9/16/2019 - add link for DNA.land
11/21/2019 - add link for Chan-Zuckerberg open-source software funding

Sunday, July 28, 2019

Favorite Book Recommendations

I wanted to mix in a fun blog post :)

In alphabetical order by title:


  • "Every Patient Tells a Story" by Lisa Sanders
    • A technical adviser for House describes the creative process of discovering causes and solutions to patients' ailments.
    • Another option is "Complications" by Atul Gawande, which is a set of experiences from a surgeon (some of which are about general medical practice / training)
    • Yet another option is "How Doctors Think" by Jerome Groopman.  All 3 books describe case studies, but I believe there is more of an emphasis on types of "cognitive mistakes" in this book (which I think is useful, even if you are not very familiar with medical terminology).
  • "Jurassic Park" by Michael Crichton
    • What can I say? I loved dinosaurs as a kid, and I still enjoy reading this book.
  • "Reason for Hope" by Jane Goodall
    • A very interesting memoir in which moral (and spiritual / religious) perspectives are given in the context of chimpanzee behavior (and human treatment of animals), along with some biographical information about Jane Goodall's early life and family
  • "Siddhartha" by Hermann Hesse
    • A good, short book about the need for life experiences to provide understanding for moral leadership
  • "The 7 Habits of Highly Effective People" by Stephen R. Covey
    • This book was recommended to me by my aunt.  I liked its emphasis on the need to be motivated by something other than money (such as a "mission statement"), often specifically in the context of running a business
    • I like the concept of developmental stages from "Dependence" to "Independence" to "Interdependence"
    • Relationships in personal life are also described
    • I also think the use of the word "Habit" is important, because it is frequently emphasized that there are no quick fixes for long-term interpersonal problems
    • I also like "Crucial Conversations" by Grenny et al. and "Set Boundaries, Find Peace" by Nedra Glover Tawwab
    • If you are looking for something shorter, then I would recommend "How to Reduce Workplace Conflict and Stress" by Anna Maravelas
  • "The Language of Life" by Francis Collins
    • I believe this provides a very well-written overview of genomics research (with a fair representation of what can and cannot be achieved, which I think is still mostly true)
    • I also consider "The Genome Odyssey" by Euan Angus Ashley to be a good option.
    • If you are looking for something newer (and longer), I also recommend "She Has Her Mother's Laugh" by Carl Zimmer.
    • "The Gene" by Siddhartha Mukherjee is another alternative option
      • This book also has a PBS special
      • I purchased the large-print edition (by accident, but I think that made it a little easier to read), so my page numbers are probably off compared to others.  However, the relevant content from this Twitter discussion is in the chapter "The Future of the Future".
    • As yet another option, I found "Resurrection Lily" by Amy Myer Shainman to be an excellent collection of personal stories from the author as well as at least one friend and family member (and the book is named after her grandmother).
      • While they don't include anywhere near the detail of those case studies, I also have some notes about the related population-level statistics here.
  • "Zoobiquity" by Barbara Natterson-Horowitz and Kathryn Bowers
    • Co-written by two authors, primarily from the perspective of an MD (a cardiologist at UCLA who sometimes assists with veterinary work at the LA Zoo).  I think this provides a writing style that is a little different from some other books about wildlife.
    • Provides a number of interesting examples, including insights into disease / behavior through comparative medicine, as well as pathogens that can infect both people and other animals.  For example, I learned that it was a veterinary pathologist at the Bronx Zoo who corrected an initial misdiagnosis of St. Louis Encephalitis during the 1999 West Nile outbreak.  This is also described in a summary from the One Health Commission (which is something else that I learned about).  The creation of the National Center for Emerging and Zoonotic Infectious Diseases is also described.

Change Log:

7/28/2019 - creation and public post
10/20/2019 - add 7 Habits
12/6/2019 - add link to Carl Zimmer book
12/25/2019 - add link to another communication book
3/29/2020 - add Zoobiquity to the list
7/5/2020 - add link to "The Gene"
10/25/2020 - add link to "Resurrection Lily"
12/26/2020 - add link to "How Doctors Think"
3/26/2022 - add link to "Crucial Conversations" and "Set Boundaries, Find Peace"

Friday, July 26, 2019

Personal Thoughts on Collaboration and Long-Term Project Planning

Our opinions can change over time, and some long-term effects may not be noticeable until you have 5+ years of experience.

While I still don't think I have everything figured out, I am using this page to organize my thoughts on some topics that may be of use to the broader community.  I also hope that the update/change logs may be helpful for giving credit to feedback from others during discussions.

Nevertheless, for these posts, I am going to try and focus on what I believe I understand most clearly:

  • Long-Term Maintenance / Support
  • Post-Publication Review
  • Staff in Shared Resources

Again, it is probably a little early for me to be giving advice (since I don't have a solution worked out for myself yet), but I hope sharing my experiences can be helpful to other people as I sort out the details of a sustainable workload for myself.  Having the patience to work on agreed processes step-by-step is also important, but I believe some of this information may be important for future changes (even if they don't occur in the immediate future).

To be clear, I very much enjoy working with collaborators as a Bioinformatics Specialist in a Core Facility.  So, while some of what I am saying indicates room for future improvement, I have an overall positive impression of my work environment and the researchers that I have worked with (who are passionate about helping other people).

Plus, even though I think some of this content is important for long-term discussions, I also want to emphasize that you can be genuinely proud of putting in your best effort to help people, and that there is some need for short-term support (such as during a temporary difficulty in getting additional funding), or at least for giving people the chance to think carefully about whether a more major transition is necessary.

Update Log:

7/26/2019 - public post date
7/29/2019 - trim down introductory paragraph
7/30/2019 - add link for maintenance / support, and modify preceding sentence
8/4/2019 - minor edit after some proofreading by a family member
8/6/2019 - minor changes
4/30/2020 - add link to code / data sharing details (either required or suggested)

Personal Thoughts on Collaboration and Long-Term Project Planning: Post-Publication Review

I have another post more broadly describing the importance of the comments / corrections that I have initiated on my own papers, as well as thoughts about the science-wide error rate.

However, those are not all from work I did in a shared resource at City of Hope.  So, I thought I should summarize a subset of those points here:


  • COHCAP comment #1: correction of minor typos (now upgraded to a formal corrigendum)
  • COHCAP comment #2: my personal opinions emphasizing the following points
    • "City of Hope" should not have been used in the algorithm name
    • I've more recently gained a better appreciation for the need to test methods for every paper (so, I mention that readers should not consider the best COHCAP results to be completely automated).  Given that COHCAP stands for "City of Hope CpG Island Analysis Pipeline", this is relevant to my discussions of "templates" versus "pipelines"
  • COHCAP comment #3: while the Nucleic Acids Research editors were very helpful in encouraging me to look more closely at a discrepancy in the listing of the machine for processing the 450k array, they declined to post the comment because it was ultimately determined to be an error in the GEO entry rather than the Supplemental Materials for the COHCAP paper.
    • I mention this in a little greater detail on Google Sites; however, I was able to confirm that the HiScanSQ (not the BeadArray) was used to process the samples because i) the BeadArray is not capable of processing a 450k array and ii) City of Hope never owned a BeadArray.
  • 2nd Author Correction: Table #2 was wrong (a duplication of Table #1, although the table description was correct)
  • 2nd Author Comment #1: Use of the phrase "silhouette plot" was not precise
    • While this could potentially be an example of a concern for a bioinformatician within a biology lab, I worked on this paper when I was in the COH Bioinformatics Core.  So, I think the most important lesson is to develop habits where you stop whenever you encounter something you don't know, and to set a pace (and total number of projects) where you expect to take some time to learn more about what you see in the literature (and how to ask the right / best questions of collaborators, who are likely also busy working on multiple projects).
  • 2nd Author Comment #2: The phrase "silhouette plot" was also used in another paper, which was published before this 2nd author paper (even though this project was started first)
    • I think this is important in terms of better appreciating the interdependence of labs supported by the same staff member (although I have started trying to add acknowledgements for templates, and to make notes in follow-up analyses whenever code is copied between labs prior to publication).
  • Middle-Author Papers
    • It is important that I am fair to everybody (regardless of whether they are a collaborator).  However, I also realize this is a sensitive issue that requires some additional internal communication.
      • So, I have reduced the amount of detail for these examples.  While I think there has been at least 1 correction that was initiated more than a year ago, I am (slowly) continuing to follow up whenever something is or was not correct.
      • Sorting through the details for corrections is like managing the correct workload for new projects.  If I try to figure out what exactly happened with too many papers at once, I will be more likely to make mistakes.  So, at any given time, I try to focus more on ~3 issues that I know about.
      • In other words, I will be honest if asked about any errors (or potential errors).  However, if I have the advantage of being able to have discussions with people who I know better, then I think it is probably wise to focus on that as much as possible.
      • I am willing to add a link to notes about middle-author papers.  However, if it is possible to wait until everything on that list has been corrected, I think that may be preferable.
    • So far, I don't think that I caused most of the middle-author paper errors, but I did make some mistakes for middle-author papers.  So, I provide a couple of examples (omitting some specific details) below:
      • GEO Sample Label Update: Since GEO doesn't have a change log, I thought I should mention that there was one prostate cancer project that I helped prepare for a GEO upload whose sample labels were not ideal (even though the patient IDs used for sample pairing were correct, the samples should have been called "sample" rather than "patient"; that has since been corrected).  This was not a huge problem, but most other GEO corrections are due to me not knowing about the machine (so, they were errors from somebody else that I didn't catch, due to a gap in my knowledge).  To be fair, I thought I needed to mention this one because I was the one who accidentally created the error (rather than passing along somebody else's error).
      • GEO Machine / Base Calling Methods Update: There was at least one submission where machine and methods needed to be updated, both of which involved at least some previous misunderstanding on my part.

If I were to give advice to my previous self, I would say it is important for the project lead to understand the full project (and to plan to spend a substantial amount of time revising and critically assessing your results).  If there is something that you don't understand, do everything you can to discuss it with the other authors prior to paper submission.  After all, you will likely have to give at least a partial explanation to people asking about your project, such as in face-to-face discussions where co-authors may not be present.  It is also important to capture the full amount of work required for a paper (including post-publication support).

You don't necessarily have to be a project lead to need to plan for an appropriate workload, although taking responsibility for a paper is much more difficult if you aren't the project lead (if you caused the mistake, then somebody else may experience more severe consequences for it).

I think it is also important to emphasize personal limits (and the solutions they provide).  If your optimum workload is 5 projects and you work on 10 projects, then you are going to encounter difficulties.  However, I think it can then help if you take your time and gain a better intuition about what you don't know (and therefore what you either need to spend more time on or possibly focus less on overall).  I admittedly still have to figure out exactly what produces the best work-life balance, and I think you have to wait to notice some of the accumulation of follow-up requests and/or post-publication support / review.  However, I think I have gotten a better feel for what that "optimal" day is like: I just have to figure out how to consistently have that each day (on the scale of years).  In other words, if you are feeling overwhelmed, then I would recommend focusing on previous positive experiences as hope that you can improve by decreasing your responsibility / workload.  I also needed to learn to recognize and manage stress better (sometimes with medication).

I think a lot of what I described above can also come down to simple mistakes.  For example, I can tell that I make more mistakes if I work overtime on a regular basis or if I haven't been well-rested.  While I didn't exactly cause all of the errors that I described above, I think it is necessary for me to take responsibility whenever I was first author (or equivalent).  If I can describe myself as precisely as possible (which I realize is still a work in progress), then I hope that can help others as well (at every level of collaboration in putting together a paper).

P.S. There were 2 general points (previously under the "middle author" section) that I think may be better suited to another blog.  I have already moved that content (and I will provide links here when it is public).  In the meantime, I would say those points fall into the categories of i) the best way to correct minor errors (which you can see in this ResearchGate discussion) and ii) explaining the need for (and estimating the time required for) providing the data and code needed for a result to be reproducible.

Update Log:

7/26/2019 - public post date
7/27/2019 - revise concluding paragraph
7/29/2019 - move majority of concluding paragraph back to a draft; try to be more conservative / clear with commentary
7/31/2019 - add link to COHCAP corrigendum
8/1/2019 - mention there will need to be additional corrections
8/5/2019 - minor changes (+ add back in concluding paragraph, followed by additional trimming/revision)
8/6/2019 - minor changes
8/13/2019 - mention GEO update
9/19/2019 - mention data deposit and code sharing
9/20/2019 - expand middle-author section; minor change
9/21/2019 - minor change
9/28/2019 - add experiences learned from IRB / patient consent process
9/29/2019 - fix typos; reword recent changes
10/02/2019 - add Yapeng link
10/15/2019 - add note for gene length calculation, as well as another link to blog post (with some separated content in this post)
11/1/2019 - mention ChIP-Seq issue
1/28/2020 - add intermediate set of ChIP-Seq notes
4/4/2020 - minor changes + reduce middle author content + move general points
4/24/2020 - add link to ResearchGate discussion
9/7/2020 - minor change (removing some specific information)
12/16/2020 - add another middle-author example without any details (shifting from specific to general)

Personal Thoughts on Collaboration and Long-Term Project Planning: Staff in Shared Resources

While I am still having discussions to understand and precisely describe my 10+ years of genomics experience (and its influence on current projects, particularly those started post-2016), I have some opinions that I would like to share for broader discussion:


  1. If bioinformatics support comes from a shared resource, I think there may be benefits to limits on percent effort (as a fraction of split salary) and/or the number of PIs supported by individual staff members.  A colleague kindly referred me to this paper, which recommended a minimum limit of 5% effort.  My tentative opinion is that there may be a benefit to having a maximum limit of 4-5 PIs (although the specifics probably vary, depending upon whether the analyst has a Master's degree or a PhD, for example).
    • For me, I feel very comfortable in saying that I need a maximum limit of 3 average-difficulty projects per day.
    • Also, if possible, I believe that gaining in-depth knowledge on a limited number of projects should help with publishing in higher impact journals and/or developing novel methodology.
    • I think this matches the minimum level of effort for NCI awards, as well as the concept of having a conflict of commitment.  I also learned about both of those at work.
  2. I believe the PI / project limits for shared staff should be stricter when developing software that requires long-term support.  It is important to remember that the average (or minimum) amount of time per project will be inversely related to the total number of projects (see the sketch after this list).  So, if you want to provide prompt feedback to users of your software (and fair support for all projects handled by an analyst), this needs to be taken into consideration when scheduling projects.
  3. If an analyst is supporting multiple labs, I believe the best-case scenario may involve splitting time between labs that knowingly collaborate with each other.  For example, if the set of projects among all labs is known, that may help with scheduling submissions (and expected revisions) for papers that are expected to be published in the highest impact journals.  Regardless of whether this is done, the ability to support any given project depends upon the concurrent support of other projects, and I think that at least needs to be kept in mind.
  4. I think there should be transition periods when making changes in staff support.  While I'm not 100% certain about the details, I think a yearly or quarterly review may be a good idea.  I don't believe it is a good idea to make support decisions on a daily or weekly basis, and it often takes me 1-2 months to feel comfortable with a project that was previously worked on by somebody else.
  5. At least for me, it helps to have somewhat frequent discussions / analysis to help remember the details of a project.  For example, I would probably recommend having at least monthly discussions (and I think weekly or daily discussions / analysis are probably preferable).
  6. Likewise, I can discover and fix errors when spending a substantial amount of time on critical assessment of results.  For example, my recommendation would be to expect to have 10 "rounds" of analysis (hopefully, some of which include creative / custom / novel analysis).
    • This isn't perfect, but I know I have found (and corrected) some errors this way.  This is kind of similar to being able to make new discoveries (or correct additional errors) when you re-read a paper.
  7. It is my opinion that specialized protocols should be an area of expertise for individual labs (which may or may not be offered externally), but this should not be a responsibility / service of the core.
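To make point #2 above concrete, here is a minimal sketch in Python of that inverse relationship between the number of concurrent projects and the average time available per project; the 40-hour week and the example project counts are my own illustrative assumptions, not recommendations:

```python
# Toy model (illustrative assumptions only): with a fixed number of working
# hours, the average time available per project shrinks in inverse
# proportion to the number of concurrent projects.

def avg_hours_per_project(weekly_hours: float, num_projects: int) -> float:
    """Average hours available per project, per week."""
    return weekly_hours / num_projects

for n in (3, 5, 10):
    print(f"{n:>2} projects -> {avg_hours_per_project(40, n):.1f} hours/project/week")

# Output:
#  3 projects -> 13.3 hours/project/week
#  5 projects -> 8.0 hours/project/week
# 10 projects -> 4.0 hours/project/week
```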
That said, I am expecting the "Update Log" to reflect at least a few additional changes (as I have more discussions - primarily internal at first, but I think some broader feedback is also valuable).

While I am primarily focusing on the project management part of shared staff in the points above, the concept of an optimum workload can be important in various situations.  For example, if your optimum workload is 5 projects and you are trying to take on 10+ projects, you will either do lower-quality work or not have an even distribution of time across projects.  However, that also indicates a possible solution: if somebody is having difficulties supporting a certain number of projects / PIs, they may actually be capable of producing excellent-quality work if they focus on fewer projects in more depth.

Update Log:
7/26/2019 - public post date
9/14/2019 - add note on discussion intervals
9/16/2019 - remove nearly duplicated sentence
9/23/2019 - mention being able to catch errors as I perform additional analysis for a particular project
9/28/2019 - add point / opinion #7
4/16/2020 - add link about effort limits and conflict of commitment
4/17/2020 - minor change

Personal Experiences with Comments and Corrections on Peer-Reviewed Papers

So far, I have at least 5 publications with examples of comments and/or corrections:


  • Coding fRNA Comment (Georgia Tech project, Published While at Princeton).
    • It is important to remember that anything you publish has a certain level of permanence (and you can be contacted about a publication 10+ years later).
    • On the positive side, I think it is worth mentioning that your own desire to work towards helping people and being the best possible scientist is important: for example, peer review can help if you do everything you can before submission, but I recently provided more public data and re-analysis that I think improves the overall message of this paper (on my own initiative; for example, please see this GitHub page, with PDFs for text and comment figures).
    • So, I truly believe peer review can help, but I think personal responsibility and transparency are at least as important.
  • Corrigendum for 2-Word Typo in BAC Review (UMDNJ, Post-Princeton, Pre-COHBIC).
    • More important than the correction itself, I think it should be emphasized that 6 months of working in a lab (especially without ever performing the experimental protocol being emphasized) is not enough time to justify writing a review.
  • As 1st and corresponding author, I have issued two comments (and a corrigendum) for the COHCAP paper describing a method for analysis of DNA methylation data (City of Hope Bioinformatics Core).
    • There was also a third comment that NAR decided not to publish (regarding an error with the machine used in GEO, which is described in more detail on my Google Sites page and briefly mentioned in the related blog post).
  • As a 2nd equal-contribution author, there was a correction regarding the 2nd table, as well as 2 comments related to imprecise use of the term "silhouette plot" (City of Hope Bioinformatics Core)
  • [initiated and completed corrections to middle-author papers and deposited datasets] (City of Hope Integrative Genomics Core, Post-Michigan).
I trimmed down the details above because I think the formatting on my Google Sites page is a little better, and I listed the specifics for the City of Hope papers in a separate post.  So, given that only two other papers were pre-COH, I thought it may be better to shift this post more towards the higher-level discussion.

We all have other factors that will contribute to the total amount of time on a project (such as allocating some time for personal life), and I would usually expect work responsibilities to increase over time.  For example, if you are scrambling to complete your work as a graduate student, you may want to be cautious about setting goals for jobs that would have an even greater amount of responsibility.

Some people may be afraid of pointing out similar issues in previous papers.  While possibly somewhat counter-intuitive, I think this can help build trust in the associated papers / researchers: if researchers are not transparent about their actions and overall experience with a set of results, that can contribute to public distrust (and to the development of bad habits that can become worse over time).  Plus, if research is an on-going process of small steps towards an eventual solution, readers should expect each paper to acknowledge some limitations and/or unsolved problems (and a fair representation of results should help in identifying the most important areas for future research).

One relatively well-known example of the impossibility of being 100% accurate in all predictions is that Nobel Laureate Linus Pauling had a PNAS paper proposing a triple-helix structure for DNA.  There was even a Retraction Watch blog post that brought up the issue of whether this paper should be retracted.  I don't believe anyone currently cites that paper with the belief that DNA has a triple-helix (rather than a double-helix) structure.  However, the time taken to correct and address mistakes needs to be taken into consideration for project management, and my point is that I am trying to encourage more self-regulation of corrections (since there are papers in relatively high impact journals whose main conclusion is wrong, yet they haven't been retracted or corrected).

While it is harder to pass judgment on other people's work, I hope that I can be a good example for other people identifying issues with their own previous work.  For example, one counter-argument to the claim that most published scientific results are wrong is the Jager and Leek 2014 paper, where Figure 4 shows a science-wide FDR closer to 15%.  In one sense, this is good (15% is certainly better than 50% or 95%), but I think the actual correction / retraction rate is probably noticeably less than 15% (so, I think more scientists need to be correcting previous papers).  From my own record (of 1st author or equivalent papers), my correction rate is currently higher than that Jager and Leek estimate (3/8, or 37.5%), but my retraction rate is currently lower (0%).  I am not saying I will never have a retraction (or additional corrections).  In fact, I think there probably will be at least a couple of additional corrections (among my total publication record).  However, that is my own personal estimate, and I would like to contribute to discussions that try to reduce this correction rate for future studies.
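For reference, here is the simple arithmetic behind those percentages as a small Python snippet (the numbers are taken directly from this post; nothing new is estimated):

```python
# Toy calculation using only the numbers quoted in this post.
science_wide_fdr = 0.15   # ~15% science-wide FDR (Jager and Leek 2014, Figure 4)
corrected, total = 3, 8   # my 1st author (or equivalent) correction record

correction_rate = corrected / total
print(f"My correction rate:        {correction_rate:.1%}")   # 37.5%
print(f"Science-wide FDR estimate: {science_wide_fdr:.0%}")  # 15%

# The broader point: the literature-wide correction / retraction rate is
# probably well below the ~15% estimated error rate, which is why I argue
# for more self-initiated corrections.
```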

I believe being open to discussion can cause you to temporarily lean towards agreement, as you try to see things from the other person's perspective (even if you eventually become more confident in your earlier claim).  So, even if a reviewer/editor considers a paper acceptable to publish or a grant OK to fund (within a relatively short review process), post-publication review is very important (and I think making funded grants and/or grant submissions public and available for comment may also have value).

I also think that having less formal discussions (on Biostars, blogs, pre-prints, etc.) can help the peer-reviewed version of an article to be more accurate (if people actively comment in a location that is easy to check, reviewers take public comments into consideration, and/or journals use public comments to select reviewers that will provide the most fair assessment).  Across multiple platforms, the Disqus comment system provides a centralized way to look for commentary on at least one peer-reviewed journal and at least one pre-print system.  While not linked directly from the paper, PubPeer also provides independent commentary on journal articles.  I also have some examples of comments on both of those mediums in this blog post.

While not the primary purpose, I think Twitter can also be useful for peer review.  For example, consider the contribution of a Twitter discussion to this Disqus comment.  Likewise, I found out about this article about the limits of peer review from Twitter.

I also have a set of blog posts summarizing experiences that describe the need for correction / qualification of results provided to the public for genomic products (although I think catching errors in papers, or better yet pre-prints, is really the preferable solution).  While having something like the Disqus system for individual results (kind of like ClinVar, GET-Evidence, SNPedia, etc.) may have some advantages, people can currently give feedback in mediums like PatientsLikeMe (where I have described my experiences here) and the FDA MedWatch.

Update Log:

2/2019 - I would like to thank John Storey's tweet for reminding me of Jeff's SWFDR publication (in a draft, prior to the public post)
7/26/2019 - public post date
8/1/2019 - remove middle author link after realizing that there will be additional middle author corrections; add COHCAP corrigendum link; add PubPeer link based upon this tweet.
8/3/2019 - add Twitter links
8/6/2019 - switch link to updated genomics summary
1/16/2020 - add link to Disqus / PubPeer comment list
4/4/2020 - minor changes
7/11/2020 - minor changes
3/20/2022 - add Oncotarget RNA-Seq comment
 
Creative Commons License
My Biomedical Informatics Blog by Charles Warden is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 United States License.