|The revolution has begun|
Tags Post Publication Peer Review PubmedCommons
By: Rafael Najmanovich
November 6 2013
Personal responsibility is essential in all aspects of science, and that includes the communication of science. The logical endpoint of this personal responsibility in science communication is openness and non-anonymity. I am a strong supporter of non-anonymous open peer review (NAOPR). NAOPR means that whenever you read a paper you can also judge the objectivity (afforded by the lack of anonymity) and the quality (as they are open) of the reviews that led to its publication. Furthermore, if any petty issues (David vs. Goliath) arise, the open aspect guarantees that there is a paper trail documenting them. I have spoken of this several times, for example here and here. But as beneficial as NAOPR will be, it represents only half of the story.
The other half of the story is what happens with a paper after it is published. I will not discuss here some interesting initiatives such as that of F1000Research, which is implementing a system for follow-ups and updates (for more info, see here).
What I really want to discuss here is PubmedCommons.
Once a paper has been published, no matter the quality of its peer review, it becomes part of the corpus of our scientific knowledge. The problem is that many of these papers are wrong. A paper could be wrong in its conclusions, methodology, claims of originality and many other ways. In some cases this is so problematic that the paper needs to be retracted. In the majority of cases, however, the problems are only partial or not severe enough to require a retraction. Still, these papers have problems that need to be highlighted, and until now the process of highlighting such problems has been very cumbersome. That is, until a couple of weeks ago, because now we have PubmedCommons.
PubmedCommons makes it possible to leave comments that become associated with the article in question and that will eventually (once the system is no longer in beta testing mode) appear to anyone who searches for that article. For the time being such comments are only visible to participants. Readers can mark a comment as useful or not. To participate, one needs to have already published an article indexed in Pubmed. This at once restricts comments to scientists (necessary, in my view, at least for the time being) and also makes the system fairer, as those who comment can also be commented upon. One crucial characteristic of the system is that people need to use their real names; because it is non-anonymous, it helps prevent abuse.
I think the system lacks one major component that, once in place, will allow PubmedCommons to reach its full potential. This component is the ability for every person commenting on an article to designate the article as Approved (A), Approved with Reservations (AR) or Disapproved (D), in the same way as done in F1000Research. A summary of the A/AR/D counts could be displayed at the top of all comments. One can immediately object that the system could be gamed by commenters making irrelevant comments that count equally in the A/AR/D tally. Here is where the already existing useful/non-useful button associated with every comment plays a part: the vote of a commenter can be weighted by the fraction of readers who found the comment useful, and reported comments don't count until assessed by moderators.
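The weighting scheme I have in mind could be sketched as follows. This is a hypothetical illustration only: PubmedCommons exposes no such data structure, and all field names ('verdict', 'useful', 'reported') are invented for the sketch.

```python
from collections import defaultdict

def weighted_tally(comments):
    """Aggregate A/AR/D designations, weighting each commenter's vote by
    the fraction of readers who found that comment useful.

    Each comment is a dict with (invented) fields:
      verdict:  'A', 'AR' or 'D'
      useful / not_useful: reader feedback counts
      reported: True if flagged and awaiting moderation
    """
    tally = defaultdict(float)
    for c in comments:
        if c.get('reported'):       # reported comments don't count
            continue                # until assessed by moderators
        votes = c['useful'] + c['not_useful']
        weight = c['useful'] / votes if votes else 0.0
        tally[c['verdict']] += weight
    return dict(tally)

comments = [
    {'verdict': 'A',  'useful': 8, 'not_useful': 2, 'reported': False},
    {'verdict': 'D',  'useful': 1, 'not_useful': 3, 'reported': False},
    {'verdict': 'AR', 'useful': 5, 'not_useful': 0, 'reported': True},
]
print(weighted_tally(comments))  # {'A': 0.8, 'D': 0.25}
```

One design choice here: a comment with no reader feedback yet gets zero weight; a live system would more likely give it a neutral default until votes accumulate.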
I think that PubmedCommons represents a revolution in science. Here are some reasons why:
1. The number of publications in any one field has exploded and is only going to grow. A/AR/D values could be used as a new way to sort search results.
2. It is difficult to judge the quality of scientists. The number of citations an article receives is a relevant measure, but not an ideal one. Aggregated A/AR/D values, or any other measure based on them (time-dependent or otherwise), could be used as a peer judgement of the quality of a scientist. In effect, it is equivalent to having had a pool of experts in the field assess a large number (eventually all) of that individual's publications, associating a quality score with each and giving short reasons why. If I put myself in the place of a hiring or grant committee, where it is often the case that not a single member is an expert in the specific subfield of the candidate, I would be happy to have A/AR/D values as an extra tool to evaluate the quality of some of the candidate's professional output.
3. The problems with journal impact factors are immense and so extensively reported (e.g., here) that I will not discuss them in this post. In the same way that A/AR/D values help judge a paper or an author, they could also be used to aggregate the output of a journal, for those who care about that. In a world where you can judge an author based on the A/AR/D values of his own output, the only people who will care about journal-aggregated values will be publishers judging the quality of their editors and reviewers. Most importantly, it will allow journals to judge the quality of the service they provide to authors and society: as an author, I will not worry about publishing in a journal with a high traditional JIF, since I will be judged not on that but on my A/AR/D values, and I can therefore pick a journal based on what it offers me and society at large. Just as a clarification, I personally already don't consider the JIF in my choice of publication venues.
Before I finish, I'd like to remind people of a few things. PubmedCommons will only work if it is massively used. You don't need to wait for the beta to end to participate: send me the PMID of one of your papers, your name and email, and I will invite you. Most important of all, whenever you comment on any paper, invite the corresponding author (whose email appears in the paper) to participate in PubmedCommons so as to give her/him the opportunity to reply. And remember, your name will be associated with that comment possibly for a lot longer than you will be around, so be objective, be fair, back up your arguments, and be prepared to defend your positions publicly.
There is one thing that PubmedCommons will not do for me: it will not make me read fewer papers in my field. Ultimately, it is my responsibility to judge the quality of work published in my field, and so even if I come across a paper with a very low A/D ratio, I will still judge the paper for myself, along with all the comments on it. So in fact I will be reading more than before, but I will be wiser for it.
PS: There are some minor technical issues with the system that annoy me, such as the apparent maximum length of a comment. I had to split one comment in two. This limitation only adds clutter to the system; it is unnecessary and PubmedCommons should remove it.
Comments (1)
Pandelis Perakakis:
Perhaps the true revolution will happen when we manage to dissociate the evaluation from the publication process using author-guided, formal, open peer review on all OA content before, during and after journal submission.
|Using the Topology of Metabolic Networks to Predict Viability of Mutant Strains (F1000 Evaluation)|
By: Rafael Najmanovich
March 27 2013
Genome-wide metabolic reconstructions have numerous uses, one of which is predicting the effect of knocking out one or more genes or inhibiting the function of the corresponding protein(s). Once a gene is deleted or its protein inhibited, matter flows along other available metabolic routes. Such rearrangements may drastically disrupt the production of biomass or prevent it altogether, signifying that the gene/protein is essential. Such proteins represent potential therapeutic targets.
Flux Balance Analysis (FBA) is widely used to predict gene essentiality [1]. Wunderlich and Mirny introduced Synthetic Accessibility (SA) in 2006 as an alternative method [2] based solely on the topology of the metabolic network. The idea is derived from synthetic chemistry labs, where the difficulty of creating a new molecule is measured as the number of synthetic steps necessary to produce it from available starting materials. In the case of metabolic networks, the idea is to calculate the number of steps necessary to produce biomass compounds from input metabolites.
The validity of the SA approach in predicting essential genes was verified in E. coli and S. cerevisiae [2]. When a gene is knocked out or its protein inhibited in silico, the SA will necessarily either remain unchanged or increase (even infinitely), reflecting the longer path (or the absence thereof) necessary to reach output compounds using alternate metabolic routes.
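The step-counting idea can be sketched with a greedy expansion over a toy network. This is a minimal illustration of the concept, not the authors' implementation; the network, function name and greedy strategy are my own, and the result is only an approximation of a minimal path count. Starting from the input metabolites, fire any reaction whose substrates are all available and count firings until every target compound is producible; a knockout is simulated by removing a reaction, and an infinite result flags the corresponding gene as essential.

```python
def synthetic_accessibility(reactions, inputs, targets):
    """Greedy estimate of SA: number of reactions fired, in order of
    substrate availability, until all targets are producible from the
    inputs. Returns float('inf') if some target is unreachable."""
    available, fired, steps = set(inputs), set(), 0
    targets = set(targets)
    while not targets <= available:
        progressed = False
        for name, (substrates, products) in reactions.items():
            if name not in fired and substrates <= available:
                available |= products
                fired.add(name)
                steps += 1
                progressed = True
                if targets <= available:
                    return steps
        if not progressed:
            return float('inf')  # no remaining route: in-silico lethal
    return steps

# Toy network: direct route A -> B -> biomass, plus a longer detour A -> C -> B.
toy = {
    'r1': ({'A'}, {'B'}),
    'r2': ({'B'}, {'biomass'}),
    'r3': ({'A'}, {'C'}),
    'r4': ({'C'}, {'B'}),
}
print(synthetic_accessibility(toy, {'A'}, {'biomass'}))      # 2
knocked = {k: v for k, v in toy.items() if k != 'r1'}        # knock out r1
print(synthetic_accessibility(knocked, {'A'}, {'biomass'}))  # 3 (detour)
```

As the sketch shows, removing a reaction can only leave the count unchanged or increase it, which is exactly the monotonic behaviour described above; removing r2 as well would return infinity, the in-silico signature of an essential gene.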
SA and FBA are equivalent in terms of accuracy, around 60% for E. coli and 80% for S. cerevisiae [2]. We implemented both SA and FBA in our lab and independently verified these results. Furthermore, we also tested B. subtilis, for which a metabolic network exists [3] and the full list of essential genes is known [4], obtaining a success rate of 92% with SA (equivalent to the 94% obtained by Oh et al. [3] with FBA). Wunderlich and Mirny point out that the equivalent success rates of FBA and SA suggest that the success of the former should be attributed mainly to network topology.
Some advantages of SA over FBA include the simplicity of the approach (in terms of implementation and execution) and the fact that it requires no knowledge of reaction stoichiometry (or initial ranges for reaction rates). The latter, in my opinion, is a very interesting aspect of SA, as it allows its application to mixed networks that integrate gene regulatory networks, metabolic networks and other cellular processes that are more difficult to define in terms of stoichiometry and reaction rates.
1. Orth, J. D., Thiele, I. & Palsson, B. Ø. What is flux balance analysis? Nat Biotechnol 28, 245–248 (2010).
2. Wunderlich, Z. & Mirny, L. A. Using the topology of metabolic networks to predict viability of mutant strains. Biophys J 91, 2304–2311 (2006).
3. Oh, Y.-K., Palsson, B. Ø., Park, S. M., Schilling, C. H. & Mahadevan, R. Genome-scale reconstruction of metabolic network in Bacillus subtilis based on high-throughput phenotyping and gene essentiality data. J Biol Chem 282, 28791–28799 (2007).
4. Kobayashi, K. et al. Essential Bacillus subtilis genes. Proceedings of the National Academy of Sciences 100, 4678–4683 (2003).
|In defence of open peer review|
Tags open peer review
By: Rafael Najmanovich
November 7 2012
I wrote the following text from the perspective of a PLoS ONE academic editor; however, it clearly applies to any journal. Last week was Open Access Week, so to keep the momentum going, here are my two cents. I am sure I am not the first to suggest what follows, but frankly I didn't have the time to search. Apologies for any unassigned credit.
Whenever I review a paper, I stand by my comments and would have no problem if the authors knew my identity. Further still, I wouldn't mind if the whole community saw my review. While reviewers and editors strive to detect and prevent the publication of articles with errors, a lot of what is published, even in PLoS ONE, is still probably wrong. Many scientists, and certainly many people in society at large, assume that all peer-reviewed papers should be considered 'true' in some sense, as if the peer review process were some sort of final word.
One of the advantages of PLoS ONE over other journals is that the importance of a paper (not how much it is cited) is supposed to be judged by the community after publication, somewhat along the lines of arXiv.
I believe that if PLoS ONE were to make reviews and editor decision letters public, even while keeping reviewer identities anonymous (although not ideal, in my view), it would add to the quality of PLoS ONE, as it would make it possible for the community to take into consideration (and criticize) reviewer and editor comments, adding to the openness of the peer review process.
This would have some desirable unintended consequences. First, reviewers and editors could start to be assigned a quality factor that may influence future assignments relative to the subject areas they claim expertise in. Second, in the case of non-anonymous reviews, it would make it possible to openly assign credit to reviewers for the important work they perform, with the possibility also that the cumulative effort of a reviewer be taken into consideration in any judgements related to career advancement.
Secrecy is often cited as necessary to allow reviewers to be objective and fair in their reviews without fear of feuds or revenge in the case of harsh but necessary comments or rejections. However, more often than not, secrecy allows reviewers to get away with unfair demands and attacks: 'I am given hell to publish, so I will do the same', 'I have an undisclosed conflict of interest and will review this paper anyway to advance my own interests', etc. While it is the editor's responsibility to prevent these problems, some undoubtedly go unnoticed. The advantage of making reviews public is that it may allow readers to judge for themselves the demands of the reviewer and identify attacks. Further, if reviewer identities are known, this may prevent all these problems with secret peer review from happening in the first place.
In my view the problems with secret peer review outweigh the benefits. However, it is so ingrained in scientific publishing culture that a good first step would be to make reviews public even if reviewer identities remain secret (for the time being).
Any comments, suggestions, criticisms?
|Loss of ATP-binding: The case of VRK3|
Tags VRK3 inactive active kinases mutations ATP binding
By: matthieu chartier
October 17 2012
Around 50 human protein kinase domains are predicted to be enzymatically inactive (pseudokinases). VRK3 is one of them.
The VRK family is part of the CK1 kinase group, a small group of kinases that are very similar to each other in sequence but very distinct from other kinase groups.
VRK1, VRK2 and VRK3 make up the VRK family. Of the three, only VRK3 is known to be inactive.
- VRK1 is an active nuclear kinase whose substrates include p53, ATF, Jun, BAF, and histone H3, and is involved in cell cycle, chromatin condensation, and transcriptional regulation.
- VRK2 has two splice forms that localize either to the nucleus and cytoplasm or to the ER and mitochondria.
- VRK3 is the only VRK to lack enzymatic activity.
VRK1, VRK2 and VRK3 were monitored during thermal denaturation with and without ATP. VRK1 and VRK2 showed a shift in melting temperature, unlike VRK3. This suggests that VRK3 has no binding affinity for ATP.
Interestingly, VRK3 had the highest native Tm of the three proteins. This supports the notion that VRK3 is stable and rigid even in the absence of ATP.
The G-loop, usually glycine-rich, provides the conformational flexibility enabling hydrogen bond formation between the backbone of the loop and the γ-phosphate of ATP. The lack of ATP binding in VRK3 is likely caused in part by degradation of the G-loop motif.
In the G-loop of VRK3, Q177 would form steric clashes with ATP. Also, D175 occupies the ATP binding site (near the region where the phosphate would be), mimicking an ATP phosphate. These and other changes produce a highly acidic ATP binding pocket that is likely to repel rather than accept the negatively charged phosphates of ATP.
The residues present in the G-loop of kinases can be good indicators of protein inactivity/activity.
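As a toy illustration of that observation, one could scan a putative G-loop segment for the canonical glycine-rich GxGxxG consensus. This is only a sketch: the example sequences are invented, and real activity classification requires structural context, as VRK3 itself demonstrates.

```python
import re

# Canonical glycine-rich loop consensus in protein kinases: GxGxxG.
G_LOOP = re.compile(r'G.G..G')

def has_canonical_g_loop(segment):
    """Return True if the putative G-loop segment matches GxGxxG."""
    return bool(G_LOOP.search(segment))

print(has_canonical_g_loop('LGEGSFG'))  # intact motif -> True
print(has_canonical_g_loop('LQDASEV'))  # degraded motif (cf. VRK3's Q177/D175) -> False
```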
The adenine ring binding region
In the adenine ring binding region of CK1 kinases, residue D86 usually accepts a hydrogen bond from ATP via its backbone carbonyl. In VRK3 the equivalent proline (P260) has an altered backbone conformation and can no longer bind ATP.
L88 in CK1 donates a hydrogen bond to ATP via its backbone amine. In VRK3, L262 has shifted conformation such that the side chain would sterically clash with the adenine ring.
Furthermore, VRK3 has a conserved substitution to a large hydrophobic residue (F313) in the C-terminal lobe, at a position that is conserved as a smaller hydrophobic residue in active VRKs and CK1s. The combined effect of these changes is to fill in much of the region where the adenine ring would normally bind.
1. Scheeff, E. D., Eswaran, J., Bunkoczi, G., Knapp, S., & Manning, G. (2009). Structure of the Pseudokinase VRK3 Reveals a Degraded Catalytic Site, a Highly Conserved Kinase Fold, and a Putative Regulatory Binding Site. Structure (London, England : 1993), 17(1), 128–138. doi:10.1016/j.str.2008.10.018
|Every disease is a rare disease: How to make personalized medicine work (in theory).|
Tags personalized medicine
By: Rafael Najmanovich
October 17 2012
A comment on the formidable BMJ blog entry by Richard Smith: Stratified, personalised, or precision medicine
There is no such thing as a disease; what exists is a person who is unhealthy (leaving the definition of healthy for the reader to decide). Luckily, many such states of unhealthiness in different people present varying levels of similarity at all scales, from general physiological similarities (coughing is common to unhealthy conditions with many different causes) to specific molecular causes (sickle cell anemia: one specific SNP in the beta-globin gene). At some level of specificity and selectivity of similarities, a 'disease' can be defined. Just as we understand that there are differences in the causative agent of a disease (for example, multiple variants of the flu virus) and that these differences can affect treatment options, there are also multiple differences among the people afflicted by a condition. Well, it takes two to tango, doesn't it?
By definition, the more complex a disease is, the more it is affected by multiple factors and therefore the more it will depend on individual differences among the people afflicted. As our technology improves with personal genomes, and in the future hopefully personal transcriptomes and proteomes, it shall be possible to recognize that every disease is personal, not just from a sociological and psychological point of view but from a causative one. Thanks to commonalities between people and diseases, some conditions can be understood using model systems; others can only be understood and treated in a unique way in humans; others still may harbour crucial differences between ages, genders or particular human populations. There is a continuum of levels at which particular conditions can and should be understood and then treated. In many cases we don't yet know that we are grouping together people who shouldn't be treated the same. As our knowledge of their differences vis-a-vis disease states increases, we shall be able to split these common groups and treatment forms.
The economic difficulties facing personalized medicine have been very well described in the blog post above. What I want to suggest here is that as we come to understand the differences above, the bottleneck will be regulatory and ethical issues regarding human trials.
As an extreme case, imagine a disease recognized today as rare, afflicting a single patient in the world. Let's say that some research group has the will and the means to develop a new therapy for it. Well, if I were the patient, and if my condition meant that my quality of life without intervention would be worse than with it, even taking into consideration the potential risk of undergoing treatment, I would do it. The same is already true in other areas of medicine: patients often weigh the risks and benefits of complicated, dangerous and often unproven or marginally useful surgical procedures. So why not do the same with the development of novel therapeutic procedures that are a little more sophisticated than a knife?
It turns out that we already do this for terminal diseases on compassionate grounds, with the FDA's nod of approval. So why not extend it to non-terminal conditions?
In an imaginable future we will have access to the full extent of all the 'omic' (genomic, transcriptomic, epigenetic, proteomic, etc.) particularities of an individual, and with time we will learn more and more about how these relate to and affect the entire system, through the integration of structural and systems biology methods. It may sound far-fetched, but it is not: the technology exists, and every human is a finite, although perhaps ever-changing, machine.
As the number of biological molecules is finite, the number of interactions between biological molecules is also finite. Can we envision a situation in which we are capable of intervening in a safe way on a single molecular interaction at a time?
As we approach this imaginable future and learn more and more that each disease is unique to its bearer, we will need to test novel therapeutics on ever smaller samples, whose results are unlikely to be applicable to other groups of people in the future. How can we test these therapies? Clinical trials will no longer give clear answers, due to the lack of proper controls and the small samples. What will be necessary, first and foremost, is to ensure that toxicity is acceptable for any proposed therapy. After that, well, it's a clinical trial with one patient at a time. Like in the old, old, old times...
Comment added on Oct 23: This link shows how we can move toward more personalized medicine with extra investment. There is a wealth of data waiting to be discovered in previous unsuccessful clinical trials.
|Regorafenib: a new multifunctional kinase inhibitor approved by FDA|
Tags kinase inhibitors polypharmacology metastatic colorectal cancer
By: Rafael Najmanovich
September 27 2012
A new multifunctional inhibitor of the VEGFR-2 (a.k.a. KDR) and TIE2 kinases, developed by Bayer and Onyx, has been approved by the FDA to treat metastatic colorectal cancer. This represents a step forward for polypharmacology, against which I always encounter the same argument: "Why not 'simply' develop two inhibitors?" (single quotes placed by me). Answer: as if it wasn't hard enough to create one, two is, well, twice as hard. There are other minor arguments about the treatment being easier for the patient to follow. Anyway, the structures of the kinase domains of the two kinases involved are known, and I shall try to dock the drug later to see how it may bind to both proteins. Interestingly, both proteins have been tested by Davis et al. [1], and so have their two closest neighbours in the kinome tree (see attached figure, produced using our kinome render tool). I wonder how specific Regorafenib is... As a matter of fact, not that specific at all [2]. In fact, it seems to be as much a shotgun as most other (kinase) inhibitors.
1. Davis, M. I. et al. Comprehensive analysis of kinase inhibitor selectivity. Nat Biotechnol 29, 1046–1051 (2011).
2. Wilhelm, S. M. et al. Regorafenib (BAY 73-4506): a new oral multikinase inhibitor of angiogenic, stromal and oncogenic receptor tyrosine kinases with potent preclinical antitumor activity. Int J Cancer 129, 245–255 (2011).