Cracking the Code to High Impact Publishing

A few weeks ago, I made a post on LinkedIn celebrating an academic milestone: more than 10,000 citations over the past five years. I acknowledged that citations are not a perfect metric for scientific excellence and promised a follow-up article on what I’ve learned from successfully publishing in journals like Science and Nature.

Someone then messaged me on LinkedIn and shared some feedback from one of their colleagues on my post:

"Sounds like half-cook work then forcing others to complete it, pointing out things he was just too lazy to think about, then harass editors. Because he's from a fancy lab/ well known supervisor reviewers feel like they need to play nice and put effort into making it a good review. How many desk rejections would he have gotten without the fancy lab association? It's a slap in the face for other scientists that work their butt off for a well thought out and researched paper, and then are desk rejected because their papers don't fit current trendy key words."

Oof. Tell me how you really feel, eh?  

As I was reading this, I couldn’t help but chuckle slightly because there are certain grains of truth here, especially around the privilege of prestige.

Hard Truths on High-Impact Publishing

I recognize that academia is, in many ways, broken. In my view, the incentives are heavily skewed toward individual empire-building through a publish-or-perish mindset that exists largely to produce more academics, leading to a ratio of 1000 Ph.D. students graduating for every 1 professorship job opening. I know this is not sustainable, and I have my own thoughts on how to improve academia: shifting the focus to job and soft-skills training, increasing exchanges and work placements, and revising graduate curricula to be case-based rather than lecture-based. But that’s a topic for another day.

Before I dive into the strategies that I found to be successful in high-impact publishing, let me lay out some hard truths.

  1. Prestige does matter. A paper from a top university or a famous professor gets more attention and care than one from an early-career researcher. The rich do get richer: a top university is often better funded with cutting-edge facilities, and a well-cited professor has access to larger grants and, in theory, better students. An editor bombarded by hundreds of submissions a week can be forgiven the logic that more prestige = more resources = better research = higher likelihood of being cited.
  2. Multi-disciplinarity > specificity. Publications with multiple highly-cited professors, each in different domains (for example, computational modeling x experimental validation), will likely lead to more citations. The greater the number of fields that could potentially cite a single paper, the greater the attraction of that paper to an editor.
  3. Trends are impactful. The world is changing rapidly, and what is important today may not be as important tomorrow. You can hate “trendy keywords” all you want, but the reality is the world of academic funding and publishing is moving towards a more mission-oriented, grand challenge, problem-based approach. A high-impact paper must always have an answer to “so what?”.
  4. Good science isn’t enough. In an ideal world, the “best” science would get into the “best” journals. The problem is that the definition of “best” is highly subjective. In reality, the most citable science gets into the best journals, again a consequence of the incentive structure of academia. There can be a myriad of reasons why your paper doesn’t get a second look: maybe someone from a more prestigious university submitted similar work, maybe it’s lacking “novelty” like breaking a performance record or being the first of its kind, maybe it has a boring title, or maybe it was the last paper before the editor went to lunch.
  5. Hard work is necessary, but not sufficient. People often discount the amount of luck that went into their success (see self-attribution bias). Make no mistake, I worked hard for every paper I published, but I also recognize that I had a tremendous amount of luck and privilege. A Science paper from 2016, “Quantifying the evolution of individual scientific impact”, found that impact is ultimately a function of both luck and productivity; a rough sketch of that model follows this list. You absolutely need both.

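For the quantitatively inclined, here is the gist of that paper’s model as I remember it. Treat this as my paraphrase, not the authors’ exact notation:

```latex
% Q-model sketch (Sinatra et al., Science 2016), paraphrased from memory.
% c_{10,i\alpha}: citations accumulated 10 years after publication of
% paper \alpha by scientist i, factored into a stable per-scientist
% "skill" term Q_i and a random "luck" draw p_\alpha.
c_{10,i\alpha} = Q_i \, p_\alpha
% Your best paper is your luckiest draw scaled by your skill, so
% productivity matters because more papers mean more draws of p_\alpha:
c^{*}_{10,i} = Q_i \max_{\alpha} p_\alpha
```

Skill sets the multiplier, luck sets the draw, and productivity buys you more draws.
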
I can understand why some have qualms about high-impact publishing as I’ve described it. I get it: I’m one of the lucky few who have benefited from this system and advanced my career because of it. If you don’t agree with it, then stop reading now; my advice isn’t for you, and what I share below will likely just make you more cynical. This is the system we scientists have, not the one we deserve.

It Always Starts with Good Science

There is a certain bar that needs to be passed to be published in a high-impact journal. One can argue whether a specific publication merits a Science or Nature publication, but in the aggregate, science that is published in these journals is often creative, well thought out, frontier pushing, and novel. You can’t turn water into wine, but with enough care and attention, you can turn grapes into champagne.

A Framework for Impactful Papers

Below is my Impactful Paper Framework, which gives tangible suggestions for how to write and publish high-impact papers. It is by no means exhaustive, but more than 10,000 citations in five years has to mean something, right?

[Image: the Impactful Paper Framework]

The Minimum Viable Paper

The Minimum Viable Paper (MVP) is a mindset I’ve used to keep moving forward during the paper-writing phase, and it draws from the Agile project management framework. The heart of the concept is to not let perfect be the enemy of good: start with an MVP that you incrementally improve as the project continues. There will always be another experiment to run, another analysis to do, or another characterization to refine, but you should always ask yourself, “Will the outcome of this experiment change my story? Do I absolutely need it to prove my hypothesis?” As scientists, we are on a never-ending quest for knowledge, but papers are finite deliverables with a beginning, middle, and end. Use an MVP to increase your pace of publication writing, deliver more concise work, and avoid getting lost down the rabbit hole.

Lead with Figures 

During my Ph.D., we had regular group meetings where everyone would present a slide deck on their project’s progress and receive peer feedback and questions. The format was always the same regardless of whether you had just started or were close to submission: 3-5 slides, each slide a figure. Even at the start of a project, we would present what our potential figures could be, which often followed the traditional structure of a story with a beginning, middle, and end.

The first figure sets the context: typically a chart, graph, or schematic that represents the state of the art or a conceptual process. Figures 2-4 present the experimental results; in our group, the first was often some form of computational modeling, the second material characterization, and the third performance data. The last figure (which can also be a table) ties everything together and outlines the specific, often record-breaking, metric you want to highlight.

Your paper should be coherent and understandable, with a clear story, using only the figures. The reality is that many scientists skim figures to decide whether a paper is worth more of their time; don’t lie, you’ve likely done this too when doing a literature search on a new topic. Invest time into making your figures attractive and useful so that others don’t miss your work.

Define Metrics & Break Records

This is going to be a contentious recommendation, I can already feel it. Okay, here goes: breaking a record usually leads to higher impact, and if you can’t break a record that exists, define a new metric where you can.

Breaking records and being quantitative about how you have pushed the boundary of a field is one of the most consistent ways to get published in a high-impact journal. But what do you do if your results are good, but they aren’t record-setting good? Well, you can find an aspect of your work that outperforms others and highlight that as the metric for comparison. 

Here’s a concrete example from my Nature Catalysis paper from 2018, “Catalyst electro-redeposition controls morphology and oxidation state for selective carbon dioxide reduction”. In this work, I developed a new catalyst to electrochemically convert CO2 into ethylene, the main precursor to consumer plastic. Typically, the field uses Faradaic efficiency as the measure of selectivity; the higher the Faradaic efficiency (e.g. 90%), the better. My catalyst’s selectivity was only around 40%, which was good at the time but by no means record-breaking. So I defined a different metric, the ethylene/methane ratio, which I used as a proxy for a very specific kind of selectivity: C2 versus C1 molecules. I then plotted my catalyst against the other best-in-class results of the time (Figure 1d) to show how much better my catalyst was by this very specific definition of selectivity. It’s even mentioned in the last line of the abstract.

[Image: Figure 1d from the paper, comparing the ethylene/methane ratio of this catalyst against best-in-class results]
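
To make the reframing concrete, here is a minimal sketch of the arithmetic in Python. The roughly 40% ethylene Faradaic efficiency is the one number taken from the story above; the methane values and the “baseline” catalysts are entirely hypothetical, invented only to show how a result can trail on the standard metric yet lead decisively on a newly defined one.

```python
# Hypothetical Faradaic efficiencies (fraction of charge going to each
# product). Only the ~40% C2H4 value comes from the text; the rest are
# invented for illustration.
catalysts = {
    "this work":  {"C2H4": 0.40, "CH4": 0.002},
    "baseline A": {"C2H4": 0.35, "CH4": 0.10},
    "baseline B": {"C2H4": 0.50, "CH4": 0.15},  # record FE, but poor C2/C1
}

def selectivity_ratio(fe_ethylene: float, fe_methane: float) -> float:
    """Ethylene/methane FE ratio: a proxy for C2 vs. C1 selectivity."""
    return fe_ethylene / fe_methane

for name, fe in catalysts.items():
    ratio = selectivity_ratio(fe["C2H4"], fe["CH4"])
    print(f"{name}: FE(C2H4) = {fe['C2H4']:.0%}, C2H4/CH4 ratio = {ratio:.1f}")
```

On raw Faradaic efficiency, the hypothetical “baseline B” holds the record; on the newly defined ethylene/methane ratio, “this work” wins by orders of magnitude. That is the reframing in miniature.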

Now I know what you’re thinking: Phil, that seems sneaky and disingenuous. While I understand the protest, I did not fake data or lie about the results; I simply presented them in an advantageous way. Take a look at some of the most highly cited publications in your field and see how many of them defined metrics or records that they then beat in the same paper.

Stick-to-itiveness

You have your paper. You’ve written a killer cover letter: quantitative, broken into bullet points, easy to read, with an action-oriented title built on active verbs. You feel pretty good about your chances of going to review. Then you get an email back that starts with, “Unfortunately, we are not able to proceed to review at this time”.

You have two options at this point, and they depend entirely on the feedback from the editor. If you receive an email that reads like an automated response, with no tailored feedback or encouragement, then move on; this wasn’t the right timing or scope for this journal, and you’ll get ’em next time.

However, if you receive even a sliver of feedback on scope, content, or anything else that shows it wasn’t an automated response, then here’s what you do: call the editor and ask for feedback. If the editor felt it was worth their time to give you a bit of feedback alongside the rejection, it means there’s promise, and with a bit of improvement the paper may actually go to review.

When you call the editor (this is key: you can’t just email them, you need to have a conversation), you can often negotiate to at least get the paper reviewed. Respectfully plead your case, be empathetic to the editor’s viewpoint, and point to how your paper is part of a larger, more impactful body of work coming down the pipeline. Remember that editors are human too, and rejecting work is often the worst part of their job. They genuinely want to publish your work; you just need to give them more reasons to do so.

Finally, don’t give up on the review process, no matter how long it takes. Publishing in high-impact journals takes a long time with multiple rounds of reviewer iteration. My first Nature paper was published a year and a half after the original submission. This is normal. 

Leverage Feedback

When scientists receive negative feedback, they are often tempted to simply submit to another journal. The feedback may request additional work, and some of it brings value while some doesn’t, but ultimately it is your choice whether to invest the time to get the paper into that journal. Reviewer feedback can feel unfair, and it’s only natural to take judgment of your work personally, but remember: if this reviewer felt this way, others likely will too. I’ve always taken reviewer feedback to heart and asked myself why my message didn’t get across, or why the reviewer wasn’t satisfied with my logic. In scientific publishing, much like in life, turning critical feedback into positive action is a learned and very valuable skill.

Tactically, you can even use the review process itself to improve your paper, knowing it likely isn’t yet at the bar needed to publish in a specific journal. This ties into the Minimum Viable Paper concept: intentionally submit something to get critical feedback that improves the work. Many times, as a group, we submitted to Science knowing full well that the likelihood of acceptance was low, then used the reviewer feedback to improve the paper and submitted it to Nature.

Promote, Promote, Promote

Congratulations! You’ve published work in a high-impact journal. But don’t pop the champagne just yet; you still have work to do. In this digitally connected world, it is imperative that you share your paper with your network and get as many views and clicks on the journal website as possible.

Journals now have access to advanced digital analytics: they can track traffic, downloads, and social media uptake of papers. This helps editors determine what could be impactful, and they may even put additional resources toward promoting the work or ask someone in the field to write a commentary on it. The same digital marketing and growth strategies that advertisers use to get you to buy consumer goods can be used by you to make sure people read your paper. Build a personal brand and an online following, share links to your paper with your network (I know a professor who would send a mass email to his contact list every time a new paper was published), and invest in a good website. For reference, you can check out my personal website here.

Final Thoughts

I’ve been going back and forth about publishing this article, my reluctance driven largely by those who may react negatively to what I have to say. I’ve rewritten parts of it many times, but ultimately I decided to share what I know and the perspective I have.

Again, I recognize that I have had an immense amount of luck and privilege in my career, but I’ve also identified key levers that helped me succeed. I want to share as much of that with others as possible, because the more people who find success, add value, and solve problems, the better off we all are.
