Metrics for Measuring Research Outputs
The last few blogs have focused on measuring your success, and then reporting it.
Here, I’m looking at some measures and why you might want to focus on them.
Impact Factor
Researchers rail against this one, mainly because it tends to favour certain journals over others: journals in fields with lots of citations and lots of articles all have higher impact factors. A journal's impact factor is the average number of citations per article published in that journal, usually calculated over the last two years. League tables and lists of impact factors are produced yearly.
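To make that arithmetic concrete, here is a minimal Python sketch of a two-year impact factor. All the numbers are invented for illustration:

```python
# A minimal sketch of a two-year impact factor, with made-up numbers.
# Citations received by articles the journal published in the last two years:
citations_to_recent_articles = 350
# Articles the journal published in those two years:
articles_published = 200

impact_factor = citations_to_recent_articles / articles_published
print(f"Impact factor: {impact_factor:.2f}")  # Impact factor: 1.75
```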
To overcome the bias towards journals, articles, and fields with lots of citations, some people report the impact factor of the journal they published in AND the highest impact factor of a journal in their discipline. This gives the reviewer of your publication list an idea of the standing of your journal in the context of your field.
This is a useful metric to use as a PhD student as it does not rely on your time in research. Instead, it makes a generalisation about the usefulness (citations) of the journal your article is published in.
Citation count
Simply the number of citations of your article. Google Scholar (mine is here) can do this automatically for you. ResearchGate does so too (my ResearchGate is here). This is useful to report when your article gets a few citations, especially if it does so relatively quickly (within days or weeks of publication).
The good (or bad) news is that most peer-reviewed articles only get one citation, maybe two; the average is about 1.75. But as many as 40% might NEVER get a citation!
Although individuals report this number, it is primarily a measure of the article, or, when aggregated, of a journal. Some individuals sum the citations across all of their articles and report that as their total citation count.
Citation count, somewhat obviously, favours older publications as the score is indefinitely cumulative.
H-index
The h-index aims to combine academic productivity (number of articles) and academic influence (number of citations of those articles). It is used instead of impact factor and raw citation count, and it is measured for individuals rather than for publications. An h-index of h means you have h articles that have each been cited at least h times; your maximum possible h-index is therefore the total number of articles you have had published. So, this is a particularly poor measure for junior researchers and PhD students.
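To show how the definition works in practice, here is a short Python sketch (the citation counts are invented for illustration):

```python
def h_index(citations):
    """Largest h such that h articles have at least h citations each."""
    counts = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` articles all have >= rank citations
        else:
            break
    return h

# Five articles: h-index is 4, because four articles have
# at least 4 citations each.
print(h_index([10, 8, 5, 4, 3]))  # 4

# A PhD student with two articles can score at most 2, however
# heavily those articles are cited.
print(h_index([150, 90]))  # 2
```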
RG Score and Research Interest
These two metrics are reported by ResearchGate; you can find them in your stats and scores areas. Research Interest gives you a score based on your research article reads, downloads, and citations. It is time limited, and you also get an indication of your position relative to other people using ResearchGate. As expected, it favours people using ResearchGate, or at the very least those connecting their publications to their ResearchGate profile.
RG Score measures your participation on ResearchGate. It includes publications, but also things like questions, answers, and follows. Again, you can get an indication of your position relative to other people using ResearchGate.
Both measures, but particularly Research Interest, have been railed against because their calculation is a black box and because they seemingly rely heavily on citation count (a measure researchers dislike).
Altmetrics
A useful dashboard to show when and where your article is mentioned on the web. Of course, your article needs to have a unique identifier, usually a DOI (Digital Object Identifier) or a PMID (PubMed ID). All peer-reviewed publications have a DOI as a matter of course, usually located near the top of the article. You can add an Altmetric badge to your website; don't forget to add the DOI to the badge to make sure the data are correct. Or you can just see what your Altmetric data look like by putting your DOI into the badge builder, then clicking on the badge that is built. Here is the Altmetric data for an article I co-authored on rural medical education and rural practice location. This approach (combining academic and non-academic sources) is useful for young researchers who might have many shares on social media, or for research that has a strong practical use but limited academic value.
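If you would rather pull the numbers yourself than embed a badge, Altmetric also offers a free public API keyed by DOI. Here is a minimal Python sketch, assuming the https://api.altmetric.com/v1/doi/ endpoint; the DOI below is a placeholder you would swap for your own, and the response field should be treated as illustrative:

```python
import requests

doi = "10.1000/example"  # placeholder: substitute your article's DOI
resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)

if resp.ok:
    data = resp.json()
    # "score" is the article's overall Altmetric attention score.
    print("Altmetric attention score:", data.get("score"))
else:
    # Articles with no online mentions simply have no Altmetric record.
    print("No Altmetric record found (HTTP", resp.status_code, ")")
```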
There are many more options, but these are the ones I have most often seen used on things like bio-sketches, resumes, CVs, and websites.
What I like the most, particularly for PhD students and ECRs, is a more qualitative approach. I suggest you limit the listing of your research publications to your best, and then tell me, the reader of your website, blog, CV, or resume, why that article is in your best list, without referring to the metrics above.
Dr Richard Huysmans is the author of Connect the Docs: A Guide to getting industry partners for academics. He has helped more than 200 PhD students, early career researchers, and established academics build their careers. He has provided strategic advice on partnering with industry, growing a career, and building new centres and institutes, as well as establishing new programs. Richard is driven by the challenge of helping researchers be commercially smart. His clients appreciate his cut-through approach. He knows the sector and how to turn ideas into reality.
To find out more, call 0412 606 178, email (Richard.huysmans@drrichardhuysmans.com) or subscribe to the newsletter. He's on LinkedIn (Dr Richard Huysmans), Twitter (@richardhuysmans), Instagram (@drrichardhuysmans), and Facebook (Beyond Your PhD with Dr Richard Huysmans).
Originally published at https://blog.drrichardhuysmans.com.