Citation metrics remain the most prevalent method of analyzing a journal's success. They're based on the assumption that when an article is cited by another academic, it's had an impact on their research.
They're often used as a way to estimate a journal's influence on its subject area - and, by proxy, its perceived quality.
But there are many reasons why an academic might choose to cite another person's work. Those reasons don't always reflect the 'quality' of the cited work.
Nevertheless, citations provide a way to measure the extent to which the published academic community has engaged with a given piece of research.
Citation values can be obtained from a number of services, including Google Scholar, Microsoft Academic Search, CrossRef, PubMed Central and Altmetrics. However, the majority of journal analysis is based on multidisciplinary indexing databases, such as Web of Science and Scopus.
Unlike Google Scholar and other automated indexes, Web of Science and Scopus only index content after it has been reviewed for academic quality. So we know that a citation in one of these databases comes from academic material.
Citations are counted in a database only if both the cited and the citing article are indexed. This means that citation scores are likely to be higher within larger databases.
For this reason, the same article is likely to have a larger citation count in Google Scholar than in either Scopus or Web of Science. This is not because Scopus or Web of Science has 'missed' citations, but because these databases do not index all of the citing content.
You won't find many measures that simply compare papers by the total number of citations they have received. If you do find one, run away.
For obvious reasons, any measure calculated from total citations will be heavily biased towards older papers: an older article gains an advantage simply because it has been available to cite for longer.
To combat this, most metrics set a 'citation window': the period of time after an article's publication during which citations will be included in the calculation for that metric.
For example, the impact factor has a 2-year citation window. For an article published in 2013, only citations received in the 2 years after publication (2014 and 2015) will count towards an impact factor.
This means that the age of an article is to some extent controlled for. But the window is based on the calendar year (or cover year), so a paper published in January 2013 has almost a full year's head start over a paper published in December 2013.
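The filtering step behind a citation window can be sketched in a few lines of code. This is an illustrative example only: the citation years below are made up, and real metric providers work from indexed database records rather than simple lists.

```python
# Hypothetical citation years for one article published in 2013.
citation_years = [2013, 2014, 2014, 2015, 2016, 2018, 2020]

publication_year = 2013
window = 2  # a 2-year citation window, as used by the impact factor

# Keep only citations received in the 2 calendar years after publication.
in_window = [year for year in citation_years
             if publication_year < year <= publication_year + window]

print(len(in_window))  # → 3 (the 2014, 2014 and 2015 citations)
```

Note that the 2016-2020 citations are discarded entirely, which is how the window stops older articles from accumulating an ever-growing score.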
Most journal citation metrics are a measure of the average number of citations per paper in a given set of articles.
The 2015 Journal Impact Factor, for example, measures the average number of citations received in 2015 to papers published in the previous two years (2013 and 2014).
This aggregation of data means it is not necessarily representative of individual articles within the journal - one may be very highly cited while others have not been cited at all.
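The averaging described above can be shown with a short worked example. All of the figures here are invented for illustration; real impact factors are calculated by Clarivate from Web of Science data.

```python
# Hypothetical figures for a journal (not real data).
citations_in_2015_to_2013_items = 150
citations_in_2015_to_2014_items = 90
citable_items_2013 = 120
citable_items_2014 = 100  # 'non-substantive' items are excluded from this count

# 2015 impact factor: citations received in 2015 to papers from 2013-2014,
# divided by the number of citable items published in 2013-2014.
impact_factor_2015 = (
    (citations_in_2015_to_2013_items + citations_in_2015_to_2014_items)
    / (citable_items_2013 + citable_items_2014)
)

print(round(impact_factor_2015, 2))  # → 1.09
```

Because this is an average, the same 1.09 could come from a handful of heavily cited papers and many uncited ones - which is exactly why a journal-level figure says little about any individual article.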
When using citation metrics to compare research, there are a number of factors to consider, including:
Different disciplines (and sub-disciplines) have different citation behaviors. The Social Sciences and Humanities tend to cite more slowly and cite a larger proportion of books (as opposed to journals) compared with scientific disciplines.
Metrics should not be compared across subjects unless these factors are accounted for.
Review papers tend to attract the most citations. Case studies are often invaluable for teaching or practical work, but tend to be less well cited in academic research.
This doesn't mean that they are of poor quality. Note that 'non-substantive' items, such as meeting abstracts and editorials, are usually excluded from the denominator of citation metrics.
Older articles will have more citations. If using a metric that measures 'total citation counts', keep in mind that the metric will be skewed towards older papers or towards more established academics.
There are many sources of citation information (e.g. Web of Science, Scopus, Google Scholar), and the citation scores for a single article are likely to be highest in the largest database (Google Scholar).
Most citation metrics are tied to a single database, but not all are. When a metric can be calculated from more than one source, it is important to note which database the figures came from.