It is quite common to infer that a paper from a journal with a high Impact Factor (IF) has greater scientific value than a study that appeared in a minor journal. Essentially, citation counts are used to gauge scientific influence much as Google ranks search results by the number of linking pages. Nevertheless, many scientists have criticized this viewpoint, and alternative citation metrics, such as the h-index, are continuously being proposed.
The June editorial of Nature Neuroscience contributes to this discussion, noting that most references are probably transcribed from other reference lists rather than from the original source article. If this is true, we can speculate that most authors do not read the papers they cite, which (above all) makes citation counting far less meaningful. So, instead of citations, what about downloads? Under the (debatable) assumption that everyone who downloads a paper actually reads it, can one predict how well a paper will be cited years after publication based solely on the number of downloads it receives immediately after it appears online? As it turns out, the correlation appeared to be linear (at least for Nature Neuroscience papers).
Looking ahead, the Nature Neuroscience editors anticipate that new metrics (such as paper downloads and other "web 2.0" signals) could find a place in a compilation of aggregated statistics, painting a more accurate and informative picture of a manuscript's influence. In this context, I would place the ResearchBlogging community. Are we bloggers ready to influence citation metrics?