
Paul M. Muchinsky*
Hypergraphic Press, Inc.

This column will most likely be appreciated more by the academic members of SIOP than the practitioners. However, all members of SIOP will understand the fundamental issue I am discussing. It is the story of a performance measure that suffers from criterion contamination. That performance measure is called the Citation Index.

Here is how it works. Journal articles conclude with a list of references cited within the article. These citations are tabulated per author to create what is called the Citation Index. All the index tells you is that some researcher’s name was mentioned in a publication. The Citation Index offers no contextual information about why a researcher’s study was cited, that is, whether it was in a laudatory, neutral, or pejorative manner. Think of it as a way to simply quantify name dropping. Most university administrators think the Citation Index is right up there with apple pie and mother’s milk. It is supposedly of irrefutable quality, purity, and importance.

I wish to offer an opposing point of view. I begin with why authors like to drop the names of other authors. In most cases the name dropping serves a strategic purpose. The purpose is to influence a positive editorial decision about the submitted manuscript. People love to read about themselves. Dropping the names of people who are on the editorial review board, not to mention the editor, increases the likelihood such people will vote to further glorify themselves by approving the manuscript for publication. I have read rough drafts of manuscripts by colleagues. More often than not I see penciled notations in the margins stating, “Need to insert a few cites here.” Authors are socialized to demonstrate they are highly knowledgeable of the scientific literature. The name-dropping phenomenon has reached epidemic proportions. In the past,

* Fan mail may be sent to

July 2014 Volume 52 Issue 1