
Seven Questions to Ask a Vendor Before Purchasing a Test  

You've decided that testing is a viable and useful option for your organization.  You also know what knowledge, skills, abilities, or other characteristics you want to assess.  Now what?  How do you find a good quality test or tests that measure what you want to measure?

Thousands of employment testing products and services exist in the marketplace.  They vary in many ways, including cost, time to administer, and overall quality.  When deciding to purchase a test, managers typically experience information overload, and obtaining professional help in interpreting testing information is often necessary.  While most managers will not have the time, resources, or background to engage in test development, this material is provided to assist the manager in planning and reviewing work done by professionals.

To help you make a good decision related to employment testing, ask the following seven questions before purchasing a test.  If a test publisher or vendor cannot or will not answer these questions, you should be wary.  Knowledgeable and experienced test publishers typically welcome a client's detailed interest in their product.  Test publishers and vendors also want to provide whatever information is necessary to help an organization make a good decision about employment testing.

1.  What does the test measure?  

Look for a clear and concise answer to this question.  It is difficult to develop a good quality test without a clear definition of what is to be measured.  Also, consider how the answer compares with what needs to be assessed in your organization.  Be wary of a vendor who describes their test as measuring many seemingly unrelated factors.  Also be wary of tests that consist of a limited number of items; for example, a 25-item test that purports to measure 10 different factors.  Measuring a characteristic or quality consistently and accurately typically requires more than just a couple of items.

2.  What research and process was used to develop the test?  

What was the theory or experience on which the test was based?  Was the test developed on people who are similar to your organization's applicants or employees?  What was the process used to develop the test?  At the very least, this background information is important because it provides insight into the logic, care, and thoroughness with which the test was developed.

3.  What experience and/or education do you have that qualifies you to develop and/or sell this test?  

The educational background and work experience of the people who developed the test are important, as are references who can speak to the capabilities and experience of the test developer or vendor.  To have confidence in the test, and in the event of a legal challenge, you want test developers or vendors whose education and/or experience relate both to the specific content of the test and to test development and validation.  Also, some tests require the test administrator or individuals interpreting test scores to have certain credentials (e.g., MA, PhD) that reflect coursework in statistics, test interpretation, or test development and validation.

4.  What evidence do you have related to the reliability of this test?  

Reliability refers to the consistency of test results.  There are several ways to assess the reliability of a test, and some are more appropriate for certain situations (e.g., when multiple raters or evaluators are involved, or when one wishes to know about the stability of results over time).  Experienced and knowledgeable test publishers and vendors have information on the reliability of their testing products.  For more detailed information on how to assess the reliability of a test, see Testing and Assessment: An Employer's Guide to Good Practices.
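
To make the idea of consistency concrete, here is a minimal sketch, using invented scores, of two common reliability estimates a vendor might report: a test-retest correlation and Cronbach's alpha for internal consistency.

    # Hypothetical illustration of two common reliability estimates.
    # All scores are made up; a real evaluation would use the vendor's data.
    import numpy as np

    # Test-retest reliability: correlate scores from two administrations
    # of the same test to the same people a few weeks apart.
    time1 = np.array([22, 30, 25, 28, 35, 19, 27, 31])
    time2 = np.array([24, 29, 26, 27, 36, 21, 25, 30])
    test_retest_r = np.corrcoef(time1, time2)[0, 1]
    print(f"Test-retest reliability: {test_retest_r:.2f}")

    # Internal consistency (Cronbach's alpha): do the items on one form
    # hang together?  Rows = test takers, columns = item scores.
    items = np.array([
        [4, 5, 4, 3],
        [2, 3, 2, 2],
        [5, 5, 4, 5],
        [3, 3, 3, 4],
        [4, 4, 5, 4],
    ])
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    alpha = (k / (k - 1)) * (1 - item_vars / total_var)
    print(f"Cronbach's alpha: {alpha:.2f}")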

5.  What evidence do you have related to the validity of this test?  

Validity refers to the accuracy of the inferences made from test results (e.g., how accurate is it to say that a person with a higher test score is likely to be a better performer?).  Knowledgeable and experienced test publishers typically have several forms of validity evidence.  For example, they may have evidence showing a relationship between test scores and an outcome of interest (e.g., supervisory ratings of job performance, average monthly sales, turnover).  They might also have evidence documenting a link between the content of the test and the requirements of the job.  Other evidence might show how the test relates to other measures of the same characteristic.  Experienced and knowledgeable test publishers have, and are happy to provide, information on the validity of their testing products.  Judgments about what types of validity evidence are appropriate for a given test depend on a number of factors, which are outlined in the Standards for Educational and Psychological Testing.  For more detailed information on how to assess the validity evidence associated with a particular test, see Testing and Assessment: An Employer's Guide to Good Practices.
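
As a rough illustration of criterion-related validity evidence, the sketch below correlates invented test scores with invented supervisor ratings.  A vendor's technical report would present this kind of coefficient computed on real applicant or employee data, along with details about the sample, the job, and how the criterion was measured.

    # Hypothetical sketch of criterion-related validity: how strongly do
    # test scores relate to an outcome the organization cares about
    # (here, made-up supervisor ratings of job performance)?
    import numpy as np

    test_scores = np.array([55, 62, 48, 70, 66, 52, 75, 60, 68, 58])
    ratings = np.array([3.1, 3.4, 2.8, 4.2, 3.9, 3.0, 4.5, 3.3, 4.0, 3.2])

    validity_r = np.corrcoef(test_scores, ratings)[0, 1]
    print(f"Criterion-related validity coefficient: {validity_r:.2f}")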

6.  What evidence do you have that demonstrates the lack of bias or discrimination in your test?  

Look for evidence that the test does not contain bias on the basis of race or sex; that is, that the test relates to outcomes in a similar manner for all individuals.  This does not necessarily mean that the test will produce similar results for different groups of people; it does mean that the test is not a biased indicator of the outcome of interest.  For example, in a typical employment decision context, more women than men will score low on a test of upper body strength.  The test, however, would not be considered biased if women and men with similar scores achieved similar performance on the job.
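
One simple way to picture this kind of check is sketched below with invented numbers: fit the relationship between test scores and job performance separately for each group and compare the resulting lines.  Vendors typically rely on more formal differential-prediction analyses, but the underlying logic is the same.

    # Hypothetical sketch of a differential-prediction check: does the
    # test predict the outcome in a similar way for two groups?  Group
    # labels and all scores are invented for illustration.
    import numpy as np

    def fit_line(scores, performance):
        slope, intercept = np.polyfit(scores, performance, 1)
        return slope, intercept

    group_a_scores = np.array([50, 55, 60, 65, 70, 75])
    group_a_perf = np.array([3.0, 3.2, 3.5, 3.7, 4.0, 4.2])
    group_b_scores = np.array([48, 54, 58, 63, 69, 74])
    group_b_perf = np.array([2.9, 3.2, 3.4, 3.6, 3.9, 4.1])

    slope_a, intercept_a = fit_line(group_a_scores, group_a_perf)
    slope_b, intercept_b = fit_line(group_b_scores, group_b_perf)

    # Similar slopes and intercepts suggest the test predicts performance
    # comparably for both groups, even if average scores differ.
    print(f"Group A: performance ~ {slope_a:.3f} * score + {intercept_a:.2f}")
    print(f"Group B: performance ~ {slope_b:.3f} * score + {intercept_b:.2f}")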

7.  What data do you have that will help me interpret test scores in my organization?  

Test scores cannot be interpreted by themselves.  Whether a test score is considered good or poor may depend on the distribution of scores in a comparison group, typically referred to as a norm group.  The test publisher should provide information about the different norm groups available for the test being considered.  Ideally, you want a norm group that is similar to the people in the position for which testing is being used.  There are other ways to interpret test results, including expectancy charts and cut scores, which are developed based on information about how the score relates to outcomes of interest.  The publisher should make available data that can aid in appropriate test score interpretation.
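
As a small illustration of norm-referenced interpretation, the sketch below (with an invented norm group) converts a raw score into an approximate percentile rank relative to that comparison group.

    # Hypothetical sketch of norm-referenced interpretation: an
    # applicant's raw score only becomes meaningful relative to a
    # comparison (norm) group.  The norm scores here are invented.
    import numpy as np

    norm_group_scores = np.array([41, 45, 48, 50, 52, 53, 55, 57, 60, 63,
                                  65, 66, 68, 70, 72, 74, 75, 78, 81, 85])
    applicant_score = 68

    # Percentile rank: share of the norm group scoring below the applicant.
    percentile = (norm_group_scores < applicant_score).mean() * 100
    print(f"An applicant scoring {applicant_score} is at roughly the "
          f"{percentile:.0f}th percentile of this norm group.")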