
Strategic Evaluation:
A New Perspective on the Value of I-O Programs

Dale S. Rose, Jane Davidson, Jeanne Carsten, and Jennifer Martineau

Last spring in New Orleans, a group of 100-plus SIOP members began a network of members interested in evaluating the impact of our work on business outcomes (a.k.a. program evaluation). Over the summer, a group of us (from three different time zones) had a long conversation about the issues we face in trying to do what we had termed Strategic Evaluation. Several of us commented that it seemed very few I-O psychologists were familiar with the specialist discipline of program evaluation and the enormous potential it has for improving organizational effectiveness. Further, it seemed that few I-O psychologists were conducting program evaluations of I-O interventions. We had all shared the experience of talking to I-O psychologists who had never heard of program evaluation and who had never considered techniques for evaluating our programs. Clearly, this was a topic for TIP.

So, what is Strategic Evaluation? Without writing a textbook here, basically we're talking about assessing program impact on a set of pre-defined criteria. We do this using behavioral science and other methodologies to answer the question: To what extent did this program (e.g., 360-degree feedback for customer service representatives) achieve outcomes of value to the organization and its stakeholders? This process becomes strategic to the extent that these measured program outcomes (e.g., improved customer service) are aligned with valuable organizational goals (e.g., provide the best customer service in our industry).

Ok, but is that really I-O psychology? Historically, I-O psychologists have focused on methods for developing high-quality programs but placed less emphasis on evaluating them once they are in place. Common I-O methods such as needs assessment, organizational diagnosis, and test validation are tools used to assure the effectiveness of our interventions on an a priori basis. In essence, these methods are the front-end of any intervention: they are intended to make sure we start on the right foot. If validation and other core I-O techniques are the front-end of an intervention, Strategic Evaluation can be seen as the back-end of a project in that it is conducted during and following implementation. From the perspective of Strategic Evaluation, both front and back ends are needed to assure a high-quality intervention that meets organizational needs.

Whereas Strategic Evaluation would likely not be included in most I-O psychologists' current definitions of our field, it can be seen as an important complement to the work we do. Most I-O definitions would likely focus on developing solid research-based tools for improving workplace performance. Far from replacing this orientation, Strategic Evaluation adds a systematic, results-driven follow-up to the rigorous program development techniques that are currently the core of our field.

Ideally there should be a close link between developing I-O interventions and evaluating them: a self-correcting feedback loop, for those inclined toward schematic models of the process. By understanding (at the front end) the outcomes being assessed in a Strategic Evaluation, the intervention can be designed to appropriately address those outcomes. Likewise, it is important for evaluators to understand how interventions were developed. Thus, given the complexity of our field, it is essential that I-O psychologists be involved in evaluating our interventions, because others may not adequately understand the methods we employ.

This still sounds awfully abstract; what exactly does Strategic Evaluation look like? Really, it could be any systematic analysis of the implementation and goal-related impact of an I-O program. But perhaps a more hands-on example would be useful. Here's a snapshot of a strategic evaluation that might make it a bit more tangible.

***

A study is currently underway to evaluate the impact of a customer service skills training program. This program is one piece of a larger effort to move the organization toward a more customer-focused business model (e.g., technology and business process changes). The primary goal is to build customer satisfaction, so customer satisfaction is a key outcome measure in the evaluation. The objectives of the evaluation are to assess the extent to which the training program builds knowledge of the new business model and specific service delivery skills, and to assess changes in organization metrics (customer satisfaction) targeted by the new business model. Evaluation measures include: (a) a participant reaction survey and knowledge assessment administered at the end of the program; (b) posttraining self and supervisor ratings of on-the-job skill performance (pretraining supervisor ratings of skill were collected for a sample of participants); (c) third-party ratings of performance (including both trained and untrained staff), collected before, during, and after the training roll-out; and (d) customer satisfaction ratings collected quarterly or annually, depending on the business unit.

Additional variables are also being tracked (e.g., work environment, leadership practices), to provide executives with more information about the implementation of the new business model.

The results to date indicate that the program has achieved the stated objectives, based on participant performance on the posttraining knowledge assessment (a pretraining needs assessment had been conducted to establish baseline knowledge). Posttraining self and supervisor ratings of skill performance indicate participants are applying the new service delivery skills successfully on the job. In addition, the posttraining skill ratings are significantly higher than pretraining ratings provided by the supervisors for the pre-post sample.
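
As a minimal sketch of how such a pre-post comparison might be tested, a paired-samples t-test on matched supervisor ratings is one common approach. The variable names and numbers below are hypothetical illustrations, not the study's actual data or analysis:

```python
# Hypothetical illustration of a pre-post comparison on matched supervisor ratings.
# The values below are invented for the sketch; they are not the study's data.
from scipy import stats

pre_ratings = [3.1, 2.8, 3.4, 3.0, 2.9, 3.3, 2.7, 3.2]   # pretraining supervisor ratings
post_ratings = [3.6, 3.4, 3.9, 3.5, 3.3, 3.8, 3.1, 3.7]  # posttraining ratings, same participants

# Paired-samples t-test: are posttraining ratings reliably higher for the same people?
t_stat, p_value = stats.ttest_rel(post_ratings, pre_ratings)

# A mean gain score gives a practical sense of magnitude alongside significance.
mean_gain = sum(post - pre for post, pre in zip(post_ratings, pre_ratings)) / len(pre_ratings)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, mean gain = {mean_gain:.2f}")
```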

The third-party performance ratings and customer satisfaction ratings are still being collected and will provide information regarding the link to the business strategy. While the training program is certainly not the only intervention for improving customer satisfaction, the third-party performance ratings and some customer satisfaction ratings are being gathered throughout the process. The trends in these ratings over time can better indicate a link between the program and customer satisfaction, particularly since the training program is being implemented in different groups at different times.
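
To make the staggered roll-out logic concrete, here is one way such trends might be summarized: in any given quarter, units that have already been trained can be compared with units that have not. The units, column names, and numbers are hypothetical, not the study's data:

```python
# Hypothetical sketch of a staggered roll-out comparison: satisfaction trends for
# units that have versus have not yet received training in each quarter.
import pandas as pd

df = pd.DataFrame({
    "unit":         ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "quarter":      [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "trained":      [False, True, True, False, False, True, False, False, False],
    "satisfaction": [3.2, 3.5, 3.7, 3.1, 3.2, 3.6, 3.0, 3.1, 3.1],
})

# Mean satisfaction by quarter, split by whether the unit had been trained yet.
trend = df.groupby(["quarter", "trained"])["satisfaction"].mean().unstack()
print(trend)
```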

So what are the challenges or barriers to Strategic Evaluation? As with many consultative relationships in our field there are both internal and external evaluation consultants, each facing their own challenges. Though many of these challenges are common to any consultant group, we felt it might be useful to point out some of the unique challenges internal and external evaluators face when implementing Strategic Evaluation.

With a wide range of backgrounds and positions, each of the authors has a unique perspective. Dale S. Rose is president of 3-D Group, a consulting firm that provides external program evaluation services to a wide range of corporate and nonprofit organizations. Jeanne Carsten is vice president of National Consumer Services HR Development at Chase Manhattan Bank and is responsible for building assessment and evaluation structures and processes that enhance employee and organization effectiveness. E. Jane Davidson is a full-time faculty member at Alliant University and has conducted both internal and external evaluations in a wide range of organizations in both the public and the private sector. Jennifer Martineau is a research scientist at the Center for Creative Leadership, where she uses evaluation processes to enhance the implementation of the Center's client-specific initiatives, as well as to fine-tune new CCL programs.

Though all the authors had a considerable amount to say on each of these topics, we agreed to divide them equally. So, after drawing straws (well, it wasn't really that random), we each commented on a range of issues related to Strategic Evaluation.

Fear

Dale. Often, the topic of evaluation is highly emotional and politically charged, as most of us associate the term "evaluate" with having a value placed on our work. We think we're being graded, and deciding who will do the grading is important. Despite the fact that evaluators do not typically approach evaluation in this way, the fear of evaluation is a big issue every evaluator must overcome. The best method we've found for addressing this challenge is to make the evaluation process one of collaborative exploration toward program improvement. This way we're partners with program developers, not program police. Generally, program sponsors and program developers share an interest in making the program work, so we find that evaluation data can be used by all parties to problem solve and find improvements.

Jane. There is something very attractive to both parties about a collaborative approach to evaluation in which the power to draw conclusions and responsibility for the findings rests jointly with the evaluation consultant and organizational members, rather than with the consultant alone. It is less threatening to both sides, and neither party needs to take sole responsibility for the evaluation findings. However, there is also great benefit in complementing such an approach with a less interactive, more independent evaluation.

The external eye gives the organization a number of benefits above and beyond what the collaborative evaluator can add, including: (a) a completely fresh, outside-the-box perspective on its performance, (b) a safer communication channel for the disclosure of sensitive information, and (c) an independent viewpoint (especially useful when the appearance of bias or conflict of interest might be fatal). While the teacher/facilitator role has enormous payoffs for building organizational self-evaluation capacity, as evaluators and I-O practitioners it is our professional responsibility to also instill in our clients the value of periodically using an external eye to inject some diversity of perspective into organizational thinking.

Cost

Jeanne. Executives are expected to account for expenses. Evaluating I-O programs enables the practitioner to speak in terms of expense and also in terms of realized benefits and value to the business. While conducting a strategic evaluation adds expense to the program, the added value of articulating both direct and indirect program costs, and direct and indirect program benefits is significant. This is more than an analysis of return on investment, which may ignore important indirect costs or benefits, and may not fully represent the value of a program that is clearly contributing to important strategic goals. In addition, limited organizational resources may be more effectively allocated by using process information for continuous program improvement and identifying the most impactful programs and activities.

Jennifer. There are direct versus indirect costs to evaluation, depending largely on whether an internal or external evaluator is used. With internal evaluators, most of the costs are indirect: staff time, and so forth. But there must be a champion for the work in order to have staff resources dedicated to it. When the perception is that the evaluation is "free," it is easy to forget that it will require significant time, regardless of whether it is conducted by an internal or an external evaluator. Internal evaluators are best served by providing the costs of their services, benchmarked against the fees that would be paid to an external evaluator, as a way of illustrating the worth of their time.

Feasibility

Dale. The continuum from the ideal study to the easy study is not easy to navigate. The important thing is not to throw the baby out with the bath water. We're taught that the ideal (controlled, random assignment, and so forth) is the standard, but I turn this around. Rather than seeing the baseline as a highly controlled study, I see it as finding some way to improve on the other option: no evaluation at all. In evaluation, feasibility needs to be considered before thinking about what journal editors might think, not the other way around. The thing to keep in mind is that even a simple but systematic analysis of a program is better than the alternative "evaluation": rumor, hearsay, anecdotes, and single-case testimonials.

Jennifer. I agree with Dale. Often, journal editors are not interested in evaluation studies because they are not tightly controlled. However, this is the real life of evaluation: we work with real people in real situations. Control groups are frequently not available, nor is it feasible to constrain the intervention being evaluated to the point that you can cleanly separate factors within and outside the realm of the intervention. The goal of Strategic Evaluation, however, is to design the most appropriate evaluation for the situation, the one that best serves client needs.

Business Benefits

Jeanne. The evaluation process provides the I-O practitioner a framework for aligning programs with business strategy. Clear and direct alignment with business strategy enables the I-O practitioner to verify that the program is moving the business in the appropriate direction. The information gathered during the evaluation process is used to optimize program quality and make program investment decisions. The information also becomes a management tool for executives in navigating a rapidly evolving business environment. The systems framework offered by strategic evaluation can provide executives with a comprehensive view of the business and the factors that impact important strategic objectives. Progress toward outcomes can be tracked over time, along with critical enablers and inhibitors.

Dale. I once heard an executive say of his 360-degree feedback program, "We don't really have a lot of external pressure to justify our program. We're all PhDs, and so the organization mostly believes we do good work." I was struck that this perception of evaluation missed the point. Let's consider a simplified example. A well-conceived 360-degree feedback program may be functioning at a 65% effectiveness level (however measured), which produces some organizational benefits (likely outweighing costs). Because smart and experienced people developed it, this same program may even satisfy its economic buyers, resulting in a lack of internal pressure to modify the program. As I pointed out to this executive, however, a lack of internal pressure for modifications doesn't mean there isn't room for improvement! By using the right data to guide implementation adjustments, the organization may be able to improve the program's performance to perhaps 85%, thus realizing a meaningful gain in terms of employee development. Likewise, it may be that the program produces some benefits, but perhaps not those benefits the organization expected. To some extent, I think we need to ask ourselves if we are in the business of reacting to internal pressures or the business of enhancing workplace behavior.

Initiating Strategic Evaluation Up Front

Jennifer. Have you ever designed and implemented a high-quality I-O intervention, only to find the client isn't satisfied with it? By targeting Strategic Evaluation up front, hand-in-hand with the designer of the initiative and the client (getting the client to the point where they can tell you exactly what they want the intervention to accomplish), the intervention can be designed and developed to meet those expectations. Focusing on Strategic Evaluation at the front end helps cover our rear ends at the back end!

Dale. The biggest mistake we see clients make with regard to evaluation is to treat it as an afterthought. My favorite example happened a few years ago when I got a call in late May with a request to evaluate a training and development program that had been ongoing for the previous 7 months. The program was scheduled to be completed in mid-June, and the program sponsors wanted to see some results. We managed to provide an evaluation that was useful, but let's just say the evaluation of the next cycle (beginning the following September) was a LOT more useful.

Isn't This Just Validation Research?

Jane. Suppose we consider a case that looks like validation research and use it to illustrate how Strategic Evaluation builds on the initial test development and takes us above and beyond the initial development and validation. Surely the only thing one needs to know about a selection system is whether it improves the organization's strike rate, minimizing both false positives and false negatives, right? Wrong! Even with the inclusion of a utility analysis, there are still many more considerations a strategic evaluation would use to enhance the value of the feedback on the system.
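
For readers who have not run across it, the utility analysis mentioned here typically follows the classic Brogden-Cronbach-Gleser formula. The notation below is the standard textbook form, offered only as a reminder, not something specific to this article:

```latex
% Standard Brogden-Cronbach-Gleser selection utility (textbook form, for illustration):
% estimated dollar gain from using the selection system rather than selecting at random.
\Delta U = N_s \, T \, r_{xy} \, SD_y \, \bar{z}_s \; - \; N_a \, C
% N_s  = number of applicants selected      T      = average tenure of selectees (years)
% r_xy = criterion-related validity         SD_y   = SD of job performance in dollars
% \bar{z}_s = mean standardized predictor score of those selected
% N_a  = number of applicants assessed      C      = cost of assessing one applicant
```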

First, the impacts of the improved strike rates would be linked to the bottom line and to other important outcomes of relevance to organizational strategy. Second, there would be a deliberate search for the unintended effects (both positive and negative) of introducing the system (e.g., impacts on workforce diversity, or on the perceived attractiveness of the organization to recruits). Third, the selection methods used would be compared with alternative ways of achieving results of similar or greater value. And fourth, performance on all these dimensions would be combined to yield an overall determination of the merit of the selection system, relative to its alternatives.

Thus, we have moved to the wider evaluation question: To what extent does the selection system cost-effectively support the organization's strategy by producing outcomes of the greatest possible value, given the available resources? Now that is the kind of information decision makers can really put to work!

Jeanne. A validation study focuses on developing an appropriate decision tool for a specific job and context. A Strategic Evaluation study employs a broader systems view. For example, in a selection context the strategic evaluator will begin with the strategy (e.g., hire and retain customer-service oriented staff). Using the strategy as a backdrop, the evaluator views the labor market, recruiting strategy, selection process, training and entry process, and work environment to identify what should be included in the evaluation. The focus in Strategic Evaluation is how the implementation of these various systems works together with the implementation of the validated decision tool to accomplish the strategic goal of hiring and retaining customer-service oriented staff.

Jennifer. The flip side of this argument is also heard: Why do we need someone (i.e., the evaluator) to tell us something we already know? We know that this intervention works; why does it need to be evaluated? Part of the purpose of evaluation is to assess and document the impact of an intervention for someone who won't take your word for it; they will want some sort of evidence showing that it does work. Also, organizations need to understand what is working versus what is not working with the intervention so that it can be replicated and revised for future use. In doing so, it is possible to avoid reinventing the wheel and instead simply make it better.

***

So there you have it: Strategic Evaluation from A to Z. Well, maybe not all the way to Z. Hopefully, however, we have managed to share this new development in our field in enough detail to spark some enthusiasm and further discussion of the topic.

Concluding Thoughts

By adding Strategic Evaluation to our repertoire, I-O psychologists can greatly enhance our role in promoting organizational effectiveness. Whether conducting evaluation from the inside or as an external evaluator, this method for understanding and enhancing program impact can add considerable value to I-O-based interventions. Unfortunately, few of us today seem to be aware of this outstanding tool that can build a clear link from I-O psychology-based programs to the strategic direction of the organizations with which we work. The good news is that many of the techniques for doing this work are already in our skill set, and any we are missing are being developed elsewhere. Measurement theory, statistical techniques, and research design are all core elements of any I-O psychology degree and are critical to Strategic Evaluation. The other good news is that there is a whole body of literature that discusses evaluation-specific methodology and the implementation issues that arise when conducting program evaluation; that literature is the foundation upon which much of this work is based.

If you would like to hear more about Strategic Evaluation, there is an e-mail list for sharing ideas and best practices on the topic: http://acad.cgu.edu/archives/evalsiop.html.

