
Taming the Cyber Tooth Tiger: Our Technology Is Good, but Our Science Is Better

Dale S. Rose, Andrew English, and Christine Thomas
3D Group

Correspondence should be sent to Dale S. Rose, 2000 Powell, Suite 970, Emeryville, CA 94608; drose@3dgroup.net.

Dropped calls. Corrupted files. Spam. Frozen computer screens. Incompatible file formats. Mandatory upgrades to software that works great as is.

If you ever find yourself wondering whether all this impressive technology is worth the fuss, it might help to reflect on an exhibit at SIOP’s 20th anniversary annual conference in 2005. Tacked to a wall near the poster sessions was a complete factor analysis computed entirely by hand from sometime in the 1960s. It took up a space about 10 feet wide by 15 feet tall. We couldn’t tell you what the topic was, but the sheer volume of paper with handwritten tables and text was impressive. Boy, those were the days, right?

Advancements in factor analysis computations illustrate one way in which technology really has made things a whole lot easier. But wait...how many of you have ever read a factor analysis that was done incorrectly or just didn’t make sense? We didn’t look carefully back in 2005, but it is a good bet that the factor analysis on the wall at that conference was well thought out BEFORE the analysis began. Although technology has definitely made our lives as I-O psychologists much easier, we do need to be careful not to let the glamour and the hype overshadow the true value we add as scientists and practitioners.

This may sound like sacrilege to all you iPhone-, iPad-, BlackBerry-, and Facebook-obsessed, technology-loving bloggers, but we would like to introduce a simple premise: When it comes to changing behavior in the workplace, the science behind I-O psychology is still king. Sure, we can automate a factor analysis so that it gets done faster. Certainly this increase in speed allows us to do more analyses, which in turn has the potential to speed up the net acquisition of knowledge in our field. But...how much better are these speedy analyses at helping us understand and predict workplace behavior? How well grounded in science are these click-and-go analyses?
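To make the point concrete, here is a minimal Python sketch of just how click-and-go a modern factor analysis is. The rater-by-item data and the two-factor model are invented for illustration, and that is exactly the worry: nothing in the code asks whether a two-factor model makes theoretical sense.

```python
# A "click-and-go" factor analysis: fabricated survey data, arbitrary
# factor count. The software runs happily either way.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
responses = rng.integers(1, 6, size=(200, 10)).astype(float)  # 200 raters x 10 items, 1-5 scale

fa = FactorAnalysis(n_components=2)  # why two factors? The software will never ask.
fa.fit(responses)
print(np.round(fa.components_, 2))   # item loadings, delivered in milliseconds
```

Ten lines and a few milliseconds replace that 10-by-15-foot wall of paper, but none of them supply the forethought that went into it.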

As much as technology can make our work easier, I-O psychologists need to be careful not to let the tail wag the tiger.

As I-O psychologists, we make a difference because of our deep knowledge of behavioral science. Our value comes from knowing what works best in the workplace, not just what works fastest. Take leadership development, for example. On one hand, the proliferation of online learning tools at your fingertips makes it easy to show nifty videos to leaders and link them to interesting articles. On the other hand, the basic tenets of changing leader behavior remain unchanged: If you don’t get leaders to commit to something specific and to value changing their behavior, and if you don’t hold them accountable for measurable change, then they won’t change! In other words, goal setting is “SMART” because the research shows it is effective, not because it is convenient to access on the Internet.

Impact of Technology on 360-Degree Feedback

Rather than just rage against the machine, it might be more useful to give some detailed examples of areas where technology really has the potential to run amok. Although we could discuss a broad range of specialties across I-O psychology, we will instead focus on 360-degree feedback because it is what we live and breathe at 3D Group. We run into this “cyber tooth” tiger on a daily basis, and we can provide richer detail on 360-degree feedback than on some other specialties.

360-Degree Feedback in a Nutshell
By soliciting observations from multiple raters, 360-degree feedback allows individuals (usually leaders) to understand how their behavior is perceived by others inside or outside their organization. Feedback from these varied sources provides unique insight into the leadership behaviors an organization needs to meet its vision, mission, and goals. Research has demonstrated that, under the right circumstances, 360-degree feedback can be a very effective tool for changing leadership behaviors (Bracken & Rose, in press; Church, Walker, & Brockner, 2002).

The typical 360-degree feedback process involves multiple design decisions, which we will use to illustrate the myriad ways in which technology can be fantastic and/or catastrophic for changing behavior.

Phase 1: Select Survey Content
Technology has provided us the ability to choose among many competency libraries and survey-item banks for quickly customizing a survey for any organization or job. Literally, a custom survey can now be created in less than 10 minutes. It can be a little bit like shopping at Amazon.com: It’s as easy as click and buy, click and buy. Unfortunately, the technology itself cannot provide evidence for the content validity of those survey items (“But they looked so good in my shopping cart!”).
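Under the hood, this shopping-cart assembly amounts to little more than filtering a tagged item bank, as in the toy Python sketch below; the items and competency tags here are entirely hypothetical, and notice that nothing in the code speaks to content validity.

```python
# Hypothetical item bank; a real vendor library would hold thousands of
# items, but the assembly logic is no more sophisticated than this.
ITEM_BANK = {
    "communication": [
        "Speaks clearly and persuasively in meetings.",
        "Delivers engaging presentations.",
    ],
    "decision_making": [
        "Makes timely decisions with incomplete information.",
    ],
}

def build_survey(competencies):
    """Pull every item tagged with the requested competencies.
    Fast and easy, but no check that the items cover the construct
    or are relevant to the job in question."""
    return [item for c in competencies for item in ITEM_BANK.get(c, [])]

print(build_survey(["communication"]))  # a "custom" survey in seconds
```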

For example, let’s look at the competency of communication. Communication is multidimensional, comprising behaviors such as listening, speaking, writing, and presentation skills. You could easily select survey items covering only the speaking subdimension, and suddenly the survey is no longer measuring the construct of communication. And what about job relevance? For some jobs, public presentation might be critical; for others, not relevant at all. The competencies and behaviors a survey measures should be carefully chosen through a systematic analysis of the organization’s needs and/or the job in question. I-O psychologists have known this for decades, and clearly technology cannot make these decisions (nor has it improved our ability to make them wisely).

There are other issues beyond the survey content itself. Although many might believe that survey rating scales are “six of one, half a dozen of the other,” research tells us otherwise. Different contexts call for different rating scales, and the type of rating scale you select will affect 360-degree survey results (English, Rose, & McLellan, 2009). Although survey software can list dozens of options for the rating scale (“step right up, choose your scale, any scale will do”), a computer simply can’t decide which rating scale is best for a particular context or which will yield the most accurate ratings.
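The dropdown of scale choices boils down to something like the following (these label sets are illustrative, not an actual vendor’s list); the software can enumerate the options endlessly, but weighing which one fits the context is a professional judgment.

```python
# Three of the dozens of rating scales survey software will happily offer.
# Choosing among them affects leniency and score distributions; the menu
# itself carries no such knowledge.
RATING_SCALES = {
    "agreement_5pt": ["Strongly disagree", "Disagree", "Neutral",
                      "Agree", "Strongly agree"],
    "frequency_5pt": ["Never", "Rarely", "Sometimes", "Often", "Always"],
    "effectiveness_7pt": [str(i) for i in range(1, 8)],  # numeric, anchored at endpoints only
}

chosen = "frequency_5pt"  # the software lists; a professional selects for context
print(RATING_SCALES[chosen])
```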
 
Phase 2: Rater Selection
The second phase typically consists of selecting the raters who will provide feedback for each 360 participant. In the 1980s, this task was an administrative nightmare: Participants had to distribute paper surveys themselves (via the postal service). In the 1990s, we graduated to mailing around floppy disks, and now e-mail has made things much less messy (well, almost; there is that spam-filter thing). Although technology can expedite rater selection immensely, it can’t stand in for the human judgment at the core of that selection. What technology can’t do is determine how well suited a particular individual is to provide useful ratings for a 360 participant. The database doesn’t know that Jane Doe spends most of her time on cross-functional teams, where peer feedback would be much more helpful than simply selecting her formal peers from the organizational chart. It is critical in the 360 process to select a wide array of raters who are most familiar with the participant’s performance at work; including only people from the formal hierarchy may omit essential feedback for the leader. When critical raters are excluded, the leader receiving feedback will find the data less credible and relevant and will therefore be less motivated to use the information to guide change.
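A toy sketch of the gap (the names and structures are hypothetical): the system can query the org chart in an instant, but the roster of people who actually see Jane’s work lives outside the database.

```python
# What the 360 system's database knows versus what only human judgment knows.
ORG_CHART_PEERS = {"jane.doe": ["pat.lee", "sam.kim"]}             # formal peers
CROSS_FUNCTIONAL_TEAMMATES = {"jane.doe": ["ana.ruiz", "t.wong"]}  # where Jane actually works

def autoselect_raters(participant):
    """The automated pick: formal peers straight from the org chart."""
    return ORG_CHART_PEERS.get(participant, [])

print(autoselect_raters("jane.doe"))           # ['pat.lee', 'sam.kim']
print(CROSS_FUNCTIONAL_TEAMMATES["jane.doe"])  # the raters a thoughtful human would add
```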

Phase 3: Survey Completion
Obviously, the advent of online surveys has greatly increased the efficiency of the survey completion phase for both participants and their raters. Completing and submitting survey responses takes less of raters’ time and is more convenient. If asked to provide ratings for multiple individuals, raters can log into a dashboard where they can view a record of all their activities and keep track of how many surveys they have completed. The biggest problem we see with technology in this phase is the latest trend that moves beyond automating the delivery of feedback to automating the feedback itself. Many systems now incorporate feedback wizards to “assist raters in providing feedback.” Yep, you read that right! Raters no longer even need to write their own comments. They can quickly select a comment from a generic library and leave it for the participant as their own feedback. You are the professionals here, so you tell us: Is it more motivating to read a “comment” that you know was just a multiple-choice option, or to know that your coworker actually wrote the words “you are a rock star of outstanding customer service”? Besides, isn’t the point of 360-degree feedback to gather a wide array of perspectives on an individual’s job performance? What happens to a leader’s motivation when two (or three, or six) people pick the same canned comment?

Another example is a technology used by some software firms that allows raters to provide feedback to multiple participants at the same time, item by item (e.g., I rate Lisa, Richard, and Arthur on Item #1 before moving on to Item #2). Although this may make things easier and faster for raters, there is no clear understanding of how it might affect the measurement characteristics of the survey. For a more detailed discussion of this option in 360-degree feedback survey completion, see David Bracken’s blog (Bracken, 2010).

Phase 4: Report Production
Clearly, technology has been a major factor in reporting. The data aggregation and computation of scores can be completely automated now. No need for a calculator (Does anyone still own a calculator?) or complicated Excel sheets anymore. Database software has made it possible with the click of a button to produce a beautifully formatted, highly accurate feedback report. 
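For a sense of how thoroughly this step has been automated, here is a minimal pandas sketch with made-up ratings; one groupby now does what once required a calculator and a stack of spreadsheets.

```python
# Averaging ratings by item and rater group -- the computational heart
# of a 360 report -- on a tiny fabricated data set.
import pandas as pd

ratings = pd.DataFrame({
    "item":        ["Listens well"] * 4 + ["Sets clear goals"] * 4,
    "rater_group": ["Peer", "Peer", "Direct Report", "Manager"] * 2,
    "score":       [4, 5, 3, 4, 2, 3, 4, 3],
})

report = ratings.groupby(["item", "rater_group"])["score"].mean().unstack()
print(report.round(2))  # a formatted score table, instantly and accurately
```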

Unfortunately, in some cases, technology has encouraged what we refer to as “analysis on steroids” (forgive the metaphor, but we are in the Bay Area, where stories about sports and steroids have been all too familiar recently). There is an endless array of bells and whistles available today to customize your final reports. We’ve seen more “data rich” methods for presenting results than we can count. Like overly rich food (Bay Area, still...), too much of a good thing often doesn’t sit well. One of our favorite examples is a report that exceeds 100 pages! We know that most I-O psychologists eat this kind of thing up, but remember: The typical leader does not spend his or her evenings perusing the works of Edward Tufte.

The 360-degree feedback experience can be daunting for even the most confident leader. Imagine: You have data coming to you from everyone who knows you at work, and they can say anything they choose. It is easy to feel overwhelmed even before you open your report. A 100-page feedback report with four different types of graphs, a legend with six options, and multiple rater and score-distribution tables is no way to help leaders become self-aware so they can change their behavior. 360-degree feedback data should be presented in an easy-to-interpret, meaningful manner that helps the leader accept the results; this is critical to an effective 360 program. So although technology enables us to quickly slice and dice the data to infinity with complete accuracy and precision, we must remember that the purpose is to help leaders gain insight about their behavior, not to impress them with our ability to analyze data. Whereas a programmer can generate zillions of fancy charts, graphs, and analyses, it takes a professional to know what data will best help a leader to change.

Phase 5: Feedback Delivery
Technology can make 360-degree feedback reports accessible within minutes of their completion. Again, this automation is great from an efficiency standpoint, but let’s not allow that capability to drive the process. The timing for releasing final reports to each participant is an important decision point that deserves careful consideration. If participants are scheduled to receive coaching 2 weeks after they complete their survey, it might be detrimental to release their reports 2 weeks beforehand. Why? Well, when leaders read that they scored in the bottom 20% compared with their peers (or worse, a national sample), they might actually need some help dealing with it. It is a good idea to have an expert available within a couple of days after they get their report, and unless you have hundreds of feedback facilitators sitting by the phone waiting for a call, you will need to schedule report delivery for when the feedback facilitators are available.
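The scheduling rule itself is trivial to automate once a professional has decided what it should be; here is a sketch in which the 3-day lead time is our illustrative choice, not a research-backed constant.

```python
# Release each report only a few days before the facilitated debrief,
# rather than the moment the last survey is submitted.
from datetime import date, timedelta

def report_release_date(coaching_date: date, lead_days: int = 3) -> date:
    """Hold the report until shortly before a facilitator is available."""
    return coaching_date - timedelta(days=lead_days)

print(report_release_date(date(2011, 6, 20)))  # 2011-06-17
```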

In addition, to whom the reports should be delivered is an important consideration. Will human resources have access to the reports? Will the reports go to the manager? Gee, we could just e-mail them to all the direct reports at the same time they go to the leader (this is a great example of something a programmer might recommend that is so obviously flawed from our perspective it is hard to even imagine someone suggesting it).

Phase 6: Developmental Resources (Postfeedback Delivery)
Although we are always happy to hear that postfeedback developmental resources are being considered in any context, technology has affected even this phase of the 360 process. On-demand talent management and leadership development tools are now available, offering a wealth of resources for improving one’s performance. Although these tools can be beneficial, there are considerations that technology cannot address. One of the biggest is accountability. It is one thing to provide feedback recipients with access to online leadership development resources; it is entirely different to build in the accountability that ensures they actually use those tools. A leader’s boss needs to be involved to provide timely feedback on how, and whether, behaviors are changing as planned. Getting individuals to use the online development tools in the first place is critical, but it is even more critical that they be held accountable for using the tools to create and sustain change.

Other Examples

The application of technology in I-O psychology can be seen across many areas beyond 360-degree feedback. A second example of how technology has reshaped our field is testing and assessment, where technology has helped us in leaps and bounds. Most assessment providers now offer similarly sophisticated platforms for handling test delivery, data collection, scoring, and online reporting. Although it might appear that I-O knowledge is becoming embedded in these technology systems, there are trade-offs to consider. For instance, online assessment platforms have made test security more complex: Not only are we concerned about test items being made public, but how can we be certain that the individual on the other end of the computer is actually who he or she claims to be?

Technology has also radically changed the implementation of employee surveys (satisfaction, opinion, engagement, etc.). Before the Internet, employees would be required to gather in large rooms at scheduled times during the workday, and we would hand out paper surveys to everyone. When finished, each employee dropped his or her survey into a large box that was taken off site by the outside firm for tabulation. We frequently saw response rates in the 90% range under this type of administration. Online survey administration has certainly sped up the process, but we now have to deal with lower response rates and less trust in the confidentiality of the process. Trust becomes a bigger issue with online administration because, instead of tossing their unidentifiable surveys into a large box with hundreds of others, employees now get an e-mail directly. This is more convenient because they can complete the survey at 3 a.m. on a Sunday (a shocking number of employees do, by the way), but they may not feel quite as anonymous as they once did. This concern further highlights the importance of our profession: As professionals, we are bound by our ethics not to divulge confidential data. In contrast, the software firms and programmers that design flashy widgets and survey gizmos are far more agnostic about protecting survey data.

Another example is how technology has shaped public opinion polling, where person-to-person interviews and phone polling are becoming obsolete. Technology has enabled us to collect and compile public opinion data more quickly and at a fraction of the cost of traditional telephone surveys. However, the individuals who volunteer to participate in online polls sometimes hold very different attitudes than the general public (The Pew Research Center, 1999).

Taking all of these examples together, it seems clear that whereas technology does open up many options, the true value I-O psychologists bring to organizations is our ability to choose among those options wisely based on our science. Although we “could” produce that 360 report with every bell and whistle available, “should” we?

Conclusion

Let’s remember the value we bring to organizations as I-O psychologists is grounded in science, not technology: We mustn’t let the tail wag the tiger. Technology knows nothing about theories of motivation, job satisfaction, leadership, or personnel selection. No amount of technology can design a job to be more intrinsically rewarding. And although technology provides immensely helpful tools for documenting and tracking our goals, it won’t help us determine what types of goals to set for ourselves or our organization. Of course, when SPSS releases its first iPad application for factor analysis, we’ll be the first to download it!

References

Bracken, D. W. (2010, October 25). Another angle on 360: Silly survey formats? [Weblog post]. Retrieved from http://dwbracken.wordpress.com/2010/10/25/silly-survey-formats/#comment-37
Bracken, D. W., & Rose, D. S. (in press). When does 360-degree feedback create behavior change? And how would we know it when it does? Journal of Business and Psychology.
Church, A. H., Walker, A. G., & Brockner, J. (2002). Multisource feedback for organization development and change. In A. H. Church & J. Waclawski (Eds.), Organization development: A data-driven approach to organizational change (pp. 27–51). San Francisco, CA: Jossey-Bass.
English, A., Rose, D. S., & McLellan, J. (2009, April). Rating scale label effects on leniency bias in 360-degree feedback. Paper presented at the 24th Annual Conference of the Society for Industrial and Organizational Psychology, New Orleans, LA.
The Pew Research Center for the People and the Press. (1999, January 27). Online polling offers mixed results. Retrieved from http://people-press.org/commentary/?analysisid=20