
TIPTopics for Students: Beyond Psychometrics: Building Better Surveys

Andrew Tenbrink, Mallory Smith, Georgia LaMarre, Laura Pineault, and Tyleen Lopez, Wayne State University

The world is overrun with surveys. Today, you can barely walk out the door without hearing the “ping” of your phone asking you to give feedback about something. How clean was this airport bathroom? Did you enjoy your shopping experience today? Fill out this 5-minute survey for your chance to win a $50 gift card! For social science graduate students, especially in I-O psychology where self-report surveys dominate research and practice, collecting behavioral data online is a mainstay. Although we acknowledge recent arguments that survey research is overused (Einola & Alvesson, 2020; Lonati et al., 2018), surveys do not appear to be going anywhere any time soon, especially as the events of the past year have pushed data collection toward more socially distanced, online formats (Wood, 2021).

Although some worry about the potential negative effects of survey overabundance, we instead choose to frame the phenomenon of survey proliferation as a fantastic opportunity for the field of I-O psychology. It is time for us to put more thought into how we build surveys and to ask some important and somewhat neglected questions. For example: If surveys are becoming more and more common, what can we do to stand out? If participants are becoming increasingly hesitant to participate in surveys and disengaged when they do, how can we encourage and motivate them? As future academics and practitioners, we should be thinking seriously about how to answer these questions. If surveys continue to be a focal research methodology, we should strive for them to be great.

Why Should Graduate Students Care?

In our experience, I-O graduate students receive far more training on psychometrics and content development than on actual survey creation and implementation. This can leave students feeling like they’re stumbling around in the dark when navigating the complexities of survey software, design elements, and user experience. The result is a survey-building process that is far more focused on what goes into a survey than on how it looks and functions. We argue that a focus on both content and construction is essential for the advancement of our science, not only for cosmetic reasons but because survey design has real implications for the quality of the data we collect (see the interventions listed in GESIS’ survey methods evidence map). More broadly, training in survey construction fits into the agenda for graduate students to prioritize their technological self-efficacy and computer science skills to compete in the modern realm of research (e.g., Cornelius et al., 2020).

What Can You Expect From This Article?

The purpose of this article is to shed light on aspects of survey design and construction that we feel have been neglected and undervalued in both I-O training and research. As the future of survey research is trending to be more frequent and more online, we call on I-O graduate students to incorporate these evidence-based practices into their survey design process. To achieve these goals, we break down our discussion of survey design into four broad sections: (a) user experience, (b) careless responding, (c) fraud detection, and (d) administration logistics.

A. User Experience

The easiest aspect of a survey to neglect is how participants perceive its look and feel. Publishing outlets provide strong incentives for us to select, document, and justify the content of our survey stimuli (e.g., valid measures). However, we are held less accountable, if at all, for how that content is presented. Whether we realize it or not, survey design decisions affect the nature and quality of participant responses (Callegaro et al., 2015, p. 98). Therefore, we should take the necessary steps to leverage careful survey design in ways that facilitate a more positive experience for users.

Thinking about the participant.

Unless you’ve done applied work focusing on product design, you may not be familiar with the acronym “UX.” UX refers to user experience, a term describing the look and feel of a product, a solution, or, in our context, a survey. In a recent SIOP panel discussion titled “UX for I-O: Key Principles for Employee User Experience in I-O Tools,” Muriel Clauson, Dr. Karl Kuhnert, Dr. Young-Jae Kim, and Rumaisa Mighal discussed techniques for being more user focused when creating surveys and other tools. Among the basic principles of UX that they discussed, “focusing on real users,” “reducing cognitive load,” and “making products accessible” stand out as particularly relevant for survey design in I-O psychology. Simply put, UX involves developing an understanding of users and leveraging that knowledge when making design decisions. A great first step toward creating a positive user experience is to set aside time during survey development to think about who will be participating and how you can better craft your survey to meet their needs.

Attitudes toward surveys.

A substantial proportion of respondents are known to quit prematurely or invest little effort when responding to long, meandering, and seemingly redundant surveys (Callegaro et al., 2015, p. 101). Research suggests that making surveys more enjoyable can lead to positive outcomes for respondents and researchers. In developing a measure of survey attitudes (e.g., “I enjoy filling out surveys”), Rogelberg et al. (2001) found that survey enjoyment influenced participant behavior, including item-response rates, following directions, timeliness of survey completion, and willingness to participate in additional survey research. For those conducting web surveys, these findings should serve as a strong incentive to make surveys more enjoyable, not only to benefit participants but also to facilitate the collection of high-quality data. Beyond the effect that positive attitudes have on our ability to reach our goals as researchers, Rogelberg and colleagues also point out that improving participant attitudes toward surveys is an ethical imperative. Because of our heavy reliance on surveys in conducting human research, “it is our responsibility to work to improve attitudes towards surveys” (p. 5).

Unfortunately, we do not have a detailed understanding of how to make surveys more enjoyable. Thompson and Surface (2009) provide some guidance in this respect, finding that participants rated surveys as more useful when they were provided with feedback regarding their responses. Although informative, personalized feedback is only one UX design element, and we are left asking: What other design elements are associated with a positive user experience? How does a positive survey experience predict the propensity to participate in future surveys? How malleable are global attitudes about surveys? Can a single “poor” experience trigger prolonged nonresponse to future survey invitations? Can a single “amazing” survey experience restore positive global attitudes about surveys? Future research in these areas will enable researchers and practitioners to take a more intentional approach to building surveys that participants enjoy.

Survey length.

We all know the frustration of investing upward of 30 precious minutes to complete a so-called “short” survey. In addition to boring participants, survey length can also impact data quality. As participants become exhausted or cognitively drained, they are likely to begin responding differently to the stimuli presented to them (e.g., Tourangeau, 2017). This contributes to an overall negative survey experience and is counterproductive to the goals of the researcher. Gibson and Bowling (2019) found support for this hypothesis in online research: Longer surveys had a higher incidence of careless responses than shorter ones. In a related study, Bowling et al. (2020) found that careless responding rates increased exponentially as the number of items increased. For example, they estimated that for an online survey with 117 items, careless responding would occur 10% of the time, compared to just 1% of the time for an online survey with 33 items.

Although some have come to accept long surveys as a necessary evil, there are things we can actively do to shorten surveys. One straightforward solution is to explore the use of abbreviated measures or single-item indicators (Furnham, 2008). Initial work also speaks to the potential benefits of splitting long questionnaires into shorter subquestionnaires (Andreadis & Kartsounidou, 2020). Beyond directly reducing length, other techniques such as warnings and in-person proctoring show promise in mitigating the negative effects of survey length (Bowling et al., 2020). Overall, taking steps to streamline survey content is a practical strategy to enhance the quantity and quality of surveys returned.

Mobile surveys.

Researchers and practitioners are beginning to transition traditional desktop surveys to function in mobile environments, and for good reason. A report by the Pew Research Center states that 85% of U.S. adults own a smartphone, with 15% of U.S. adults being “smartphone-only” Internet users. These numbers increase when we look at younger individuals, who are commonly targeted for research in the social sciences (e.g., undergraduates). Among individuals aged 18–29 in the U.S., 96% own a smartphone, and 28% are “smartphone-only” Internet users. Given that so many Americans rely on a mobile device for Internet access, it is important to design surveys for optimal functioning in these environments.

A great first step for researchers is to make sure that surveys are mobile optimized, meaning they can be accessed easily on mobile devices without differences or disruptions in content relative to desktop access. As the field gains experience with mobile surveys, researchers are encouraged to create mobile-first surveys (Grelle & Gutierrez, 2019), developed specifically for completion on mobile devices, and to adapt existing assessments for a more user-friendly experience on mobile devices. For example, Weidner and Landers (2019) adapted traditional personality measures to use swipe-based responses, commonly seen in popular dating apps.

B. Careless Responding

One notorious validity threat when using Internet-based surveys is careless responding (CR). CR occurs when participants, regardless of their intention, respond to surveys in a manner that does not reflect their true scores (e.g., Meade & Craig, 2012). CR can distort results and weaken conclusions via psychometric problems (e.g., Arias et al., 2020). We therefore advise researchers to take action to detect and prevent CR, which can be achieved through intentional survey design.

Preventing CR through living surveys and immediate, personalized feedback.

One emerging CR prevention mechanism that holds significant promise is the living survey, which exploits the interactive capability of web surveys. In living surveys, respondents receive an immediate, personalized pop-up notification or prompt to verify their response when they incorrectly answer a quality-check item, provide a nonsensical open-ended response, or answer a question or page in an unrealistically short amount of time. Preliminary evidence suggests this strategy effectively reduces CR behaviors such as speeding, straightlining, and inaccuracy (e.g., Conrad et al., 2017). Living surveys have the potential to evoke compliance through various social power strategies (Gibson, 2019) despite the absence of a human proctor. Conrad et al. (2017) demonstrated the viability of this approach via a series of experiments. Participants who responded faster than a realistic response time threshold on any given item were immediately shown a message encouraging them to answer carefully and take their time. This intervention reduced speeding and straightlining following the prompt and increased response accuracy on a later simple arithmetic question (Zhang & Conrad, 2016). It is worth noting that the living survey approach runs the risk of producing socially desirable responding, as respondents feel monitored by an “artificially humanized” interaction with the survey system (Conrad et al., 2017; Zhang & Conrad, 2016).
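To make the mechanics concrete, here is a minimal console-based sketch of the speeding prompt described above. The 2-second threshold and item wording are our own assumptions, and in practice this logic would be configured through a web survey platform rather than coded by hand.

```python
import time

# Console-based sketch of a "living survey" speeding prompt.
# The 2-second threshold and item wording are illustrative assumptions,
# not values from Conrad et al. (2017).
SPEED_THRESHOLD_SECONDS = 2.0

ITEMS = [
    "I enjoy filling out surveys. (1 = strongly disagree, 5 = strongly agree)",
    "Most surveys are too long. (1 = strongly disagree, 5 = strongly agree)",
]

def ask(item):
    """Present one item and return the response plus elapsed seconds."""
    start = time.monotonic()
    response = input(f"{item}\n> ")
    return response, time.monotonic() - start

def living_survey(items):
    responses = []
    for item in items:
        response, elapsed = ask(item)
        if elapsed < SPEED_THRESHOLD_SECONDS:
            # The "living" part: an immediate, personalized prompt, analogous
            # to a web pop-up, asking the respondent to slow down.
            print("You answered quite quickly. Please take your time and read "
                  "each question carefully.")
            response, _ = ask(item)  # offer one chance to reconsider
        responses.append(response)
    return responses

if __name__ == "__main__":
    print(living_survey(ITEMS))
```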

We argue that the benefits of immediate feedback in surveys outweigh the risks, serving as a valuable intervention for establishing data quality. Even respondents who are genuinely motivated to provide accurate data may accidentally provide nonsensical responses (e.g., mistakenly entering 99 instead of 9). There are mechanisms to create a survey experience that gives respondents, careless or not, the opportunity to correct or reconsider their responses. Take, for example, a question from the Pittsburgh Sleep Quality Index (PSQI) that asks participants to indicate how many hours they slept each night over the past week. Beyond setting the validation for this question to a numeric response with 1 to 2 digits, you can go one step further to ensure the quality and accuracy of your data by creating an “oops” loop within your survey that notifies participants when their response exceeds a realistic number of hours and asks them to verify it (see demonstration here by entering 16 hours per night).
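The logic behind such an “oops” loop is simple enough to sketch directly. In the example below, the 12-hour plausibility cutoff is an assumption for illustration; in practice, the same behavior is configured through your survey platform’s validation and display-logic features.

```python
# Console-based sketch of an "oops" loop for the hours-slept item.
# The 12-hour plausibility cutoff is an assumption for illustration; survey
# platforms implement the same behavior with built-in validation rules.

def collect_hours_slept(max_plausible: float = 12.0) -> float:
    while True:
        raw = input("On average, how many hours did you sleep per night last week?\n> ")
        try:
            hours = float(raw)
        except ValueError:
            print("Please enter a number (e.g., 7.5).")
            continue
        if not 0 <= hours <= 24:
            print("Hours slept must be between 0 and 24. Please try again.")
            continue
        if hours > max_plausible:
            # The "oops" loop: flag an implausible value and ask the respondent to verify.
            confirm = input(f"You entered {hours} hours per night. Is that correct? (y/n)\n> ")
            if confirm.strip().lower() != "y":
                continue  # let the respondent correct the value
        return hours

if __name__ == "__main__":
    print(collect_hours_slept())
```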

Detecting careless responding.

Despite the advantages of building interactive web surveys, it is not common practice for researchers to prevent CR using the interventions described above. Instead, researchers seem to prefer more passive approaches to detecting CR. For example, researchers include quality check items, strongly worded instructions, or embedded data fields that capture paradata such as response times per page, total response times, mouse movements, and IP addresses (e.g., DeSimone & Harms, 2018; Huang et al., 2012; Niessen et al., 2016). Researchers then use these indices and analytical tools (e.g., the careless package in R; Yentes, 2021) to identify and screen for CR.
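As a rough illustration of what this kind of screening can look like once responses are exported, the sketch below flags implausibly fast completions and long runs of identical answers. The column names and thresholds are hypothetical, and the careless package in R offers validated implementations of these and other indices.

```python
import pandas as pd

def longest_run(row) -> int:
    """Length of the longest run of identical consecutive responses (a longstring index)."""
    values = list(row)
    longest = current = 1
    for prev, curr in zip(values, values[1:]):
        current = current + 1 if curr == prev else 1
        longest = max(longest, current)
    return longest

def flag_careless(df: pd.DataFrame, item_cols, min_seconds=120, max_run=8) -> pd.DataFrame:
    """Add CR screening flags; thresholds are illustrative assumptions, not recommendations."""
    out = df.copy()
    out["flag_speed"] = out["total_seconds"] < min_seconds          # implausibly fast completion
    out["longstring"] = out[item_cols].apply(longest_run, axis=1)   # straightlining indicator
    out["flag_straightline"] = out["longstring"] >= max_run
    return out

if __name__ == "__main__":
    # Toy data: respondent A straightlines and speeds; respondent B looks fine.
    toy = pd.DataFrame({"total_seconds": [95, 640],
                        "q1": [3, 2], "q2": [3, 4], "q3": [3, 1], "q4": [3, 5]})
    print(flag_careless(toy, ["q1", "q2", "q3", "q4"], max_run=4))
```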

From our collective experience creating and taking surveys for psychological research, the most common method for detecting careless respondents is to use quality check items, which come in the form of either bogus items (e.g., “I am paid biweekly by leprechauns”) or instructed-response items (e.g., “Please select the circle under ‘neutral’”). Although using a validated bogus item scale (e.g., Hargittai, 2008; Huang et al., 2014; Meade & Craig, 2012) in its entirety can be appealing for its novelty and humor, doing so comes with the risk of producing false-positive careless responders (Curran & Hauser, 2019). For example, Curran and Hauser (2019) found that one respondent answered yes to the bogus item “I eat cement” because they remembered there was cement in their braces and reasoned they must have eaten some. Considering these disadvantages, we instead encourage selectively using (a) simple known-truth items (e.g., “Trees are a source of wood”) or (b) instructed-response items as quality check items.
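Scoring instructed-response items after data collection is straightforward, as in the brief sketch below; the item names and keyed answers are hypothetical.

```python
# Keyed answers for instructed-response items (hypothetical item names and keys).
ATTENTION_KEYS = {"att_1": 3, "att_2": 5}  # e.g., "Please select 'neutral'" is keyed 3

def failed_checks(response: dict) -> int:
    """Count the instructed-response items a respondent answered incorrectly."""
    return sum(response.get(item) != key for item, key in ATTENTION_KEYS.items())

# A respondent who fails one or more checks can be flagged for closer review
# alongside other CR indices rather than excluded automatically.
print(failed_checks({"att_1": 3, "att_2": 2}))  # prints 1
```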

An in-depth discussion of CR indices and of methods for addressing CR once detected goes beyond the scope of this article. Interested readers may consult Arthur et al.’s (2021) coverage of this topic, along with others (e.g., Curran, 2016; Denison & Wiernik, 2020). Of note, the treatment of careless respondents is debated in the literature, with some arguing for their removal and others against it (see Porter et al., 2019).

C. Fraud Detection

With online surveys comes limited and more distant interaction with participants. This reality creates opportunities for unwanted participants to gain access to your survey. Luckily, there are design features that you can implement to fight against these intrusions and ensure that you are only analyzing responses that are relevant for your purposes.

Verifying the identities of respondents.

Bernerth et al. (2021) argue that the data-cleaning and quality-checking methodologies currently in use (e.g., instructed-response items, similarly worded items, and time spent on the study) may be insufficient for detecting problematic web-based respondents. Even if participants respond carefully, this does not guarantee that they are part of the target sample, which limits the utility of their high-quality responses. Particularly when compensation is offered as an incentive, unsolicited respondents may gain access and make every effort to “pass” the necessary check boxes to receive compensation (Hauser et al., 2019). To overcome the limitations of relying solely on CR indicators, Bernerth et al. propose a novel approach: using information obtained from Internet protocol (IP) addresses to detect web-based participants who may need to be excluded from a study due to false identities (see Figure 1, Bernerth et al., 2021).
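As a simplified illustration of this idea, and not Bernerth et al.’s exact procedure, the sketch below flags duplicate IP addresses and IPs that geolocate outside the target region. The column names and the U.S.-only criterion are assumptions, and the IP-to-location field would come from a separate geolocation lookup.

```python
import pandas as pd

def flag_suspicious_ips(df: pd.DataFrame, allowed_countries=("US",)) -> pd.DataFrame:
    """Add simple IP-based screening flags; criteria are illustrative assumptions."""
    out = df.copy()
    # Multiple submissions from one IP address may indicate duplicate or fraudulent responses.
    out["flag_duplicate_ip"] = out["ip_address"].duplicated(keep=False)
    # Responses geolocating outside the target region warrant manual review, not automatic removal.
    out["flag_out_of_region"] = ~out["ip_country"].isin(allowed_countries)
    return out

if __name__ == "__main__":
    toy = pd.DataFrame({"ip_address": ["10.0.0.1", "10.0.0.1", "172.16.2.9"],
                        "ip_country": ["US", "US", "DE"]})
    print(flag_suspicious_ips(toy))
```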

Bots and fraudulent data.

Our web surveys are increasingly preyed upon by web bots, whether data are sourced from social media, MTurk, Prolific, or other crowdsourcing or panel sources. Aguinis et al. (2020a, 2020b) offer recommendations for preventing web bots from infiltrating your web surveys, including having respondents complete an informed consent form with “CAPTCHA” verification or using a separate intake survey to prescreen for inclusion criteria before distributing your primary research survey.

D. Administration Logistics

Once survey construction is complete, researchers still face the task of administering their survey to the target population. Much like the rest of the survey design process, steps can be taken to ensure that administration maximizes the likelihood that a representative sample of participants completes the survey within a theoretically meaningful timeframe.

Use automation to save time, redirect energy, and advance goals.

Demerouti (2020) calls on us to “turn digitalization and automation into a job resource.” One salient way to do this in your research process is to take advantage of the automation functionalities in your survey software. These functions not only save you time but also remove human error that may threaten the rigor, reproducibility, and validity of your research findings. For example, you can set automatic reminders for participants to take the survey. Reminders are most effective when sent 3 or 4 days after the initial invitation (Callegaro et al., 2015, p. 152). If you are conducting a longitudinal survey, consider collecting respondents’ email addresses and creating an action that sends them a personalized invitation to participate in the subsequent survey(s) at the exact time interval prescribed by your research design.
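If your platform does not schedule these steps for you, even a small script can keep the timing consistent across waves. The sketch below generates invitation and reminder dates for a multiwave design; the 3-day reminder lag follows Callegaro et al. (2015), whereas the 2-week wave interval and launch date are assumed examples.

```python
from datetime import date, timedelta

def build_schedule(launch: date, n_waves: int = 3,
                   wave_interval_days: int = 14, reminder_lag_days: int = 3):
    """Return (invitation date, reminder date) pairs for each survey wave."""
    schedule = []
    for wave in range(n_waves):
        invite = launch + timedelta(days=wave * wave_interval_days)
        reminder = invite + timedelta(days=reminder_lag_days)
        schedule.append((invite, reminder))
    return schedule

if __name__ == "__main__":
    # Example: a three-wave study launching on a Monday.
    for invite, reminder in build_schedule(date(2021, 5, 3)):
        print(f"Invite: {invite}  Reminder: {reminder}")
```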

Timing matters.

The timing of soliciting survey respondents has an impact on response rates, with the general advice being to invite potential participants when they are “not too busy” (Callegaro et al., 2015, p. 152). As organizational scholars, we have additional factors to consider when timing the release of a survey, given known fluctuations in employee attitudes, behaviors, and cognitions across days of the week (e.g., Pindek et al., 2020). Intentionally distributing surveys on a certain date and fielding them for a set duration (e.g., 5 days) increases the likelihood that respondents will share a common frame of reference when responding. For example, if you send out a survey on a Sunday to employees working a “traditional” schedule (9 to 5, Monday through Friday), will questions asking about “today” elicit the same responses as they would from an employee answering the same questions on a Monday?

Other aspects of a respondent’s context or situation may introduce measurement error. Respondents who indicate their employment status as “Employed full-time (30 hours or more per week)” may not have worked a conventional schedule in the week or month prior to the survey. In our own research during COVID-19, we have paid particular attention to this possibility given the precarity of employment and work hours (Collins et al., 2020). We attempted to detect fluctuations in respondents’ work or life circumstances through targeted questions such as “What was the main reason you were absent from work LAST WEEK?”, matched to the recall window used to frame our psychological measures (e.g., “In the past week” vs. “In the past month”). This decision mirrors an argument for more widespread inclusion and consideration of major life events in I-O survey research, which, despite being low base rate events (Bakker et al., 2019), can systematically distort or explain meaningful variability in responses.

Conclusion

We ought to invest time, energy, and effort into carefully crafting surveys that respondents enjoy. As we have established, intentional survey design is beneficial for researchers, practitioners, and participants. The process of turning a web survey from an idea into a deliverable product involves several hundred decisions. In this article, we did not provide a comprehensive road map to guide each of these decisions, knowing that they are driven in part by budget, use case, and available software. Instead, we called attention to four neglected areas of survey design: (a) user experience, (b) careless responding, (c) fraud detection, and (d) administration logistics. These areas hold tremendous promise for moving I-O web surveys from good to great.

Many scholars driving research on the science of web surveys are housed in non-I-O programs (e.g., the University of Maryland’s Joint Program in Survey Methodology, the Michigan Program in Survey and Data Science). For thorough reviews of methodological research on web surveys, we recommend adding “The Science of Web Surveys” to your library, bookmarking the evidence-based survey methodology resources available from GESIS, and expanding the academic journals on your watch lists (Field Methods, Social Science Computer Review, Survey Research Methods, Public Opinion Quarterly). Implementing evidence-based survey design principles holds promise for improving the quality, reach, and replicability of our science. To reap these benefits, we must actively consume web survey methodology research published outside of our typical journals, commit to implementing the evidence-based practices documented in those texts, and be incentivized to continue doing so through our academic publishing processes.

 




Andrew Tenbrink is a 5th-year PhD student in I-O psychology. He received his BS in Psychology from Kansas State University. His research interests include selection, assessment, and performance management, with a specific focus on factors affecting the performance appraisal process. Currently, Andrew has a 1-year assistantship working as a quantitative methods consultant in the Department of Psychology’s Research Design and Analysis Unit at Wayne State University. Andrew is expected to graduate in the summer of 2021. After earning his PhD, he would like to pursue a career in academia. andrewtenbrink@wayne.edu | @AndrewPTenbrink

Mallory Smith completed her Master of Arts in I-O Psychology in the spring of 2020. Prior to graduate school, she earned her BA in Psychology and German from Wayne State University. Her interests include factors influencing employee attitudes, efficacy, and perceptions of justice during organizational change. After graduation, Mallory started a new job in the healthcare industry, leveraging both her I-O skillset and background in information technology to support digital transformation, enhance work processes, and encourage employee adoption of new innovations. smithy@wayne.edu | @mallorycsmith 

Georgia LaMarre is a 4th-year PhD student in I-O psychology. She completed her undergraduate education at the University of Waterloo before moving over the border to live in Michigan. Georgia is currently working as an organizational development intern at a consulting firm while pursuing research interests in team decision making, workplace identity, and paramilitary organizational culture. After graduate school, she hopes to apply her I-O knowledge to help solve problems in public-sector organizations. georgia.lamarre@wayne.edu

Laura Pineault is a 5th-year PhD candidate in I-O psychology. Her research interests lie at the intersection of leadership and work–life organizational culture, with emphasis on the impact of work–life organizational practices on the leadership success of women. Laura graduated with Distinction from the Honours Behaviour, Cognition and Neuroscience program at the University of Windsor in June 2016. Currently, Laura serves as the primary graduate research assistant for a NSF RAPID grant (Work, Family, and Social Well-Being Among Couples in the Context of COVID-19; NSF #2031726) and is a quantitative methods consultant for the Department of Psychology’s Research Design and Analysis Unit at Wayne State University. Laura is expected to graduate in the spring of 2021. laura.pineault@wayne.edu | @LPineault

Tyleen Lopez is a 3rd-year PhD student in I-O psychology. She received her BA in Psychology from St. John’s University in Queens, New York. Her research interests include diversity/inclusion, leadership, and well-being in the workplace. Tyleen is currently a graduate research assistant and lab manager for Dr. Lars U. Johnson’s LeadWell Research lab at Wayne State University. Tyleen is expected to graduate in the spring of 2023. After earning her PhD, she would like to pursue a career in academia. tyleen.lopez@wayne.edu | @tyleenlopez

Molly Christophersen is pursuing a Master of Arts in I-O Psychology. She earned her BA in Sociology from Michigan State University in 2016. Her interests include workforce training and employee development. After graduate school, she has her sights set on an applied career in the private sector—ideally in a role where she can help businesses train and develop their employees, effectively helping individuals to grow within their organization. mollychristophersen@wayne.edu | @molly_kate32
