
2018 Master Tutorials

The 15 Master Tutorials are sponsored by the Society for Industrial and Organizational Psychology, Inc. (SIOP) and presented as part of the 33rd Annual Conference.

These sessions are designed to appeal to practitioners and academics at a post-graduate level. There is no additional cost to attend any Master Tutorial beyond the cost of basic conference registration. There are no known conflicts of interest or commercial support regarding these sessions and their presenters.

SIOP is approved by the American Psychological Association to sponsor continuing education for psychologists. SIOP maintains responsibility for this program and its content.


Machine Learning in an I-O’s World: Putting Algorithms in Context

Thursday, April 19, 2018
10:30 AM - 11:50 AM
Sheraton 4

Presenters:

Dan J. Putka, Human Resources Research Organization
Tanner Bateman, FTI Consulting
Samantha Holland, DCI Consulting Group
Jennifer P. Green, George Mason University / Human Resources Research Organization

Abstract:

We take a deep dive into modern machine learning techniques from a conceptual standpoint, bridging the gap between the standard statistical techniques familiar to I-O psychologists and HR practitioners and the “new” frontier. The relative merits of different approaches and a real-world case study are discussed.

Full Description:

Big Data and its associated techniques are topics often mentioned, but rarely explained within a framework easily digested by classically trained I-O and HR professionals. In this session, we take a deep dive into modern Machine Learning techniques from a conceptual standpoint, bridging the space between standard statistical techniques we are trained on and the “new” frontier. The relative merits of different approaches are discussed with specific focus on those techniques of clearest use to the “small” data context (i.e., data most often available to I-Os). A real-world case study will solidify a conceptual understanding of these techniques. 
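
As a flavor of the bridge this session describes, here is a minimal sketch (assuming the glmnet package; the built-in mtcars data stands in for a small I-O dataset) contrasting ordinary least squares with a cross-validated elastic net, a regularized technique often suited to “small” data:

```r
# A minimal sketch, assuming the glmnet package is installed.
library(glmnet)

x <- model.matrix(mpg ~ ., mtcars)[, -1]  # predictor matrix (intercept dropped)
y <- mtcars$mpg

ols  <- lm(mpg ~ ., data = mtcars)        # the familiar regression baseline
enet <- cv.glmnet(x, y, alpha = 0.5)      # elastic net; lambda chosen by CV

coef(ols)                                 # unregularized coefficients
coef(enet, s = "lambda.min")              # shrunken, often sparser coefficients
```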

Learning Objectives:

  • Describe a functional framework for organizing prediction methods, both traditional and modern.
  • Compare the advantages and limitations of traditional prediction methods in the presence of certain modeling context features from the functional framework.
  • Identify the advantages of modern ML prediction methods in the presence of certain modeling context features from the functional framework.
  • List the best practical resources for learning more about modern prediction methods.

Presenter Biographies:

Dan J. Putka is a Principal Staff Scientist at the Human Resources Research Organization. Complementing his client work, Dan is active in the I-O scientific community, focusing on advancing psychometric and analytic methods sensitive to the demands of applied research and practice. Dan is a fellow of APA and three of its divisions, including the Society for Industrial and Organizational Psychology (SIOP; Division 14), APA’s Quantitative and Qualitative Methods Division (Division 5), and the Society for Military Psychology (Division 19). Dan holds a Ph.D. in I-O Psychology with a specialization in Quantitative Methods from Ohio University.

Tanner Bateman (Ph.D., Industrial-Organizational Psychology, Virginia Tech) is a Senior Analyst, Global Talent Analytics, at FTI Consulting. His primary responsibilities include identifying, analyzing, and reporting trends and themes in FTI’s human capital. Reporting encompasses FTI’s employee engagement survey, 360 ratings, L&D outcomes, and exit survey feedback. His analytic duties include identifying and predicting FTI’s human capital trends and using predictive analytics and modeling, with a mix of traditional and modern approaches, to inform internal FTI initiatives. Tanner has co-authored a chapter in the Handbook of Personality at Work and has presented at several professional conferences, including the Society for Industrial and Organizational Psychology (SIOP).

Samantha Holland, Ph.D., is a Senior Consultant at DCI Consulting Group whose work centers on employee selection best practices, employment law litigation support, and proactive risk assessment within organizations. Through her work, she actively contrasts various analytic methods to identify the appropriate approach for a given context. Samantha has published in multiple journals, including Organizational Research Methods and Human Performance, in addition to being a regular contributor to professional conferences. She received her Ph.D. in Industrial-Organizational Psychology from George Mason University and her B.A. in Mathematical Methods in the Social Sciences from Northwestern University.

Jennifer P. Green, M.A. is a fifth-year doctoral candidate in the industrial and organizational psychology program at George Mason University and a Research Associate at the Human Resources Research Organization. Her research interests include statistical techniques (such as structural equation modeling and factor analysis), research methods (including experience sampling methods and qualitative analysis), leadership (primarily, situation-based leadership), and within-person variation in personality (specifically, personality strength). She served as a student assistant editor for the Journal of Business and Psychology and has published in Organizational Research Methods.


Deep-Learning Introduction and Applications Within I-O Psychology and HR Analytics

Thursday, April 19, 2018
12:00 PM - 1:20 PM
Sheraton 4

Presenters:

Ben Taylor, Ziff.ai / HireVue
Dan J. Putka, Human Resources Research Organization

Abstract:

In this detailed tutorial, deep learning and its applications to I-O and HR analytics will be covered, including its history, current applications, and future expectations. Although the master tutorial is presented by a data scientist with expertise in deep learning, its second part will help ground the conversation from the perspective of an I-O psychologist.

Full Description:

Since 2014, deep learning has allowed us to set new benchmarks in image classification, speech recognition, and language comprehension. Previous models, which required human feature discovery and engineering, are being surpassed in performance by newer models built on raw unstructured data. A defining characteristic of deep learning is the shift in the role of feature engineering: previously led by the human researcher, feature discovery now happens within the model itself. Model size, cross validation, adverse impact, and I-O-specific applications will be discussed. Full stand-alone Docker containers with code and instructions will be shared in the tutorial as a handout link, along with a video link to coach attendees on using the content.
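
To make the “features discovered within the model” idea concrete, here is a minimal sketch in base R (not the presenters’ Docker materials): a one-hidden-layer network trained by gradient descent on the XOR problem, which no linear model can fit; the hidden units end up learning the needed features.

```r
# A toy neural network in base R: one hidden layer, sigmoid activations,
# cross-entropy loss, trained by plain gradient descent on XOR.
set.seed(42)
X <- matrix(c(0,0, 0,1, 1,0, 1,1), ncol = 2, byrow = TRUE)
y <- c(0, 1, 1, 0)                        # XOR targets: not linearly separable

sigmoid <- function(z) 1 / (1 + exp(-z))

n_hid <- 4
W1 <- matrix(rnorm(2 * n_hid, sd = 0.5), 2, n_hid); b1 <- rep(0, n_hid)
W2 <- matrix(rnorm(n_hid, sd = 0.5), n_hid, 1);     b2 <- 0
lr <- 0.5

for (i in 1:5000) {
  H <- sigmoid(sweep(X %*% W1, 2, b1, "+"))  # hidden layer: learned features
  p <- sigmoid(as.vector(H %*% W2) + b2)     # output: predicted probability
  d_out <- (p - y) / length(y)               # gradient of cross-entropy loss
  d_hid <- (d_out %*% t(W2)) * H * (1 - H)   # backpropagate to hidden layer
  W2 <- W2 - lr * (t(H) %*% d_out); b2 <- b2 - lr * sum(d_out)
  W1 <- W1 - lr * (t(X) %*% d_hid); b1 <- b1 - lr * colSums(d_hid)
}
round(p, 3)  # should approach 0, 1, 1, 0
```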

Learning Objectives:

  • Describe Deep Learning and how it fits relative to other modeling techniques
  • List the breadth of industry applications that use deep learning and explain how this technology is a core element of the new AI revolution
  • Explain how deep learning methods are relevant to modeling situations faced by I-O psychologists and HR practitioners
  • Apply deep learning methods to a sample data set

Presenter Biographies:

Dan Putka is a Principal Staff Scientist at the Human Resources Research Organization in Alexandria, Virginia. Over the past 15 years, Dan has helped numerous organizations develop, evaluate, and implement assessments to (a) enhance their hiring and promotion processes and (b) guide individuals to career and job opportunities that fit them well. Complementing his client-centered work, Dan has maintained an active presence in the industrial-organizational psychology scientific community, focusing on advancing psychometric and analytic methods that are sensitive to the demands of applied research and practice. Along these lines, he has delivered numerous presentations and invited workshops at national conferences, published a multitude of book chapters and articles in top-tier journals, and serves on the editorial boards of five scientific journals. Dan is a past president of the Personnel Testing Council of Metropolitan Washington and a fellow of APA and three of its divisions, including the Society for Industrial and Organizational Psychology (SIOP; Division 14), APA’s Quantitative and Qualitative Methods Division (Division 5), and the Society for Military Psychology (Division 19). Dan currently serves on SIOP’s committee to revise the Principles for the Validation and Use of Personnel Selection Procedures, and he recently served on the international task force to revise the Guidelines and Ethical Considerations for Assessment Center Operations. Dan holds a Ph.D. in I-O Psychology with a specialization in Quantitative Methods from Ohio University.

Ben Taylor has over 14 years of machine learning experience. He worked for 5 years in the semiconductor industry for Intel and Micron in photolithography, process control, and yield prediction. He has also worked as a Wall Street “quant,” building sentiment stock models for a hedge fund trading the S&P 1500 on news content. During that time, Ben helped build a 600-GPU computing cluster from the ground up that he used to backtest up to 10M trading scenarios per day. Ben left finance and semiconductors in 2013 to work for a new HR start-up called HireVue, where he led their machine learning efforts around digital interviewing. His greatest accomplishment has been developing the features and methods that have allowed short unstructured video-recorded interviews to achieve r-values in the 0.3-0.4 range in the HireVue insights product. He has an M.S. in chemical engineering from the University of Utah, where he is currently finishing his Ph.D., also in chemical engineering. Ben co-founded a deep learning startup called Ziff in April 2017 and has transitioned to building that company full-time. Ben has experience fitting models on terabytes of data using hardware capable of hundreds of teraflops, and he has experience with bias removal in advanced modeling techniques, even when bias is present in the training set.


Data Wrangling Using R, RStudio, and Python

Thursday, April 19, 2018
1:30 PM - 2:50 PM
Sheraton 4

Presenters:

Gina M. Bufton, Georgia Institute of Technology
Frederick R. Stilson, TalentQuest

Abstract:

This interactive session will serve as an introduction to the process of data wrangling using R, followed by a repeat of the procedures in Python, serving as a stepping-stone for those interested in learning both languages. If you plan to follow along, please bring a laptop with R, RStudio, and Anaconda already installed. Files will be available from https://github.com/RobStilson.

Full Description:

This tutorial is an introduction to data wrangling in R and Python. Interviews and expert estimates suggest that data wrangling, the process of getting data into the desired format for analysis, can take anywhere from 50 to 80% of a data analyst’s time at work (Lohr, 2014). As the trend towards Big Data continues, the demand for data wrangling skills will increase. As a result, I-O psychologists and HR practitioners must become familiar with the tools and techniques that will enable them to maximize their efficiency and minimize time spent getting data into the necessary format to perform desired analyses. This session provides a foundation for this concept and teaches some common methods for data wrangling across two popular languages.
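
As a taste of the reshaping covered in the session, here is a minimal sketch using 2018-era tidyr and dplyr on hypothetical survey data (in Python, pandas plays the analogous role):

```r
# A minimal sketch, assuming tidyr and dplyr: wide survey data (one column
# per item) reshaped to tall format, then summarised per respondent.
library(tidyr)
library(dplyr)

wide <- data.frame(id = 1:3,
                   item1 = c(4, 5, 3),
                   item2 = c(2, 4, 4))

tall <- wide %>%
  gather(key = "item", value = "rating", -id)  # wide -> tall, one row per response

tall %>%
  group_by(id) %>%
  summarise(mean_rating = mean(rating))        # easy once the data are tall
```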

Learning Objectives:

  • Describe how to import and explore common file structures for data wrangling using R.
  • Discuss how to change wide format data into tall format data for easier variable manipulation and analyses.
  • Demonstrate and present data wrangling procedures in R and Python in a way that is understandable to a non-stats-savvy audience.

Presenter Biographies:

Gina Bufton is a fourth-year graduate student in the Industrial/Organizational psychology program at Georgia Institute of Technology. She works closely with her advisor, Dr. Howard M. Weiss, in the Work Experience Lab on topics surrounding the subjective experience of work. Her master’s thesis focused on the effects of automated technology on the experience of agency in work, which she defended in April 2017. She has gained extensive experience with data wrangling large corporate datasets from her internships at UPS and TalentQuest. She is particularly adept at multilevel modeling, structural equation modeling, and reporting (in PowerPoint and Excel) in RStudio.

Frederick (Rob) Stilson is the Director of Research and Test Development at TalentQuest. He completed his doctoral degree in Industrial/Organizational Psychology at the University of South Florida. After graduate school, he had stints working at UPS on the employee engagement survey and at FurstPerson, where he was responsible for creating computer adaptive tests and researching new selection solutions. At TalentQuest, he is responsible for test creation and updating, along with data mining, people analytics, report automation, and internal dashboard creation in R and Python.


Empirical Abduction as a Basis for Discovery and Theory Development

Thursday, April 19, 2018
3:30 PM - 4:50 PM
Sheraton 4

Presenters:

Peter A. Bamberger, Tel Aviv University 
Jennifer Mueller, University of San Diego
Sandra L. Robinson, University of British Columbia
Junqi Shi, Sun Yat-sen University, Guangzhou, China

Abstract:

This interactive session will enhance participants' understanding of abductive reasoning, an important yet largely neglected approach that involves theory development on the basis of empirical exploration. Participants will learn how abduction may be used to identify and offer plausible explanations for new phenomena and relationships, and explore resolutions for discrepant findings or anomalies.

Full Description:

Contemporary I-O research is largely grounded in hypothetico-deductive reasoning. But this has not always been the case. For example, cognitive dissonance theory emerged abductively, with Festinger exploring empirically derived hunches in order to narrow the range of plausible explanations (Folger & Stein, 2017). In this session, we will explain why such an approach is key to understanding emergent I-O phenomena and generating new theory. After explaining the basics of abduction and its distinction from more conventional approaches, panelists will give examples of how various methods (e.g., experiments, field studies) may be applied abductively to identify and offer plausible explanations for emergent or poorly understood phenomena, and they will discuss applications to participants’ own research.

Learning Objectives:

  • Define abductive reasoning and explain how it is distinct from the more conventional hypothetico-deductive approach
  • Describe how experimental, survey, and meta-analytic designs and data may be used for empirical abduction
  • Explain how empirical abduction may be used as a basis for discovering and characterizing new or emergent phenomena
  • Apply abductive reasoning to further research theory and discuss applications to participants’ own research

Presenter Biographies:

Peter A. Bamberger (Ph.D., Organizational Behavior, Cornell University) is Professor of Organizational Behavior at the Coller School of Management of Tel Aviv University and incoming Editor-in-Chief of Academy of Management Discoveries. A member of the Society for Organizational Behavior and a Fellow of the Society for Industrial and Organizational Psychology, he has authored over 100 refereed journal articles, book chapters, and books, including Human Resource Strategy (with Ilan Meshulam and Michal Biron, Routledge, 2014). His current research interests include peer relations and helping processes, occupational health psychology, and compensation strategy.

Jennifer Mueller (Ph.D., Social and Developmental Psychology, Brandeis University) is an Associate Professor of Management at the University of San Diego and incoming Associate Editor of Academy of Management Discoveries. Jennifer has published her work in many top management and psychology journals, such as Academy of Management Journal and Psychological Science, and has authored the book Creative Change: Why We Resist It… How We Can Embrace It. She serves on the editorial boards of Organization Science, Organizational Behavior and Human Decision Processes, and Journal of Applied Psychology. Her current research interests include creativity, implicit theories of creativity, and leadership, as well as collaboration in teams.

Sandra L. Robinson (Ph.D., Organizational Behavior, Northwestern University) is a Professor and Distinguished Scholar of Organizational Behavior and Human Resources at the University of British Columbia and an associate editor of Academy of Management Discoveries. Sandra has won various awards for her research, such as the OB Division’s Cummings Award, the Ascendant Scholar Award, and the Distinguished Scholar Award from the Western Academy of Management. She has served eight years in various elected roles for the OB Division of the Academy of Management, including Division Chair. Her current research interests include dark-side topics such as territoriality, ostracism, and distrust.

Junqi Shi (Ph.D., Organizational Psychology, Peking University) is a Professor of Organizational Behavior and Human Resource Management at the Lingnan (University) College of Sun Yat-sen University, Guangzhou, China. Dr. Shi has published over 50 refereed journal articles in journals such as Academy of Management Journal, Journal of Applied Psychology, Annual Review of Psychology, and Personnel Psychology. He serves as an associate editor of Academy of Management Discoveries, an associate editor of Work, Aging, and Retirement, and a member of the editorial board of Journal of Applied Psychology.


Rigor and Relevance With Necessary Condition Analysis (NCA)

Thursday, April 19, 2018
3:30 PM - 4:50 PM
Superior B

Presenters:

Jan Dul, Rotterdam School of Management

Abstract:

NCA understands cause–effect relations as “necessary but not sufficient.” This logic differs from conventional additive logic and provides new theoretical and practical insights. NCA puts a ceiling line on the data in an XY scatter plot. The ceiling line represents the level of X that is necessary but not sufficient for a given level of Y. The rapidly growing interest in NCA and its applications will be the focus of this tutorial.

Full Description:

A necessary but not sufficient condition is a factor that can’t be left out – a “gotta have.” It is a constraint that does not ensure success when present but guarantees failure when missing: a car stops moving if the fuel tank is empty; a brokerage collapses if trust is gone. While our current (regression-based) models add determinants to improve predictions about average tendencies – this could happen; this is likely to result – Necessary Condition Analysis (NCA) enables the analyst to look at the data and say: if you don’t do this, you won’t succeed.
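
To make the ceiling idea concrete, here is a minimal base-R sketch of the CE-FDH (ceiling envelopment – free disposal hull) variant on simulated data; Dul’s NCA package for R automates this and much more:

```r
# A simplified CE-FDH sketch on simulated data: the running maximum of Y as X
# increases traces the ceiling line; the empty zone above it, relative to the
# observed scope, gives the necessity effect size d.
set.seed(7)
x <- runif(100)
y <- runif(100) * x                     # high Y occurs only when X is high

ord <- order(x)
xs <- x[ord]
ceiling_y <- cummax(y[ord])             # step-function ceiling line

scope  <- (max(x) - min(x)) * (max(y) - min(y))  # observed data rectangle
widths <- diff(xs)                               # gaps between sorted x values
zone   <- sum(widths * (max(y) - ceiling_y[-length(ceiling_y)]))  # empty area

zone / scope                            # effect size d; 0 means no necessity
```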

Learning Objectives:

  • Explain the differences between necessity logic and traditional additive logic.
  • Describe the relevance of necessary conditions for theory and practice.
  • Design and formulate necessary condition hypotheses in their own research field.
  • Identify/test and apply potential necessary conditions with (own) data sets.

Presenter Biography:

Jan Dul is Professor of Technology and Human Factors at the Rotterdam School of Management, Erasmus University, Netherlands. His research focuses on work environments for human performance and well-being, and he has a main research and teaching interest in empirical research methodology. He has written more than 140 publications. His work on NCA has recently appeared in Organizational Research Methods, Sociological Methods & Research, Intelligence, Journal of Business Research, and Journal of Purchasing and Supply Management. Jan Dul’s recent international awards are the Liberty Mutual Award, the International Ergonomics Association Distinguished Service Award, and the Hal W. Hendrick Distinguished International Colleague Award. Professor Dul holds a Ph.D. in Biomedical Engineering from Vanderbilt University.


Natural Language Processing: Using Data Science to Extract Meaning From Text

Friday, April 20, 2018
8:00 AM - 9:20 AM
Sheraton 4

Presenter:

Lindsey Zuloaga, HireVue

Abstract:

This session will dive into Natural Language Processing (NLP), starting with the basics. Attendees will get an overview of the history of the NLP field and an understanding of the underlying techniques and the justifications behind them. Although the topic is complex, this tutorial will be adjusted to optimally benefit a general I-O psychology audience.

Full Description:

In an increasingly connected and data-rich world, Natural Language Processing (NLP) is of great interest to I-O psychologists and HR practitioners because it enables analyzing open-ended responses consistently and at large scale. Modern techniques allow for the quantitative analysis of spoken and written language by using clever methods to structure unstructured data, then employing machine learning algorithms to learn patterns and make predictions. In this tutorial, aimed at a general I-O psychology and HR practitioner audience, we will walk through various NLP techniques, starting with the most basic and building in complexity. Along the way, we will see how far this field has come since Artificial Intelligence pioneers like Alan Turing first raised the possibility of machines understanding and generating human language.
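
As an illustration of the most basic step in that progression, structuring unstructured text, here is a minimal base-R sketch that builds a bag-of-words document-term matrix from toy documents:

```r
# A minimal sketch in base R: tokenize toy documents and count terms,
# producing the document-term matrix that simple text models start from.
docs <- c("great place to work",
          "great pay but long hours",
          "long hours and no pay")

tokens <- strsplit(tolower(docs), "\\s+")        # naive whitespace tokenizer
vocab  <- sort(unique(unlist(tokens)))           # the corpus vocabulary

dtm <- t(sapply(tokens, function(tk) table(factor(tk, levels = vocab))))
rownames(dtm) <- paste0("doc", seq_along(docs))

dtm  # term counts per document; ready for tf-idf weighting or an ML model
```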

Learning Objectives:

  • Discuss and outline the history of Natural Language Processing
  • Describe simple methods for text-based predictive models
  • Explain the value of Neural Networks, including Word Embeddings and Semantic Clustering

Presenter Biography:

Lindsey Zuloaga (Data Scientist, HireVue) is a physicist by training and a passionate problem-solver by nature. Lindsey has nearly a decade of experience in academic research focused on nanoscience and its resulting applications in medicine, sensing, and signal processing. She has a B.S. in Physics from the University of Utah and an M.S. and Ph.D. in Applied Physics from Rice University, and she worked as a postdoctoral researcher at Ludwig Maximilian University of Munich. Shifting to a career in industry, Lindsey was passionate about working on fast-paced, high-impact projects. She started her data science career with Alliance Health, striving to improve the lives of people with chronic health conditions. As the Director of Data Science at HireVue, she is working with her team to completely transform traditional recruiting, interviewing, and training with a video platform that focuses on expression and personality as opposed to just a resume.


Getting Started With Bayesian Statistics in I-O Research

Friday, April 20, 2018
10:00 AM - 11:20 AM
Sheraton 4

Presenter:

Ivan Hernandez, DePaul University

Abstract:

This tutorial will provide participants with an approachable introduction to key Bayesian concepts and their application to I-O and HR professional research. Attendees will learn about the benefits of Bayesian statistics, compared to traditional methods, and become familiar with commonly used Bayesian metrics and tools. Hands-on activities will provide participants experience interpreting and communicating results.

Full Description:

This tutorial offers an approachable and relevant introduction to Bayesian statistics. This growing methodology allows researchers and practitioners to convey their results in more understandable terms to non-technical audiences. In addition to enhancing the communication of I-O and HR research results, Bayesian statistics helps overcome statistical limitations that are commonly encountered when analyzing I-O data with traditional methods. Recent advances in computer hardware now allow researchers of any technical background the opportunity to apply these techniques. Attendees will gain insight into the benefits that Bayesian statistics provides I-O, learn how to interpret the field’s commonly used metrics, and explore different options for conducting analyses.
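
For a taste of the metrics covered, here is a minimal base-R sketch assuming a conjugate beta-binomial setup (hypothetical pass/fail data): a posterior, a 95% credible interval, and a Savage-Dickey Bayes factor against a point null:

```r
# A minimal sketch in base R: Bayesian inference for a pass rate with a
# flat Beta(1, 1) prior and a conjugate Beta posterior.
passes <- 37; n <- 50
a0 <- 1; b0 <- 1                          # prior Beta(a0, b0)
a1 <- a0 + passes; b1 <- b0 + n - passes  # posterior Beta(a1, b1)

qbeta(c(.025, .975), a1, b1)              # 95% credible interval for the rate

# Savage-Dickey density ratio at the point null (rate = .50):
bf01 <- dbeta(.5, a1, b1) / dbeta(.5, a0, b0)  # evidence for the null
1 / bf01                                       # BF10: evidence against it
```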

Learning Objectives:

  • List the benefits of Bayesian statistics to I-O psychology
  • Describe how to set up simple analyses using popular Bayesian software
  • Explain and interpret two of the most commonly used metrics in Bayesian analysis: the Bayes factor and the credible interval
  • Apply Bayesian statistical techniques in an in-session demonstration and participant activity

Presenter Biography:

Ivan Hernandez is a Professional Lecturer in the Industrial and Organizational division of the Psychology Department at DePaul University. He completed his doctoral degree in Industrial-Organizational Psychology at the University of Illinois at Urbana-Champaign, where his research focused on computational social science applied to organizational behavior, including absenteeism and job satisfaction. Following graduate school, he was a collaborative post-doctoral researcher at Northwestern University and the Georgia Institute of Technology with the SONIC and DELTA laboratories, where he researched group-level dynamics using computational approaches such as text mining, agent-based modeling, and data scraping. He also supervised the NASA data science internship program at Northwestern University. Ivan has also served as the instructor for Data Science and Python courses at General Assembly in Chicago, IL, where he gave professional training on programming, statistics, and machine learning methodologies.


Machine Learning in R: A Tutorial and Jam Session

Friday, April 20, 2018
11:30 AM - 12:50 PM
Sheraton 4

Presenters:

Allen Goebl, Korn Ferry
Jeff Jones, Korn Ferry
Sarah Semmel, Twitter

Abstract:

Machine learning algorithms are increasingly being used by I-O psychologists and HR practitioners in organizations for workplace decision-making. This tutorial introduces the fundamentals of machine learning (using R) before attendees are split into small groups to practice with a classic dataset.

Full Description:

Machine learning algorithms are increasingly used by organizations for workplace decision-making. This tutorial introduces the audience to cross validation, ensembles, feature engineering, and building machine learning models in R. After reviewing these topics, attendees will be split into small groups to practice what they have learned with a classic dataset. The target audience for this tutorial is intermediate R users who are interested in learning more about implementing machine learning models. All materials will be made available at http://bit.ly/2wS93ci.
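
In the spirit of the session, here is a minimal sketch (assuming the randomForest package and the classic built-in mtcars data) of k-fold cross-validation comparing a linear model with a random forest:

```r
# A minimal sketch, assuming the randomForest package: 5-fold cross-validation
# comparing out-of-fold RMSE for OLS regression versus a random forest.
library(randomForest)
set.seed(1)

folds <- sample(rep(1:5, length.out = nrow(mtcars)))  # random fold assignment
rmse  <- function(obs, pred) sqrt(mean((obs - pred)^2))

res <- sapply(1:5, function(k) {
  train <- mtcars[folds != k, ]
  test  <- mtcars[folds == k, ]
  c(lm = rmse(test$mpg, predict(lm(mpg ~ ., data = train), test)),
    rf = rmse(test$mpg, predict(randomForest(mpg ~ ., data = train), test)))
})

rowMeans(res)  # average out-of-fold error for each model
```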

Learning Objectives:

  • Describe and explain machine learning concepts to colleagues.
  • Apply, build, and train several basic machine learning models in R.
  • Participate in competitions hosted on Kaggle to improve machine learning skills.

Presenter Biographies:

Allen Goebl is a Manager at the Korn Ferry Institute while finishing his Ph.D. at the University of Minnesota. He specializes in building machine learning models for selection and engagement decisions as well as developing software for analytics and research. He is proficient in a variety of scripting languages, including R, JavaScript, and Python. Allen is the lead author and maintainer of the iopsych R package and has contributed to several other open source projects. His research focuses primarily on refining the statistical and psychometric methods used in employee selection.

Jeff Jones is the Director of Talent Analytics and Data Systems at Korn Ferry where he specializes in psychometrics, analytics, and research. He is one of the core psychometricians who leads efforts in designing new tools and scoring algorithms. Moreover, as part of his role, he uses his computational skills to develop applications that are used in demand generation, talent analytics, dashboard design, and automation. He has developed new statistical methodologies in R that have been published in Psychometrika, Psychological Methods, and Applied Psychological Measurement. Jeff received his Ph.D. at the University of Minnesota in Psychometrics and Quantitative Psychology.

Sarah Semmel is a People Scientist at Twitter where she works on a variety of projects focusing on areas such as employee engagement, performance management, and competency modeling. She is finishing up her Ph.D. at University of Minnesota in Industrial/Organizational psychology with a focus on Quantitative psychology. She has previously worked as a contractor on Facebook’s People Analytics team and as an intern for PDRI and Amazon. Sarah spends most of her time working with data and is passionate about using new and novel methods to provide insights into how organizations and employees function.


Tools to Increase Diversity, Utility, and Validity in Hiring Police Officers

Friday, April 20, 2018
11:30 AM - 12:50 PM
Chicago 7

Presenter:

Joel Wiesen, Applied Personnel Research

Abstract:

Many police managers are stymied in their attempts to hire black police officers due to the pervasive adverse impact that traditional employment tests have on black candidates. This tutorial presents 15 tools (most novel or little-used) to help police departments hire ethnically diverse academy classes while maintaining and even enhancing expected job performance.

Full Description:

Many police managers are stymied in their attempts to hire black police officers due to the pervasive adverse impact that traditional employment tests have on black candidates. This tutorial presents 15 tools (most novel or little-used) to help police departments and HR professionals hire ethnically diverse academy classes while maintaining and even enhancing expected job performance. Real life examples are provided to illustrate the use of these tools. The tools are described in enough detail to allow other consultants including I-O psychologists and HR practitioners to use them in fields beyond law enforcement.

Learning Objectives:

  • Describe the conditions under which a relatively low-validity test (e.g., r = .15) is expected to have higher utility than a higher-validity test (e.g., r = .25), as sketched after this list.
  • List at least two test areas and two test modes that generally have adverse impact on minorities.
  • Compare the pros and cons of using tests of g on a pass-fail (P/F) basis.
  • Explain why ranking even in part on a traditional test of g generally results in adverse impact on minority candidates.
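
The first objective above can be illustrated with the Brogden-Cronbach-Gleser utility model; in the minimal base-R sketch below (all figures hypothetical), a cheap r = .15 test yields higher net utility than an expensive r = .25 test once per-applicant costs are charged:

```r
# Brogden-Cronbach-Gleser utility with top-down selection (hypothetical numbers).
utility <- function(r, sd_y, sr, n_applicants, cost_per_applicant) {
  z_cut <- qnorm(1 - sr)            # predictor cut score for selection ratio sr
  z_bar <- dnorm(z_cut) / sr        # mean predictor z-score among those selected
  n_sel <- n_applicants * sr
  n_sel * r * sd_y * z_bar - n_applicants * cost_per_applicant
}

utility(r = .15, sd_y = 20000, sr = .10, n_applicants = 1000,
        cost_per_applicant = 5)     # cheap, lower-validity test
utility(r = .25, sd_y = 20000, sr = .10, n_applicants = 1000,
        cost_per_applicant = 400)   # expensive, higher-validity test
```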

Presenter Biography:

Dr. Wiesen serves as Director of his own firm, Applied Personnel Research. He specializes in employee selection testing. In 1975, he was hired by Massachusetts to validate its civil service examinations. Since 1993, he has consulted on employee testing and developed written tests for government and business. He has served as an expert on testing matters for two state Attorneys General, the US Department of Justice, other governmental entities, private law firms, and other organizations. He received a doctorate in psychology from Lehigh University and is licensed as a psychologist in three states.


High-Quality Qualitative Research: An Introduction to the Grounded Theory Approach

Friday, April 20, 2018
1:00 PM - 2:20 PM
Sheraton 2

Presenter:

Annika Wilhelmy, Portland State University

Abstract:

This interactive session offers an introduction to grounded theory – an accepted and versatile qualitative research approach. Using the exemplar of a grounded theory study recently published in the Journal of Applied Psychology, participants will learn the steps needed and the pitfalls to be avoided to successfully conduct and publish grounded theory research in I-O psychology.

Full Description:

This session introduces I-O researchers and HR practitioners to the grounded theory research methodology. The grounded theory approach is a qualitative methodology that is particularly versatile, internationally well known, and helpful for studying new or under-researched phenomena. Participants will learn how to apply the principles of the grounded theory approach in several interactive exercises and how to most effectively report the method and results of qualitative research. Finally, participants will receive recommendations on how to increase the publishability of grounded theory research in top-tier I-O journals.

Learning Objectives:

  • Explain when it is appropriate to use qualitative methods such as the grounded theory approach in I-O research
  • Apply the key principles of the grounded theory approach to research questions, study design, and data analysis in interactive exercises
  • Critique and evaluate how to improve the chances of publishing grounded theory research in top-tier I-O and HR peer-reviewed journals

Presenter Biography:

Dr. Annika Wilhelmy is a visiting assistant professor at the Department of Psychology of Portland State University and a Swiss National Science Foundation scholarship holder. She is an expert on qualitative research and the grounded theory approach. In 2016, she authored the second purely qualitative study and first grounded theory study ever published in the Journal of Applied Psychology. In addition, she has given workshops on qualitative research at conferences such as the 2017 AOM (Atlanta) and 2017 EAWOP (Dublin). Dr. Wilhelmy earned her Ph.D. in I-O Psychology from the University of Zurich.


Quantitative and Qualitative Data Preparation for Machine Learning Applications

Friday, April 20, 2018
1:00 PM - 2:20 PM
Sheraton 4

Presenters:

Li Guan (Ada), University of Georgia 
Mengqiao (MQ) Liu, DDI (Development Dimensions International)

Abstract:

Machine learning algorithms can be used to dissect, analyze, and reveal insights from data. This tutorial illustrates data processing techniques that help to prepare both quantitative and qualitative data to be used for machine learning applications.

Full Description:

In the field of I-O psychology and Human Resources, the topic of machine learning has been gaining popularity over the past few years, given its power to derive insights from data. This tutorial provides an overview of machine learning models and introduces data processing techniques to prepare quantitative and qualitative data for real-world machine learning applications.
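
As a small taste of these preparation steps, here is a base-R sketch on hypothetical data, reading “quantitative” as numeric and “qualitative” as categorical for illustration: standardizing a numeric feature and encoding a categorical one numerically:

```r
# A minimal sketch in base R: standardize a quantitative feature and dummy-code
# a qualitative (categorical) feature before model fitting.
df <- data.frame(tenure = c(1, 4, 10, 2),
                 dept   = c("Sales", "R&D", "Sales", "HR"),
                 rating = c(3, 5, 4, 4))

df$tenure_z <- as.numeric(scale(df$tenure))            # mean 0, SD 1

X <- model.matrix(~ tenure_z + dept, data = df)[, -1]  # dummy-code 'dept'
X  # a purely numeric design matrix, ready for a machine learning algorithm
```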

Learning Objectives:

  • Define machine learning and its relevant concepts
  • Demonstrate quantitative data preparation for machine learning algorithms
  • Demonstrate qualitative data preparation for machine learning algorithms
  • Compare both quantitative and qualitative methods for machine learning

Presenter Biographies:

Li Guan (Ada) is a graduate research assistant at the University of Georgia, where she works on a variety of projects in areas such as personnel selection and analytical strategies for measuring psychological constructs. She is finishing her Ph.D. in Industrial-Organizational Psychology with a concentration in Measurement. Her research has been published in Psychological Methods, Journal of Personality, and Psychological Assessment. She has previously worked as an intern on DDI’s Testing and Assessment Design team and as a data analyst for P&G (Procter & Gamble) and Kimberly-Clark. Ada is passionate about applying data-analytic skills to untangle person-centered issues in organizations to assist with employee selection, placement, training, and development.

Mengqiao (MQ) Liu is a Consultant on the Testing and Assessment Design team at DDI.  MQ’s applied work revolves around the research and development of personnel selection content and tools (e.g., interviewing tools, personality tests, and leadership assessment) that are synergized with cutting-edge technologies. MQ’s research expertise lies in personality and measurement issues around response effort; her research has been published in Journal of Applied Psychology, Journal of Business and Psychology, Journal of Occupational and Organizational Psychology, Journal of Personality and Social Psychology, and others. MQ received her Ph.D. from Wayne State University in Industrial-Organizational Psychology in 2017.


Social Media Use in Selection: The Promise, Pitfalls, Policies, and Legal Protections

Friday, April 20, 2018
1:00 PM - 2:20 PM
Chicago 6

Presenters:

Shawn Bergman, Appalachian State University
Kristl Davison, University of Memphis
Kimberly O’Connor, Indiana University-Purdue University Fort Wayne
Gordon Schmidt, Indiana University-Purdue University Fort Wayne

Abstract:

Social media use in selection is a common HR practice. There are, however, issues with its reliability, validity, and legality, which organizational policies can address. In this multidisciplinary session, we will explore the changing landscape of social media in selection. We will discuss the promise and pitfalls, recent case law, and suggest language for selection policies and training methods.

Full Description:

Social media is being used by HR in new and ever-changing ways. Questions about how organizations use social media to find, recruit, and select candidates have emerged. This has led to innovative applications of existing laws to protect employees from discrimination. Organizations might also implement a social media selection policy to address these issues. This session explores social media use in selection from a variety of disciplines. We will explore new social networking site (SNS) practices, best-practice recommendations for practitioners, legal protections, and social media-related policy considerations and training. A multidisciplinary approach to this topic is necessary, and this session is intended as a basic tutorial for practitioners and conference attendees.

Learning Objectives:

  • List the variety of ways social media can be used for recruitment and selection
  • Compare the positive and negative aspects of social media use in selection
  • Explain federal and state laws that apply to social media and data privacy issues facing organizations during selection
  • Design and formulate social media selection policies for organizations and describe beneficial methods of training employees on this policy

Presenter Biographies:

Dr. Shawn Bergman, Ph.D. is an Associate Professor in the Psychology Department at Appalachian State University. He received his doctoral degree in Industrial-Organizational Psychology from The University of Tennessee, Knoxville. Dr. Bergman is the founder and co-Director of the HR Science Research Team and co-founder and Associate Director for the Center for Analytic Research and Education (CARE) at Appalachian State University. He has published multiple research articles and book chapters on how social media can be used for employee recruiting and selection, and has worked on multiple applied projects investigating how public and private organizations can use social media for education and recruiting.

Dr. H. Kristl Davison, Ph.D., is an Instructor of Management in the Fogelman College of Business & Economics at the University of Memphis. She received her M.S. and Ph.D. in Industrial-Organizational Psychology from Tulane University. She has worked in consulting and industry, including as an Employee Selection Specialist at GTE/Verizon. Dr. Davison currently serves on the Editorial Board of Organizational Research Methods. Her research interests include legal issues, diversity, personnel selection, ethics, and organizational justice. Her research has been widely cited, and she has published several articles and book chapters on the use of social networking websites in HR.

Professor Kimberly O’Connor, J.D. is an Assistant Professor of Organizational Leadership at Indiana University-Purdue University Fort Wayne and an attorney, licensed in the state of Indiana. She received her doctoral degree from Loyola University School of Law. Her research areas include social media and the law, as well as cybersecurity, as it relates to employment.

Dr. Gordon Schmidt, Ph.D., is an associate professor and current chair of the Organizational Leadership department at Indiana University-Purdue University Fort Wayne (IPFW). His primary research area is how social media is changing the nature of company-employee relations today. He co-edited a book published by Springer on how social media is used in selection and recruitment processes by organizations. He has published research on the law related to people fired for social media posts and on organizational social media use policies. He has also done research on the use of crowdsourcing sites like Amazon Mechanical Turk and on virtual leadership. Dr. Schmidt received his Ph.D. in Organizational Psychology from Michigan State University.


How to Design, Conduct, and Interpret a Cognitive Task Analysis

Saturday, April 21, 2018
8:00 AM - 9:20 AM
Sheraton 4

Presenters:

Rob Kittinger, Sandia National Laboratories
Daniel Shore, George Mason University

Abstract:

This master tutorial informs I-O psychologists, HR practitioners, and applied researchers about methods, previous studies, design considerations, and data analysis techniques for cognitive task analyses (CTAs). More specifically, the focus is on applying the CTA methodology in the context of improving selection and training procedures in organizations.

Full Description:

As technology shifts work away from physical tasks and toward cognitive tasks, the utility of cognitive task analyses (CTAs) for capturing decision-making and problem-solving processes becomes more relevant to I-O. Previously, CTAs have almost exclusively been used by researchers in human factors and related fields. There are, however, beneficial uses of CTAs for I-O purposes—primarily improving selection and training. This master tutorial introduces CTAs to practitioners and applied researchers. In an effort to explain how to design, conduct, and interpret CTAs, the experts in this tutorial will share methods, previous studies, design considerations, and data analysis techniques for CTAs.

Learning Objectives:

  • List the main categories of elicitation methods relevant to I-O psychology and Human Resources that are used in CTAs
  • Compare the pros and cons of each category of elicitation methods
  • Describe how CTAs have been used previously for selection and training research
  • Analyze, quantify, and interpret CTA data, and design an effective CTA to answer I-O and HR research questions

Presenter Biographies:

Dr. Rob Kittinger completed his PhD coursework in I-O psychology at Auburn University and Capella University. He is currently a senior member of the technical staff at Sandia National Laboratories where he performs I-O psychology research in support of national security. Over the past three years he has performed myriad cognitive task analyses for employees of various security-related jobs. He also worked for over five years as a civilian for the U.S. Navy where he performed over 35 job analyses, and associated psychometric work, in support of the Navy's personnel advancement system.

Daniel Shore, M.A., is a doctoral candidate in the I-O Psychology program at George Mason University (GMU). He currently serves as a graduate teaching assistant at GMU. For the past four years, Daniel has served as a graduate research assistant on a DHS-funded project that examined the behavioral characteristics associated with effective performance in cybersecurity incident response teams (CSIRTs). Through this project, Daniel developed a cognitive task analysis interview protocol and conducted over 40 interviews using this protocol. Daniel was also a lead author on the decision-making chapter in a managerial handbook on improving CSIRT performance.


Conducting Reproducible Psychometric Meta-Analyses Using R

Saturday, April 21, 2018
10:00 AM - 11:20 AM
Sheraton 4

Presenters:

Brenton M. Wiernik, University of South Florida 
Jeffrey A. Dahlke, University of Minnesota

Abstract:

In this tutorial, we show how I-O researchers and HR practitioners can use R to streamline meta-analysis workflows and enhance the accuracy and reproducibility of psychometric meta-analyses. After a brief overview of the principles of psychometric meta-analysis, we show how a new R package—psychmeta—can automate or simplify many steps of the meta-analysis process. Example R scripts will be provided.

Full Description:

Meta-analysis is a key way that I-O psychology and allied disciplines build cumulative scientific knowledge. However, many aspects of the process of conducting meta-analytic research remain extremely labor-intensive, and the widespread use of researchers’ self-programmed meta-analysis calculators hinders the accuracy and reproducibility of meta-analytic results. This tutorial demonstrates how new open-source tools facilitate faster, more efficient, more open, and more reproducible meta-analysis research using R. The presenters will demonstrate how their “psychmeta” R package and other R resources can be used to manage databases, compute individual-correction and artifact-distribution meta-analyses, perform sensitivity analyses, and generate publication-quality tables and figures.
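
For orientation, here is a base-R sketch of the bare-bones psychometric (Hunter-Schmidt) model underlying these tools, using hypothetical correlations and one common variant of the sampling-error formula; psychmeta wraps this, plus artifact corrections and sensitivity analyses, in tested functions:

```r
# A bare-bones psychometric meta-analysis sketch in base R (hypothetical data):
# sample-size-weighted mean r, with expected sampling error variance removed.
r <- c(.22, .31, .18, .40, .25)   # observed correlations
n <- c(120,  85, 200,  60, 150)   # sample sizes

r_bar   <- sum(n * r) / sum(n)                          # weighted mean r
var_obs <- sum(n * (r - r_bar)^2) / sum(n)              # observed variance of r
var_e   <- sum(n * (1 - r_bar^2)^2 / (n - 1)) / sum(n)  # sampling error variance
var_res <- max(var_obs - var_e, 0)                      # residual "true" variance

c(mean_r = r_bar, sd_obs = sqrt(var_obs), sd_res = sqrt(var_res))
```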

Learning Objectives:

  • Describe the fundamentals of psychometric meta-analysis models.
  • Demonstrate how to import meta-analytic databases into R.
  • Discuss how to choose meta-analytic models and estimate them in R.
  • Prepare meta-analytic results for publication and presentation.

Presenter Biographies:

Jeff Dahlke is the lead developer of the psychmeta package for conducting psychometric meta-analyses in R. He earned his M.A. degree in I-O psychology from Minnesota State University, Mankato and is pursuing a Ph.D. in I-O psychology at the University of Minnesota. His recent research uses meta-analytic methods to explore moderators of subgroup mean differences, explanations for time-related fluctuations in the validity of cognitive tests, and the effects of statistical artifacts on differential-validity estimates. Jeff maintains an active program of research on quantitative and psychometric methods, including the development of new meta-analytic techniques and multivariate corrections for psychometric artifacts.

Brenton Wiernik is Assistant Professor of Psychology at the University of South Florida. His research focuses on developing and applying meta-analysis and other quantitative methods. He also studies how individual differences, including interests and personality, impact career behavior. He is a developer of the psychmeta open-source software package for psychometric meta-analysis. His work has appeared in journals such as Multivariate Behavioral Research, Journal of Vocational Behavior, Journal of Managerial Psychology, Career Development International, and Annual Review of Organizational Psychology and Organizational Behavior, as well as numerous scholarly books. He earned his Ph.D. in I-O psychology from the University of Minnesota.


Advanced Uses of Mechanical Turk for Data Collection

Saturday, April 21, 2018
3:00 PM - 4:20 PM
Sheraton 4

Presenters:

Justin Wiegand, University of Illinois at Urbana-Champaign
Cory Kind, CEB, now Gartner

Abstract:

Mechanical Turk (MTurk) is widely used by I-O researchers for data collection. This session will provide tips and tricks to help extend MTurk’s capabilities. Topics include: (a) an introduction to the MTurk API, (b) a tutorial for using MTurk API tools for advanced data collection (e.g., longitudinal and dyadic data), and (c) how and why to manage your reputation as a Requester.

Full Description:

Mechanical Turk (MTurk) is widely used by I-O researchers for data collection, since it offers access to a large, diverse participant pool and more control than panel surveys. However, the burden of managing an MTurk survey can be severe, causing researchers to spend more time on data collection and less on analysis and insight. This session will offer practical tips and tricks to help researchers collect data on MTurk easily and successfully, even for studies with intricate or complex data collection needs.
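
As one flavor of the longitudinal workflow the session covers, here is a base-R sketch that builds a Wave 2 recontact list from Wave 1 results; the file names and the attention-check column are hypothetical, while WorkerId and AssignmentStatus are standard columns in MTurk results files:

```r
# A minimal sketch in base R (hypothetical file and column names noted above):
# keep approved Wave 1 Workers who passed an attention check, then export
# their IDs for use as a qualification or include list in Wave 2.
wave1 <- read.csv("wave1_results.csv", stringsAsFactors = FALSE)

eligible <- subset(wave1,
                   AssignmentStatus == "Approved" & attention_pass == 1)

write.csv(unique(eligible["WorkerId"]),
          "wave2_include_list.csv", row.names = FALSE)
```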

Learning Objectives:

  • Describe the basics of APIs (application programming interfaces) and how they can be used to make interacting with websites faster and more efficient
  • Outline a simple MTurk workflow in MTurkR (open source R package) and TurkPrime (a web-based software service for Requesters), both of which interface with MTurk using the MTurk API
  • Demonstrate how both of these tools can be used for longitudinal data collection
  • Explain scripted solutions for dyadic data collection (e.g., supervisor, spouse) using open source survey software
  • Describe how to implement key strategies for monitoring and improving Worker feedback

Presenter Biographies:

Justin Wiegand is an instructor and Human Resources and Industrial Relations Ph.D. candidate at the University of Illinois’ School of Labor and Employment Relations. His research centers on person-environment fit, its measurement, and its application to vocational interests, union participation, and narcissism. He has conducted multiple MTurk studies using the MTurk API and has built unique open source survey solutions using LimeSurvey.

Cory Kind is a Research Scientist with Gartner’s Talent Assessment Product R&D Group, where her work focuses on applying natural language processing techniques, machine learning, and advanced statistical modeling to develop innovative assessment products in close collaboration with I-O Psychologists. She has managed more than 15 Mechanical Turk studies over the past three years. She earned an A.B. from Harvard University and a Master of Information and Data Science degree from the University of California Berkeley.

