Master systematic prompt engineering beyond the basics. Learn to select AI tools, format prompts for different contexts, standardize outputs, and apply chain-of-thought techniques. Explore context engineering versus prompt engineering, avoiding hallucinations, and agent capabilities through interactive demonstrations. Gain skills immediately applicable to research and organizational work.

Move beyond basic AI usage and master sophisticated prompt engineering techniques for complex organizational challenges. This interactive workshop targets I-O professionals ready to leverage AI’s full potential through systematic prompt design, testing, and optimization.

Participants will explore choosing appropriate AI tools for different tasks, adjusting prompt content and formatting based on purpose and context, and standardizing AI outputs using examples. Advanced topics include chain-of-thought prompting to improve reasoning quality, evaluating AI responses effectively, and understanding context engineering versus prompt engineering. You’ll also learn how to recognize and avoid hallucinations as you build enhanced workflows with AI agents.
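To give a flavor of what this looks like in practice, the sketch below combines a chain-of-thought instruction with a single few-shot example that standardizes the output format for a small I-O task (sorting survey items). It is a minimal illustration rather than workshop code: the `call_model` stub, the template wording, and the classification label are placeholder assumptions you would replace with your own tool and task.

```python
# Illustrative sketch: combining a chain-of-thought instruction with a
# few-shot example to standardize outputs for a simple I-O task
# (classifying engagement survey items).

FEW_SHOT_EXAMPLE = """\
Item: "I have the resources I need to do my job well."
Reasoning: The item asks about access to tools and support rather than
feelings about the work itself, so it taps perceived organizational support.
Label: organizational_support
"""

PROMPT_TEMPLATE = """\
You are helping an I-O psychologist sort engagement survey items.
Think step by step: first explain your reasoning, then give a single label.
Use exactly the format shown in the example.

{example}
Item: "{item}"
Reasoning:"""


def build_prompt(item: str) -> str:
    """Fill the standardized template with one few-shot example and the new item."""
    return PROMPT_TEMPLATE.format(example=FEW_SHOT_EXAMPLE, item=item)


def call_model(prompt: str) -> str:
    """Placeholder for an actual model call (OpenAI, Anthropic, a local model, etc.)."""
    raise NotImplementedError("Connect this to the AI tool of your choice.")


if __name__ == "__main__":
    print(build_prompt("My manager recognizes my contributions."))
```

Here the few-shot example pins down the output format, while the "explain your reasoning, then give a single label" instruction is the chain-of-thought element; the workshop covers when each piece helps and how to evaluate the resulting outputs.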

Through live demonstrations and activities, you’ll develop practical skills immediately applicable to research, assessment, and organizational projects. The session emphasizes transferable principles over tool-specific details, making advanced techniques accessible whether you’re following along on a laptop or simply observing the demonstrations. Leave with actionable strategies for optimizing your AI-assisted workflows.

Presenters

Yi Wang, Applied Scientist, Amazon
Suki Wang, Senior Data Scientist, Roblox
Justin Purl, Associate Director of People Analytics, Gilead Sciences

Level

Introductory

Learning Objectives

  • Select and justify appropriate AI tools for at least three different I-O applications based on task requirements and contexts.
  • Evaluate AI-generated outputs using two to three systematic criteria.
  • Design and format prompts for complex organizational tasks using at least two advanced techniques to improve and optimize output quality.
  • Distinguish between context engineering and prompt engineering approaches, and identify when to apply each method in your own work scenarios.
  • Implement standardized prompt structures and few-shot examples to improve the consistency of AI outputs across at least two common I-O tasks (e.g., survey development, literature review).
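
As a rough illustration of the last objective, the sketch below reuses one standardized prompt skeleton across two common I-O tasks so that outputs stay consistent. The field names, template wording, and task settings are assumptions made for illustration, not a prescribed format from the workshop.

```python
# Illustrative sketch: one standardized prompt skeleton reused across two
# I-O tasks so that outputs stay consistent. All field names and wording
# are illustrative placeholders.

SKELETON = """\
Role: {role}
Task: {task}
Output format: {output_format}
Example of a good output:
{example}

Input:
{user_input}
"""

TASK_CONFIGS = {
    "survey_development": {
        "role": "You are an I-O psychologist drafting survey items.",
        "task": "Write 3 Likert-type items measuring the construct below.",
        "output_format": "A numbered list; each item under 20 words.",
        "example": "1. I feel comfortable sharing concerns with my team.",
    },
    "literature_review": {
        "role": "You are a research assistant summarizing journal articles.",
        "task": "Summarize the abstract below for a literature review table.",
        "output_format": "Three labeled lines: Sample, Method, Key finding.",
        "example": "Sample: 412 nurses\nMethod: Two-wave survey\nKey finding: (one sentence)",
    },
}


def build_prompt(task_name: str, user_input: str) -> str:
    """Fill the shared skeleton with the settings for one task."""
    config = TASK_CONFIGS[task_name]
    return SKELETON.format(user_input=user_input, **config)


if __name__ == "__main__":
    print(build_prompt("survey_development", "Construct: psychological safety"))
```

Keeping the skeleton fixed and varying only the task-specific fields is one simple way to get comparable, easier-to-evaluate outputs across projects.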

Topic

2026 Annual Conference

Date

May 1, 2026

Time

2:00 p.m. - 5:00 p.m.

Delivery Type

In-Person

Workshop Coordinator

Wenmo Kong, Meta Platforms