You fire up Claude or ChatGPT somewhere around the time you're building slide content or writing a quiz. Maybe you use it for a storyboard draft. That's where most instructional designers are using AI, and it makes sense: that's also where the workload is heaviest.

But there's a problem with that pattern.
Design and Development are where you execute decisions that were already made. If those decisions were based on weak analysis (a rushed needs assessment, assumptions about the audience, a performance gap that was never clearly defined), AI just helps you build the wrong thing faster.
Where IDs are actually using AI
Survey data from practitioners tells a consistent story: the overwhelming majority of AI use in instructional design clusters in the middle of ADDIE, in Design and Development. Content drafting, script writing, quiz generation, storyboard outlines. All useful. All downstream.
What's getting skipped: Analysis and Evaluation.
Analysis is where you figure out whether training is even the right solution. It's where you interview stakeholders, define the performance gap, map the audience, and set the conditions for everything else. Evaluation is where you close the loop, determining whether the training actually changed anything.
Both phases are time-intensive, often under-resourced, and historically hard to do well. They're also exactly where AI can give you the most leverage: not by doing the work for you, but by dramatically shortening the time it takes to do the work right.
What AI-assisted Analysis actually looks like
Here's a concrete example. You're scoping a new training request. Instead of starting with a blank stakeholder questionnaire, you bring your notes from a quick sponsor conversation into Claude and prompt it to surface the likely root causes, flag where you need more information, and draft a set of clarifying questions to take back to the business.
That's not AI automating analysis. That's AI acting as a thinking partner so you show up to the next conversation sharper.
A few other high-leverage applications most IDs haven't tried yet:
Use AI to synthesize SME interview transcripts. Drop in a recording transcript and ask for recurring themes, terminology gaps, and contradictions between what different SMEs said.
Use AI to audit learning objective alignment before you build anything. Give it your objectives and your business goal and ask it to identify where the logic breaks down.
Use AI during Evaluation to analyze open-ended learner feedback at scale. Survey responses that would take hours to code manually can be meaningfully organized in minutes.
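If you want to see what that last one looks like in practice, here is a minimal sketch of the batching step: grouping free-text survey responses into prompts you can paste into (or send to) an LLM for thematic coding. The function name, batch size, and prompt wording are all illustrative, not part of any particular tool or course.

```python
# Sketch: batch open-ended learner feedback into LLM-ready prompts
# for thematic coding. Names and prompt wording are illustrative.

def build_coding_prompts(responses, batch_size=25):
    """Group free-text responses into batches, each wrapped in a
    thematic-analysis prompt."""
    prompts = []
    for i in range(0, len(responses), batch_size):
        batch = responses[i:i + batch_size]
        numbered = "\n".join(f"{n}. {r}" for n, r in enumerate(batch, 1))
        prompts.append(
            "You are helping an instructional designer code learner feedback.\n"
            "Identify recurring themes, count how many responses mention each "
            "theme, and quote one representative response per theme.\n\n"
            f"Responses:\n{numbered}"
        )
    return prompts

# Example: 60 survey responses at 25 per batch produce 3 prompts.
feedback = [f"Response {i}" for i in range(60)]
prompts = build_coding_prompts(feedback)
print(len(prompts))  # 3
```

Batching matters because a few hundred responses rarely fit (or get careful treatment) in a single prompt; coding each batch and then asking the model to merge the per-batch themes tends to work better than one giant paste.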
None of this replaces your judgment. It accelerates the thinking that supports it.
The reframe worth holding onto
AI is most powerful at the phases where you currently have the least time and structure — not the phases where you already have templates and momentum. The development phase has authoring tools, existing workflows, established habits. Analysis and Evaluation are still mostly improvised.
That's the gap worth closing.
Prompt of the Week
Use this after a stakeholder kickoff or SME conversation. Paste in your notes and run it before you start building anything.
I'm an instructional designer scoping a new training project. Here are my notes from a stakeholder conversation:
[paste your notes]
Based on these notes, help me:
1. Identify the most likely root cause of the performance gap described
2. Flag any information I still need before I can confirm that training is the right solution
3. Draft 3–5 clarifying questions I should bring back to the stakeholder
Don't assume training is the answer. Surface any signs that this might be a process, tool, or motivation issue instead.
If you want a structured way to build AI into every phase of your design process — not just the middle ones — that's exactly what the AiDDIE Reset walks through. It's a free 5-week email course built around ADDIE. You can sign up at aiddie.co.
— Gus
A reminder that the most valuable things AI does in your workflow are usually the things you haven't tried yet.
