Last week I spent two days at eMerge Americas in Miami Beach. Twenty thousand people, sixty countries, every flavor of AI conversation you can imagine. One session changed how I think about L&D strategy for the next three years.
Delta Air Lines presented something they call the "Single View of the Employee." One unified data profile per person that pulls from every system: hiring, onboarding, training, performance, career moves, internal mobility. Everything in one place, machine-readable, actually usable.
Sitting in that room, I kept thinking: what would a Single View of the Learner look like?
Not a transcript of completions
A real Single View of the Learner is not an LMS report dressed up with a new name. It is a profile that tracks how a person actually grows over time. Skills demonstrated on the job. Coaching conversations. Stretch projects. Mentorship moments. The informal learning that happens between formal events. AI-assisted feedback loops. All of it tied to one person, one trajectory.
Most L&D teams cannot build this today. The data lives in seven systems and none of them talk to each other. The LMS knows about completions. The HRIS knows about job changes. Performance reviews live in a separate tool. Coaching notes are in someone's notebook, or not captured at all.
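To make the idea concrete, here is a rough sketch of what joining those fragmented records into one profile might look like. Everything here is illustrative: the system names, field names, and sample records are assumptions for the sketch, not Delta's actual schema or any vendor's API.

```python
# Hypothetical records as they arrive from three separate systems.
# All field names and sample values are illustrative, not a real schema.
lms_completions = [
    {"employee_id": "e42", "course": "Data Literacy 101", "completed": "2024-03-01"},
]
hris_moves = [
    {"employee_id": "e42", "event": "promoted to Senior Analyst", "date": "2024-06-15"},
]
coaching_notes = [
    {"employee_id": "e42", "note": "Led retro for Q2 launch", "date": "2024-05-20"},
]

def single_view(employee_id: str) -> dict:
    """Merge fragmented records into one machine-readable learner profile."""
    return {
        "employee_id": employee_id,
        "completions": [r for r in lms_completions if r["employee_id"] == employee_id],
        "career_moves": [r for r in hris_moves if r["employee_id"] == employee_id],
        "coaching": [r for r in coaching_notes if r["employee_id"] == employee_id],
    }

profile = single_view("e42")
```

The hard part in practice is not the merge itself, it is agreeing on one shared employee identifier across systems that were never designed to share one.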
The pattern across every session
Here is what I noticed at almost every AI session at eMerge: the organizations actually scaling AI solved the data foundation problem first. Consolidate, standardize, make it machine-readable, then build. The ones who skipped that step are stuck in perpetual pilot mode: AI demos that never become AI workflows.
Noelle Russell put it cleanly. There is a gap between AI excitement and AI impact, and the bridge is measurement that proves value rather than measurement that counts activity.
What this means for your work
If your team is trying to use AI to personalize learning, recommend content, predict skill gaps, or coach managers, the bottleneck is almost never the AI model. It is the learner data underneath. A great recommendation engine on top of fragmented data produces fragmented recommendations.
So the practical move for any L&D leader thinking about an AI strategy is to start one layer below the AI layer. Audit what learner data you have, where it lives, and what it would take to consolidate even a small slice of it. You do not need a Single View of the Learner on day one. You need a single view of one important learning outcome, in one machine-readable place, that you can build on.
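That audit can start absurdly small. A minimal sketch, assuming nothing more than lists of which employees have a record in which system: for one outcome, report who is missing from each source. The system names and IDs below are hypothetical placeholders.

```python
# Which employees have a record in each source system (hypothetical data).
sources = {
    "lms": {"e1", "e2", "e3"},       # employees with a completion record
    "hris": {"e1", "e2", "e4"},      # employees with a current role record
    "performance": {"e2", "e3"},     # employees with a latest review
}

def coverage_report(sources: dict[str, set[str]]) -> dict[str, list[str]]:
    """For each system, list the employees missing from it — the gaps to
    close before any AI layer can reason over a complete profile."""
    ids = set().union(*sources.values())
    return {name: sorted(ids - present) for name, present in sources.items()}

gaps = coverage_report(sources)
# gaps["performance"] -> ["e1", "e4"]: no review data exists for these two
```

A one-page report like this, run on real data, tells you more about your AI readiness than any vendor demo.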
That is a strategy. The rest is theater.
PROMPT OF THE WEEK
Use this when scoping an AI initiative for your L&D function. It forces you to ground the idea in data reality before building anything.
You are an L&D strategist. I want to use AI to [describe the AI use case,
for example: recommend learning content to employees based on their role
and goals].
Before I build anything, help me audit the data foundation this requires.
List:
1. The specific data inputs the AI needs to work well
2. The systems where that data likely lives in a typical enterprise
3. The most common gaps or inconsistencies in that data
4. The minimum viable data set I could start with
5. Three risks if I build the AI on top of incomplete data
Be specific and practical. Assume I have a real L&D function, not a
research lab.
This is the prompt I am running on every AI initiative we are scoping right now. It saves weeks of pilot work that would otherwise stall.
One ask
If you want help running a real audit on your team's AI readiness, that is exactly what the AI Readiness Audit is built for. Reply to this email and I will send details.
From, Gus
P.S. The notebook I came home with is dense. Expect a few more issues drawing from it.
