Case Study · Tekion · 2024–2025

Integrating AI When Your Team Is Afraid

TL;DR: Eian Newland integrated AI into Tekion's L&D content development through a three-phase framework (Pilot, Expand, Govern), cutting development cycle time by 31% and administrative burden by 23% with zero team turnover during the transition. The hardest part wasn't the technology. It was managing the human dynamics.

The challenge

By mid-2024, AI tools had matured enough to be genuinely useful for content development. But my team had legitimate concerns: quality degradation, IP compliance, job security. Leadership wanted faster output and lower costs. I needed to adopt AI responsibly while keeping the team engaged and maintaining the quality standards we'd built.

The teams falling behind on critical projects were the ones not using AI tools; the teams experimenting with them were pulling ahead. The gap was becoming visible, and I knew the adoption question wasn't whether but how.

The approach

I framed AI as a workflow optimization, not a workforce reduction. The pitch to the team: "AI handles the parts of your job you don't love, so you can spend more time on the parts you do."

But I didn't just say it. I sat in check-ins with the teams that were falling behind, made them open ChatGPT and Claude side by side, and we experimented together in real time. Some prompts worked. Others produced absolute nonsense. We kept going until we found what was useful. That wasn't a strategy. It was instinct. I learn by doing, and I model that for my team.

The three-phase rollout:

  • Pilot (2 months): First drafts, metadata generation, translation assistance. Two team members, three defined use cases. Measured quality AND speed — not just speed.
  • Expand (3 months): Full team adoption with documented workflows, custom prompt libraries for each content type. Same quality gates for AI-assisted and human-only work.
  • Govern (ongoing): AI governance framework, IP compliance, audit trail. Regular review cycles. Transparent communication about what AI was and wasn't doing.

The execution

During the pilot, we caught three instances where AI pulled language from competitor materials. If we'd rolled out to the full team without guardrails, those would have been compliance violations. Starting governance in Phase 1 — not Phase 3 — was the right call.
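Guardrails like these don't need to be elaborate to be useful. As a minimal sketch (the flagged phrases and tooling here are hypothetical; the source doesn't describe Tekion's actual checks), a pilot-stage screen can be as simple as matching drafts against a maintained list of competitor phrasing before anything ships:

```python
import re

# Hypothetical blocklist of competitor phrasing; the real list and
# scanning tooling aren't described in this case study.
FLAGGED_PHRASES = [
    "dealer lifecycle cloud",
    "one platform for every dealership",
]

def scan_draft(draft: str, flagged=FLAGGED_PHRASES) -> list[str]:
    """Return any flagged phrases found in an AI-generated draft."""
    hits = []
    for phrase in flagged:
        if re.search(re.escape(phrase), draft, flags=re.IGNORECASE):
            hits.append(phrase)
    return hits

draft = "Our One Platform for Every Dealership approach simplifies onboarding."
print(scan_draft(draft))  # → ['one platform for every dealership']
```

A check this crude still catches the obvious cases early, which is exactly what matters in a pilot: the point is to be looking at all, not to have perfect detection.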

We built custom prompt libraries for each content type and created review standards specifically for AI-generated output. Weekly retros captured what worked and what didn't, which tightened the feedback loop considerably.
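A prompt library can be as lightweight as one vetted template per use case. This sketch is hypothetical (the actual prompts and content types aren't given in the source), but it shows the shape: everyone fills the same template the same way, so output quality stops depending on individual prompting skill:

```python
# Hypothetical per-use-case prompt templates; the real library
# built at Tekion isn't reproduced in this case study.
PROMPT_LIBRARY = {
    "first_draft": (
        "Draft a {length}-word training module on {topic} for {audience}. "
        "Use our style guide: plain language, active voice, no jargon."
    ),
    "metadata": (
        "Generate a title, summary, and five search keywords for this "
        "module:\n{module_text}"
    ),
    "translation_assist": (
        "Translate this module into {language}, preserving product names "
        "as-is:\n{module_text}"
    ),
}

def build_prompt(use_case: str, **fields) -> str:
    """Fill the shared template for a use case so prompts stay consistent."""
    return PROMPT_LIBRARY[use_case].format(**fields)

print(build_prompt("first_draft", length=500,
                   topic="service scheduling", audience="new advisors"))
```

Keeping templates in one reviewed place also gives the governance phase something concrete to audit: change the template, and every subsequent prompt changes with it.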

The team sentiment data was as important as the output metrics, and we tracked both. No one left over AI anxiety, because the team had been part of the process from the beginning rather than handed a mandate from above.
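Pairing the two kinds of data takes very little tooling. The survey scale, field names, and numbers below are illustrative, not Tekion's actual instrument; the point is simply that speed and sentiment get reported together, so neither can be optimized in isolation:

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical shape for paired tracking; the real survey and metrics
# pipeline aren't specified in the source.
@dataclass
class WeeklySnapshot:
    week: str
    cycle_time_days: float  # output metric
    sentiment_score: float  # 1-5 pulse-survey average

def summarize(snapshots: list[WeeklySnapshot]) -> dict:
    """Report speed and sentiment side by side in one summary."""
    return {
        "avg_cycle_time_days": round(mean(s.cycle_time_days for s in snapshots), 1),
        "avg_sentiment": round(mean(s.sentiment_score for s in snapshots), 2),
    }

history = [
    WeeklySnapshot("2024-W30", 12.0, 3.2),
    WeeklySnapshot("2024-W34", 10.5, 3.6),
    WeeklySnapshot("2024-W38", 8.3, 4.1),
]
print(summarize(history))  # → {'avg_cycle_time_days': 10.3, 'avg_sentiment': 3.63}
```

Putting both numbers in the same report is the design choice that matters: a cycle-time win that coincides with falling sentiment shows up immediately as a problem, not a success.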

The results

  • 31% reduction in development cycle time
  • 23% reduction in manual administrative burden
  • 100% SOC2/ISO audit compliance maintained
  • 0 team members lost during the AI transition

The insight

The hardest part of AI adoption isn't the technology. It's managing the human dynamics. Leading with transparency, involving the team in implementation decisions, and measuring quality alongside speed made this a team success rather than a leadership mandate.

For anyone attempting the same transition, three decisions mattered most: start governance in Phase 1, not Phase 3. Build the prompt library before expanding to the full team. And measure team sentiment alongside output metrics from day one. The compliance catches during the pilot validated that approach, but only because we were looking.