October 16, 2024

How we used AI to improve course quality by 20%


Recently, we shared how our Education team uses AI to build courses more quickly. But embedding AI in our workflows isn’t just about speed – it has also raised the quality of our courses. This week, Education Product Lead Kyra Atekwana shares three AI use cases that have increased the quality of our course content, as measured by student satisfaction.

How AI improves our course development process

In the second half of this year alone, our Education team is set to develop 14 net-new workshops, up from 9 in H1. Traditionally, we’ve relied on expert executives (our instructors) to provide subject matter expertise, meaning our Education team has co-built each course with its instructor.

But this approach has been tough for our AI courses – AI experts are scarce and hard to find, and waiting for an expert before starting development prevents us from building out our AI curriculum as quickly as we need to.

This became especially clear this summer, after we launched two back-to-back workshops that didn’t meet our quality bar (an NPS of 60 or higher). At that point, Kyra set out to see whether we could use AI to help us develop more of our AI content faster, before we’d found an expert to teach each course.

Her hypothesis: if we create high-quality drafts of workshop decks that instructors can build on, rather than having them start from scratch, we can set a higher content quality bar upfront and make better use of instructors’ time and expertise, focusing them on practical demos and real-world examples from their experience in the field.

To do this, Kyra broke the workshop development process down into three phases: creating an initial brief with fleshed-out learning objectives, defining the core framework of the course, and developing the use cases we’ll teach. Then she built a custom GPT for each phase: the Workshop Brief Architect GPT, the Workflow Architect GPT, and the Use Case Architect GPT.
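To make the hand-off between phases concrete, here’s a minimal sketch of how three stages like these could be chained with the OpenAI Python SDK – the prompts, model choice, and function names are our own illustration, not Kyra’s actual GPT configurations.

```python
# A minimal sketch of a three-phase pipeline, assuming the OpenAI Python SDK.
# Prompts, model choice, and names are illustrative, not the team's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def run_stage(instructions: str, context: str) -> str:
    """Run one phase of the pipeline as a single, focused task."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": instructions},
            {"role": "user", "content": context},
        ],
    )
    return response.choices[0].message.content

# Each phase's output becomes the next phase's input.
research = "Background research on the course topic goes here..."
brief = run_stage("Draft a workshop brief with clear learning objectives.", research)
workflows = run_stage("Map the key workflows this audience should learn.", brief)
use_cases = run_stage("Propose AI use cases for each workflow component.", workflows)
```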

GPT #1: Creating better workshop briefs

Without a strong initial framework, instructors defined the learning objectives for their courses on their own, often resulting in content that didn’t hit the mark. Our team would sometimes work up until the day of the workshop trying to improve the content quality of courses built around poor original assumptions.

Kyra developed the Workshop Brief Architect GPT to create a comprehensive brief, including these learning objectives. To use this GPT, our team does background research on the general topic they’re building a course for (e.g. AI for Writing) to get a core understanding of which job functions will benefit from the course and what their challenges are. Then they feed this information to the custom GPT and ask it to do three things (sketched in code after this list):

  1. Define core learning objectives: What is the core purpose of the workshop and what will students walk away with?
  2. Target an audience: Who is the workshop best suited for and how can we make that as clear as possible?
  3. Clarify scope and expectations: Much of our negative course feedback stems from misconceptions about what a course covers or what prior knowledge is needed.
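As a rough illustration of that single, focused task, here’s a sketch that asks a model to return the three brief components as structured JSON; the prompt wording and field names are our own assumptions, not the actual GPT’s instructions.

```python
# Hypothetical brief-writing prompt that returns the three components as JSON.
# Field names and wording are illustrative, not the actual GPT's instructions.
import json
from openai import OpenAI

client = OpenAI()

PROMPT = """Given the background research below, return JSON with three keys:
- "learning_objectives": the workshop's core purpose and what students walk away with
- "target_audience": who the workshop is best suited for, stated plainly
- "scope_and_expectations": what is covered and what prior knowledge is assumed

Research:
{research}"""

research = "Notes on which job functions an 'AI for Writing' course serves..."
response = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},  # constrain output to valid JSON
    messages=[{"role": "user", "content": PROMPT.format(research=research)}],
)
brief = json.loads(response.choices[0].message.content)
print(brief["learning_objectives"])
```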

The outcome is a clear, pointed workshop brief in under 15 minutes. “Creating 75% of a workshop without being an expert in that field was a game-changer,” Kyra said.

GPT #2: Developing the core framework

Kyra was focused primarily on our functional and skills AI courses – courses like AI for Writing, AI for Product Managers, and AI for Problem Solving. Core to these courses is teaching students AI use cases – the typical workflows in their day-to-day that can be augmented by AI.

However, in the feedback for the workshops that didn’t perform well, Kyra saw a common theme – students weren’t happy with the use cases. In some cases they were too basic; in others they weren’t applicable to the role or skill being taught.

So Kyra built the Workflow Architect GPT to turn the initial brief into a detailed overview of the AI use cases taught in each course. The GPT (see the sketch after this list):

  1. Identifies key workflows: Using the background research and internal insights, it picks out the main workflows for a given function (e.g. campaign performance for marketing professionals).
  2. Pulls out the components: For each workflow, the GPT details specific steps, challenges, and sub-components (e.g. understanding customer segments of a campaign).
  3. Identifies ways AI can augment these workflows: The GPT right-sizes the workflows to AI’s capabilities and the course’s learning objectives (e.g. AI can analyze campaign data and pull out trends).
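Here’s what that three-step decomposition could look like as one focused prompt, again assuming the OpenAI Python SDK; the instructions paraphrase the steps above and are not the real GPT’s wording.

```python
# Illustrative Workflow Architect stage: workshop brief in, workflow overview out.
# The instructions paraphrase the three steps above, not the real GPT's wording.
from openai import OpenAI

WORKFLOW_INSTRUCTIONS = """\
You turn a workshop brief into an overview of AI-augmentable workflows.
1. Identify the key workflows for the target function
   (e.g. campaign performance analysis for marketers).
2. For each workflow, detail the specific steps, challenges, and sub-components
   (e.g. understanding a campaign's customer segments).
3. For each component, describe how AI can help, sized to AI's current
   capabilities and to the brief's learning objectives.
"""

client = OpenAI()
brief = "Workshop brief for an 'AI for Marketers' course goes here..."
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": WORKFLOW_INSTRUCTIONS},
        {"role": "user", "content": brief},
    ],
)
print(response.choices[0].message.content)
```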

We’re still having our instructors review these workflows and layer on their point of view about which are most ripe for AI – but the workflows identified by AI have consistently been better received (based on NPS) than the ones that were developed in a vacuum by our team or the instructors.

GPT #3: Sharpening use cases

Once we’ve chosen the workflows that AI can augment, we still have to teach students how – the AI use cases. However, our Education team doesn’t have experience in every role in an organization (marketing, product, sales, etc.), so when we generated these use cases on our own or with instructors, they often felt too theoretical.

Therefore, Kyra developed one last custom GPT, the Use Case Architect GPT, to generate detailed, contextually relevant AI use cases for each workflow component identified by the Workflow Architect GPT.

The Use Case Architect provides three outputs (sketched in code after this list):

  1. Possible use cases: The GPT generates several use case options for the team to review.
  2. Context on relevance: By linking each use case to a specific workflow and challenge, the GPT ensures that the solutions are both feasible and impactful.
  3. Feedback-driven insights: The GPT can iterate on its suggestions based on past student feedback, to avoid recreating negative experiences.
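Here’s a sketch of how past student feedback might be passed into that stage so the model steers away from use cases that previously landed poorly; the prompt wording and variable names are our own assumptions, not the team’s.

```python
# Illustrative Use Case Architect stage: past student feedback is included so
# the model avoids use cases that previously landed poorly. All wording here
# is assumed for illustration, not taken from the team's actual GPT.
from openai import OpenAI

USE_CASE_INSTRUCTIONS = """\
For each workflow component you are given, propose several AI use cases.
Tie every use case to the specific workflow and challenge it addresses, and
avoid the patterns flagged in the attached student feedback (e.g. too basic,
or not applicable to the role being taught)."""

client = OpenAI()
workflow_breakdown = "Components of the campaign-performance workflow..."
past_feedback = "Students said earlier use cases were too generic for marketers."

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": USE_CASE_INSTRUCTIONS},
        {"role": "user", "content": f"{workflow_breakdown}\n\nPast feedback:\n{past_feedback}"},
    ],
)
print(response.choices[0].message.content)
```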

The Use Case Architect GPT helped significantly improve the relevance of workshop content, leading to higher student satisfaction and stronger NPS scores.

The ROI: Consistently high-quality course content

It took Kyra three weeks to prototype and hone these GPTs, and she’s still improving them. But we’ve already seen ROI. First, we’re saving time.

“Previously, it took us five weeks to develop content for one workshop. Now it’s just a few hours,” Kyra said. “Instead of spending time developing content, we now spend time refining and improving it.”

Second, workshop quality has increased. For example, NPS for our AI for Product Managers workshop improved from 57 to 73 after we rebuilt the material using this process.

“Our goal was not just to save time, but to ensure quality and relevance,” Kyra said. “We’re now able to maintain and even improve the quality of our workshops, all while creating content faster and with more precision.”

How you can replicate Kyra’s success

  1. Create GPTs for each part of your workflow: Kyra created three custom GPTs because a GPT performs better when it’s asked to accomplish one specific task. If your instructions get too long, the GPT won’t follow them consistently and the outputs won’t always be replicable.

  2. Focus on clarity over complexity in your instruction prompts: More information is not always better with custom GPTs. Give each GPT less information to pull from, and make that information focused and clear. This lets it do one thing really well and prevents it from referencing unnecessary context.

  3. Test iteratively, not at the end: Build out a few steps and see how they perform before developing the rest. This helps you spot where a GPT gets confused earlier and scale it up more smoothly (a minimal testing sketch follows this list).
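For example, an early-stage test can be as simple as running the same focused prompt over a few sample inputs and reviewing the outputs by hand before building the next stage on top. The harness below is a hypothetical sketch, assuming the OpenAI Python SDK.

```python
# Hypothetical harness for testing one stage early: run the same focused
# prompt over a few sample inputs and eyeball where the model gets confused.
from openai import OpenAI

client = OpenAI()

def run_stage(instructions: str, user_input: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": instructions},
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content

SAMPLE_INPUTS = [
    "Research notes for an 'AI for Writing' workshop...",
    "Research notes for an 'AI for Product Managers' workshop...",
]

# Review these outputs by hand before building the next stage on top.
for sample in SAMPLE_INPUTS:
    print(run_stage("Draft a workshop brief with clear learning objectives.", sample))
    print("-" * 40)
```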
Greg Shove
Section Staff