Empowering Instructional Designers with AI: A Conversation with Gerry White

Welcome to CommLab India’s eLearning Champion video podcast featuring Gerry White, the Dean of Academic Technology at ECPI University and a respected voice on the practical application and future of artificial intelligence. As a technologist and educator, Gerry guides students and faculty on how to use AI wisely and creatively. He is also a published author known for making complex technological ideas approachable and human-centric.
CommLab Podcast with Gerry White
Hello and welcome back to the eLearning Champion podcast, where we deep dive into strategies, trends, and triumphs shaping the world of digital learning. Now let's gear up for the AI powered learning revolution. Trust me, you won't want to miss the session today. Our guest is Gerry White, the Dean of Academic Technology at ECPI University and a respected voice on the practical application and future of artificial intelligence. As a technologist and educator, Gerry is on the front lines guiding students and faculty on how to use AI wisely and creatively. He is also a published author and writer known for making complex technological ideas approachable and human-centric. Gerry's unique blend of deep technical insight and philosophical perspective is exactly what we need to cut through the noise. Today, he's here to talk about how AI empowers and elevates the work of instructional designers and L&D leaders, not replaces it. Gerry, welcome to the show.
Gerry White 6:36
Thank you for having me, Sherna.
Sherna Varayath 6:39
Before we dive in, make sure you're a true eLearning champion by hitting that Subscribe button. Now getting started, Gerry, many L&D professionals feel a mix of excitement and fear that AI is either going to solve all their problems or take away their jobs. From your perspective, where does the real value of AI lie for an experienced instructional designer and what non-negotiable human skills does it augment versus automate?
Gerry White 7:11
Well, first off, everyone has that same excitement and fear about this technology, but definitely in the education space with instructional design, it can make one person much more productive and much more effective at what they're building. There's the numbers ratio, of course: what you could do in 10 hours, you can maybe do in 30 minutes now; what 10 people could do, maybe one person can do. So everything is going to move around a little bit, but if people stay up with the technology, and with AI that means staying up with it pretty much daily, they'll be able to leverage the knowledge they already have in the field to build great classes and great design for their students.
Now as for the skills, empathetic reasoning is probably one of the most important things in anything we're doing with AI right now, and that's a skill AI will not be able to replicate in the foreseeable future of 5 or 10 years. Who knows what it might be able to do after that? So, designing things with the student in mind is very important. It always has been. But now a designer can do things like create learning personas: let me create five different types of students and run them through this content. Did this student hit the mark? Did that student hit the mark? What specific types of applications worked well with what type of learner and which learning persona? That really helps personalize the experience for the student.
Sherna Varayath 9:00
Wow, interesting. Now an ID’s core skill is analysis, design, and evaluation. Could you walk us through one high leverage task, like needs analysis or rapid prototyping, and explain how a tool like Generative AI transforms this process from hours to minutes?
Gerry White 9:25
Well, the prototyping would come first. Everybody's familiar with having a template for your presentations, whether it's a PowerPoint template, a Keynote template, or whatever it is. We can create whole templates for entire workflows with different types of assignments, whether they're scenario-based or writing-based. Just fill in the template, hit a button, and it will spit out multiple types of assignments.
As for the analysis piece, as I mentioned a little while ago, as you refine the learning personas that you've created, you can run them through the gauntlet of these prototypes, then go in to fine tune them. A lot of what people will be doing is not just developing and analyzing, but curating the content for different times, which one month to the next month can really change things rather quickly. As for the time that a person spends on it, you can create a system that can almost update itself, especially for new models.
If GPT goes from 5.0 to 6.0, you might write something like an agent that carries your experience from 5 to 6 so that it doesn't change too much for the user. A lot of people have this huge prompt library, and once the version changes, everything switches around, and a lot of people aren't comfortable with that, but that's a way to handle it. And as for analysis, when you create the benchmarks that you want these to hit, how you're measuring the learning outcomes, how you're measuring the value of the prototypes, you can create analysis modules. You could run the whole workflow we just talked about through them, and they would give you metrics for different majors. How do our computer science majors do with this? How do our humanities majors do with this?
And all of this is going to save time. More than just time, this technology will allow us to do a lot more. And that's the same thing we're asking students to do. A lot of people say, "Oh no, students are just going to use this to get out of doing their assignments," or "it's going to take my job." What we can do now is require a lot more of our students and a lot more of ourselves. So you can dive much deeper. Where you might have gone two directories deep into an idea, maybe now we can get much deeper into it on the ID side, and then on the user or student side as well.
Sherna Varayath 12:22
OK, now many of our listeners are skilled at content creation. In an AI powered world, what new essential skill, prompt engineering or data-driven editing or maybe something else, must IDs develop to ensure that AI generated content is accurate, ethical, and pedagogically sound?
Gerry White 12:49
All of those are very important. As far as content creation goes, creativity is still the most important thing, but prompts are how you're going to create everything. I have a feeling that the language model is going to be the layer in the stack for every way we interact with computers. So keyboards will disappear, phones will disappear. It will all be through that natural language processing. So the way that you design prompts matters, and we don't have to call it prompt engineering; we can just call them questions.
For the first time in maybe 2,000 years, the question is more important than the answer. So how do you ask it to do exactly what you want it to do, and be very concise, very specific? And you will get better at it, just like the students will, as you work through that. Now, as for ethics, that comes on both sides of the equation. You want to own what you create, and you give attribution. It's one of the things we teach our students, and we need to do the same thing: if an AI creates a paragraph or an image, we want to make sure we give attribution, not just because it's the right thing to do, but to show students that this is a great tool. It would have taken me 5 hours to do this in Photoshop. So if you're an ID creating a PowerPoint to go along with an exercise that will end up being a test, and you're creating 20 images, if you don't just have a stack of stock material, and everybody wants something more custom than stock, it takes hours and hours. But now you can put things together much more easily, through the way you construct the question you ask: I want to show a student writing on a tablet, interacting with an AI on a computer screen, in the classroom, with the teacher up front helping two students. I mean, you get the exact picture.
And probably in the not-too-distant future, the pictures can be live and personal to the student. So at a certain point you can write it into the assignment that it will know the student, and it'll feed them an image that goes right with that student, maybe with their extracurriculars. If they're a gamer, it shows it to them through a gaming analogy. But attribution is the key with that, and on the back end, we have to be careful. Think about all the lawsuits coming up from writers, film studios, artists. They're trying to sue these AI companies all over the world, and nobody's really settled that yet.
And is that any different from what we do? You could argue everything we say is a combination of things we've already heard, put in a different order. So the jury is still out on how that's going to shake out. But what we can do is make sure that when the AI helps us create anything in an assignment, we give that attribution. And making it personalized for the student experience is so much easier and quicker now.
Sherna Varayath 16:28
Now personalization has always been the L&D Holy Grail. How can training managers realistically leverage AI today, even with simple tools, to move from one-size-fits-all training to genuinely adaptive and personalized learning paths for employees at scale, something along the lines of what you just shared?
Gerry White 16:57
Well, first off, you would decide what you want to use to build this experience. Am I going to build an agent, or am I going to build what a GPT would be, which is just a little chat program that you would write, or it could also be a stack of agents throughout a process. So let's think of it like this: if you're going to do a paper, you might build a brainstorming agent. And you could get the student to answer a few questions on the front end, like, what's your major? Out of these three things, which would you rather do? And once you have a system built behind that, you won't have to ask the students those questions, because it will remember everything that student has used in that program in its context. For example, if the school uses ChatGPT and has it on their back end, that user will already be stored. So it will serve up brainstorming techniques and strategies that fit with that student's life and the studies they're doing.
The next step might be a peer review step. You could construct different peers. Of course, it's never better than a real person, but you could construct different peers. It might be by major, or peers based on the people they think they'll work with once they graduate.
And then they run that brainstorming draft through it, and it offers suggestions on how to improve it into a rough draft. Then you get to the drafting agent, right? And then, once they've written that, you can program it to do whatever you want. In writing it would be: let's look at the logic. Do I have the basics, like a contextual introduction? Do I have a thesis? Do I have body paragraphs? Is it an argument? What type of argument? Those are the big concerns. Then it moves on, and again the analysis part on the front end will take that in and offer suggestions. The student decides, and then it moves on to the revision agent, and there they'll go in and change what they think they should. They won't change things they don't want to change. And remember, just because a person or an AI tells you something in your work should be different doesn't mean you have to do it. You want to listen to everyone, but everyone is not always right. So they revise the things they want, and then hopefully there would be a teacher in the loop. But if not, you could create an agent on the very back end to come in and check all the marks along the process and maybe help them.
If it knows mechanical things that student has trouble with, like run-on sentences or sentence structure, it could tell them, "Remember you have issues with this, go back through and look at that," instead of just correcting it for them. So it'll reinforce the final proofing stage. I have seen this with a system that keeps everything the student has done in context, and that's probably where everything is going to go, for me and you and for students: you will just have your AI, and it will know everything you've done. In school, it'll be everything you've done in school, but you'll also have one on your phone that remembers everything you've talked about, everything you've looked at, every website you've been to, everything you've talked about with your friends. And imagine being able to roll that into your assignments: remember when Brooke and Ryan were at the restaurant last week? That'd be the perfect thing to plug into this scenario assignment. So it's personalization on a level that no one has even imagined before, using some of the things that social networks leverage so well, those algorithms that feed us things. We could construct essentially algorithms to feed into the assignments the student's hopes, wishes, subjects, topics of interest, strengths, weaknesses, all those things.
Sherna Varayath 21:04
Data privacy and intellectual property are major concerns for many organizations globally. When using AI to generate proprietary training content, what are the top two critical ethical and security considerations that one must address before scaling the AI adoption?
Gerry White 21:48
Well, at the institution level, there are a couple of different ways. One way would be to go with one vendor: go with OpenAI, go with Perplexity, go with Gemini, and then you can have your own silo of that model, and whatever you do doesn't get used in the training model and supposedly doesn't get shared with the company as a whole. Now, that's what they say. So that can make sure that everything you're doing stays proprietary, especially if you take the time to build something like we just talked about, that string of agents for a paper.
You want that to stay with your students. It won't be able to be accessed by anyone outside if you silo it off like that. I've had some experience with that, and it tends to work. Now, if you don't have that, there are certain things you can do. When you use these models, you can opt out of having your information used in their training data. Again, whether that actually happens or not, I don't even know how we would check. But don't put anything sensitive in a public demonstration or a public talk, like the one you and I are doing right now. If I just pulled everything up and we looked at the code of all those agents, that's exposure; if you're worried about it, you don't want to do it that way.
Now, ethically, it's just like it's always been with writing and reusing code. You want to note your sources in comments. If you're going to use someone's work, you want to ask them, unless it's already public. If it's already public, you want to give it attribution.
But if you see an idea somewhere, at a talk or on a podcast, especially if it's specific, contact the person. I love for people to contact me; I'll help anybody with anything. So make sure credit goes where it's due. I think at the institution level, it would be selecting one vendor, or maybe a couple. I do know some places that leave it on the onus of the student. They get them an account. They're still at the beginning of this process, but they'll have prompt libraries for different types of assignments. That's the very beginning stage, before you move into the enterprise.
Sherna Varayath 24:21
Coming to the evolution of skill set, the core skills of an ID are still crucial. In the next three to five years, which existing skill, for example, scenario writing, emotional intelligence, or impact measurement, do you think will become more valuable and maybe difficult for AI to replicate? So then the focus shifts to making it an essential skill for professional development.
Gerry White 24:58
I think anything that has to do with measurement, data, or statistics is going to be done better by AI, no matter what it is. We can spin it any way we want. It's going to be able to crunch the numbers better and visualize the data in the ways we want, and even imagine ways to visualize the data that we wouldn't think of. It could even serve up personalized data imagery to an ID so they can see it the way that works best for them. But human to human, we mentioned this before, those skills like emotional intelligence and empathetic reasoning. How do you develop empathy and humanity? I know that sounds touchy-feely, and people want to stay away from that, but that's the part that AI can't do, and it's the part that we don't want to lose.
Because why are we doing any of this if it's not for us? We're the humans, right? So the design skill, maybe even understanding some basic psychology, basic sociology, and trial and error with real people. You talked about scenarios, constructing multiple scenarios, and still the AI is nowhere near as creative as a person when it comes to things like that. It can do some things, but it can't truly create. It won't give you a creative idea on its own; it will tell you what you want to hear, which is dangerous. It cannot have an idea by itself, and that's what so many people are forgetting. It can't do anything without you telling it what to do. So you have an idea: hey, let's do a scenario, something as simple as building an interview scenario. I'm going to construct three different managers who work at AWS, and they're going to be hiring entry-level cybersecurity. One is a guy who had a good day; it's early in the morning. One is a guy who's been there for years. And another one has had a horrible day, and this is the last thing he has to do. A machine couldn't have come up with that idea. So you build these things, and you run a person through the gauntlet of these interviews with all three different audience types.
And then again, it's the creativity and person to person skills that we don't want to lose. I know a lot of people don't like to hear it, but once again, unfortunately or fortunately, if it has to do with numbers or just simple things that we would do repetitively, AI is always going to do a better job of that.
Sherna Varayath 28:02
Yes, that's so rightly said. Now, if we look beyond pilot programs, for an L&D team or somebody in instructional design who has been dabbling with AI tools, what is the single most important first step they need to take right now to transition from the experimentation phase into a sustainable, strategic integration of AI across their ecosystem?
Gerry White 28:37
I think for the IDs themselves, one thing I would tell them all to do is look into what's called vibe coding. If you don't know what vibe coding is, it's basically using prompts, natural language, to create systems. So you're not limited to tools like the ones we might have used, Coursera or PowerPoint and all of that; you can create a tool specific to what it is you're doing, and some of these are amazing. ChatGPT has a version. Of course, Gemini has a version. Replit is one. Creative AI is one, and you just start building the system by talking about it. So it will not only show how quickly you can build something that's way more scalable than anything we've built before, but it's going to teach them skills maybe they didn't have before, like why or where in the stack the database comes, where the security layer is, where the personalization layer is, where am I going to put that? And as they do this, whatever program they're using, it will show them how it works. It will explain to them: maybe if you move this here, or maybe if you put this on the front end, it might help in this way. So it's going to make them a lot more conscious of the big ecosystem. So you could take that agent stack I talked about for English and build the whole thing as one program, instead of just stringing agents together on a canvas or something like that.
Let's build this system for all the papers that students write across the enterprise. For coding, let's build one for all the programmers. So I do this in this language. Now do it in this language and in any discipline, whether it's medical, business, whatever it might be. So I would check those out. And if you have any trouble with this, just ask your model. If you're in ChatGPT, say what is the best solution using ChatGPT to vibe code a system that I could build for students who are business majors trying to do a SWOT analysis or whatever it might be, and it'll pull the tool up. If they don't have that type of account, it'll tell them what type of account and it'll tell them free resources if they don't want to get that type of account. It will guide them through. Now on a level like say me or you, we could build something in ChatGPT that teachers or IDs could use where it asks those questions. What type of system are you trying to build? And then it will personalize it that way. Dabble with it, which is what I tell everybody. Play with this stuff. Don't be afraid of it. And the quickest way to get comfortable with it is to talk to it and talk with it. Turn the voice mode on and just tap it. That's the way I use it. I put my phone up here and as I'm writing, doing anything, I'm constantly talking to it. Go look this up. Go look that up. What's a better way to do this? So playing with all the tools, but I know ChatGPT and Replit, they're both really awesome for that.
Sherna Varayath 31:55
Wow, interesting. And this brings us to the end of another powerful episode of the eLearning Champion podcast. Thank you so much, Gerry, for sharing your valuable insights, perspectives, and literal prompts that our listeners can try out.
Gerry White 32:10
Oh, you're very welcome. Thank you for having me. I enjoyed it. Again, if anybody ever needs any help, feel free to contact me and reach out. I'll be glad to help.
Sherna Varayath 32:20
Absolutely. A gentle reminder to our listeners and viewers, do take a moment to share your knowledge and empower more champions in our field. Follow CommLab India on LinkedIn and send us your suggestions and feedback on your favorite social media platforms. We love hearing from you. Thank you so much for tuning in. Keep innovating. Keep inspiring. Thank you so much, Gerry. I really like the vibe coding part. I believe that was a very interesting section and will appeal to a lot of our listeners, especially students and IDs.
Here are some takeaways from the interview.
The real value of AI for an experienced instructional designer
In the education space with instructional design, AI can make instructional designers much more productive and much more effective at what they're building. What they could do in 10 hours, they can now do in 30 minutes, and what 10 of them could do, could be done by one person now. If they stay up to date with the technology, they'll be able to leverage their existing knowledge to build great design for their learners.
Empathetic reasoning, probably one of the most important things we're doing with AI right now, is a skill AI will not be able to replicate in the foreseeable future. Designing things with the learner in mind is very important. But now a designer can create different learning personas and run them through the content. Did this learner hit the mark? Did that learner hit the mark? What specific types of applications worked well with what type of learner and which learning persona? That helps personalize the experience for the learner.
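The persona-review idea above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not a vendor-specific implementation: `Persona`, `persona_review_prompt`, and `ask_model` are hypothetical names, and `ask_model` stands in for whatever function you use to call your LLM of choice.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """A hypothetical learner profile to test content against."""
    name: str
    background: str
    goal: str

def persona_review_prompt(persona: Persona, content: str) -> str:
    """Build a prompt asking the model to judge the content
    from this persona's point of view."""
    return (
        f"You are {persona.name}, a learner with this background: "
        f"{persona.background}. Your goal is: {persona.goal}.\n"
        "Read the lesson below and answer: did it hit the mark for you? "
        f"What worked, and what did not?\n\n{content}"
    )

def review_with_personas(personas, content, ask_model):
    """Run the same content past every persona; ask_model is any
    callable that sends a prompt to an LLM and returns its reply."""
    return {p.name: ask_model(persona_review_prompt(p, content)) for p in personas}
```

Swapping in real learner profiles and a real model call turns this into the "run five types of students through this content" loop described above.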
How tools like Generative AI transform high leverage tasks
The prototyping would come first. We have templates for our presentations, whether it's a PowerPoint template, a Keynote template, or whatever. We can create whole templates for entire workflows with different types of assignments, whether scenario-based or writing-based. Just fill in the template, hit a button and it will spit out multiple types of assignments.
And as for the analysis piece, as you refine the learning personas you've created, you can run them through the gauntlet of these prototypes, and then fine tune them. People will not just be developing and analyzing, but they’ll be curating the content for different times. And you can create a system that can almost update itself, especially for new models. When you create the benchmarks that you want these to hit, how you're measuring the learning outcomes or the value of the prototypes, you can create analysis modules that you could run a whole workflow through, and it would give you metrics for different majors. All of this is going to save time, and will allow us to do a lot more.
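The "fill in the template, hit a button, spit out multiple types of assignments" workflow can be sketched like this. The template text, slot names, and assignment styles are illustrative assumptions; each generated string would be sent to a model as a prompt.

```python
from string import Template

# One reusable workflow template; ${...} slots are filled per assignment type.
ASSIGNMENT_TEMPLATE = Template(
    "Create a ${style} assignment on '${topic}' for ${audience}. "
    "It must assess these outcomes: ${outcomes}. "
    "Include grading criteria and an estimated time to complete."
)

def build_assignment_prompts(topic, audience, outcomes,
                             styles=("scenario-based", "writing-based", "quiz")):
    """Fill the template once per assignment style, producing one
    ready-to-send prompt per type of assignment."""
    return [
        ASSIGNMENT_TEMPLATE.substitute(
            style=style, topic=topic, audience=audience,
            outcomes=", ".join(outcomes),
        )
        for style in styles
    ]
```

From here, the analysis step is a matter of running each generated assignment past your learning personas and benchmarks.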
Essential skills IDs must develop to ensure AI generated content is accurate, ethical, and pedagogically sound
As for content creation, creativity is still the most important thing, but prompts are how you're going to create everything. The language model is going to be the layer in the stack for every way we interact with computers. So keyboards will disappear, phones will disappear, it will all be through natural language processing. For the first time in maybe 2000 years, the question is more important than the answer. So how do you ask it to do exactly what you want it to do, to be very concise, very specific?
As far as ethics goes, that comes on both sides of the equation. You give attribution. If an AI creates a paragraph or an image, we want to make sure we give attribution to that, not just because it's the right thing to do, but to show learners that this is a great tool. It would have taken me 5 hours to do this in Photoshop. But now you can put things together much more easily. I want to show a student writing on a tablet, interacting with an AI on a computer screen, in the classroom, with the teacher up front helping two students. And probably soon, the pictures can be live and personal to the learner. So you can write it into the assignment that it will know the learner, and it'll feed them an image that goes right with that learner. If they're a gamer, it shows it to them through a gaming analogy. But attribution is the key with that, and on the back end, we must be careful. What we can do is make sure that when the AI helps us create anything in an assignment, we give that attribution and use it to make the experience as personalized as possible for the learner.
Leveraging AI to move from a one-size-fits-all training to adaptive and personalized learning paths for employees at scale
First off, you would decide what you want to use to build this experience. Are you going to build an agent, or a GPT, or a stack of agents throughout a process? If you're going to do a paper, you might build a brainstorming agent. And you could get the student to answer a few questions on the front end, like:
- What's your major?
- Out of these three things, which would you rather do?
Once you have a system built, you won't have to ask the students those questions because it will remember everything that that student has used in that program in the context. And it will serve up brainstorming techniques and strategies that fit with that student's life and the studies that they're doing.
For the peer review step, you could construct different peers, based on people they think they might work with once they graduate. And then run that draft of that brainstorming through that, and it offers suggestions on how to improve this to go to a rough draft.
Then you get to the drafting agent. Once they've written that, you can program that to do whatever you want. In writing it would be looking at the logic. Do I have basic things? Is my introduction contextual? Do I have a thesis? Do I have body paragraphs? Is it an argument? What type of argument is it?
Then the analysis part on the front end will take that in and offer suggestions for the learner. Then it moves on to the revision agent, where they'll change what they think they should. Hopefully there would be a teacher in the loop. But if not, you could create an agent on the very back end to check all the marks along the process and help them. We could construct algorithms to feed the learner's hopes, wishes, subjects, topics of interest, strengths, and weaknesses into the assignments, for personalization on a level that no one has even imagined before.
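The brainstorm, peer review, draft, and revise stages above can be sketched as a simple pipeline where each agent's output feeds the next. The stage prompts here are placeholder assumptions, and `ask_model` is any callable wrapping your model; a real stack would add the teacher-in-the-loop and mechanics checks described above.

```python
def agent_stack(assignment, ask_model):
    """Chain the stages of the paper workflow: each agent's output
    becomes the next agent's input. ask_model is any callable that
    sends a prompt to an LLM and returns its reply."""
    stages = [
        ("brainstorm", "Suggest three angles for this assignment: {prev}"),
        ("peer_review", "As a peer from the student's field, critique these ideas: {prev}"),
        ("draft", "Turn the strongest idea into a rough draft outline: {prev}"),
        ("revise", "Check the outline for a thesis, a contextual intro, and argument type; suggest revisions: {prev}"),
    ]
    history = {"input": assignment}
    prev = assignment
    for name, template in stages:
        prev = ask_model(template.format(prev=prev))
        history[name] = prev  # keep every stage so a teacher can review the whole process
    return history
```

Keeping the full `history` is what lets a back-end agent, or a teacher, check all the marks along the process rather than only the final product.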
The top critical ethical and security considerations when using AI to generate proprietary training content
At the institution level, you could go with one vendor and one AI tool, OpenAI, Perplexity, or Gemini, and have your own silo of that model. Then whatever you do doesn't get used in the training model and doesn't get shared with the company as a whole. It won't be able to be accessed by anyone outside if you silo it off like that.
If you don't have that, you can opt out of having your information used in the training data of these models. Don't put anything sensitive in a public demonstration or a public talk. Ethically, it's just like it's always been with writing and reusing code: you note your sources in comments. If you're going to use someone's work, you ask them, unless it's already public. If it's already public, you give it attribution. But if you see an idea somewhere, at a talk or on a podcast, especially if it's specific, contact the person.
The existing ID skill that will become more difficult for AI to replicate
Anything to do with measurement, data, or statistics is going to be done better by AI. It's going to be able to crunch the numbers better and visualize the data in any way we want. It could even serve up personalized data imagery so an ID can see what works best for them.
But AI can't do human to human skills like emotional intelligence, empathetic reasoning, that's the part that we don't want to lose.
AI is nowhere near as creative as a person when it comes to things like design skill, basic psychology, basic sociology, and trial and error with real people. It can do some things, but it can't create. It will tell you what you want, but it cannot have an idea by itself, and it can't do anything without you telling it what to do.
For example, you want to build an interview scenario with 3 different managers that work at AWS hiring entry level cybersecurity. One is a guy who had a good day. One is a guy who's been there for years. And then there’s another guy who's had a horrible day, this is the last thing he wants to do. A machine couldn't have come up with that idea. So you build these things and you run a person through the gauntlet of these interviews with all three different audience types. It's the creativity and person to person skills that we don't want to lose. But unfortunately or fortunately, if it has to do with numbers or just simple things that we would do repetitively, AI is always going to do a better job of that.
The first step an L&D team should take to transition into a sustainable strategic integration of AI across their ecosystem
IDs should look into vibe coding, using prompts, natural language, to create systems. Then you're not limited to the tools you might have used; you can create a tool specific to what you're doing. ChatGPT, Gemini, and Replit each offer a version, and Creative AI is another; you just start building the system by talking about it. It not only shows how quickly you can build something more scalable than anything we've built before, but it also teaches skills you didn't have before, like why or where in the stack the database comes, where the security layer is, where the personalization layer is, and where to put them. And as you do this, whatever program you're using will show you how it works.
So, IDs should check those out. If they have any trouble, they should just ask their model. For example, in ChatGPT, ask what the best solution is, using ChatGPT, to vibe code a system for business majors to do a SWOT analysis, and it'll pull the tool up. If they don't have the right type of account, it'll tell them what type of account they need, and it'll point them to free resources if they don't want to get that type of account. It will guide them through.
We could build something in ChatGPT that IDs could use, where it asks, "What type of system are you trying to build?" and it will personalize it that way. Dabble with it, don't be afraid of it. The quickest way to get comfortable with it is to talk to it and talk with it. Turn the voice mode on and just tap it. That's the way I use it: I'm constantly talking to my phone as I'm writing or doing anything. "Go look this up. Go look that up. What's a better way to do this?" So play with all the tools, but ChatGPT and Replit are both awesome for that.

