
The Ethical Aspects of AI in L&D: A Conversation with George Alyn Kinney


Welcome to the eLearning Champion Podcast, featuring George Alyn Kinney.

George is a passionate educational professional with over 15 years of experience in a variety of global corporate education roles. He builds teams centered around design thinking and problem solving, and is skilled at developing aspiring leaders and managing performance-based, metric-driven teams. George has a Bachelor's in Philosophy, and is now Senior Manager of Learning and Development at T-Mobile.

CommLab Podcast with George Alyn Kinney

Sherna Varayath 0:23
Hello, listeners. Welcome back to the eLearning Champion Podcast, where we dive into the strategies, strengths, and triumphs shaping the world of digital learning. In today's episode, we are going to examine a very interesting aspect of our future: the ethical side of artificial intelligence in learning and development. And to talk about this at length, I'd like to introduce our guest for today, George Alyn Kinney. Hi, George.

George "Alyn" Kinney 1:05
Thanks for having me on.

Sherna Varayath 1:07
So before we dive in, make sure you never miss an episode by hitting the follow button on whatever platform you're listening from.
Let’s learn and grow together.
So George, to start with, could you tell our listeners a little bit about yourself?

George "Alyn" Kinney 1:29
I've been in learning and development for about 20 years in a variety of different roles, from facilitation through instructional design; I've also done some coaching and led teams. Before I got into education, I got a bachelor's in philosophy, so ethics has always been a passion of mine. I also worked at Google for about a decade, during which time I got really interested in AI as it was developing. Now I'm a Senior Manager of Learning and Development at T-Mobile, getting to apply in real time a lot of the things I've learned over my career, and indulging my fascination with ethics and AI in my day-to-day job. It's great that I get to pull all these things together. But I have to say that all the opinions I share today are my own and not T-Mobile's. Yeah, super stoked to talk about this today.

Sherna Varayath 2:22
Thank you for sharing those details. I am sure the decades of experience you bring hold a lot of worth, value, and insight for all our listeners. So as AI integrates into learning and development, what are the basic ethical considerations that one should think about?

George "Alyn" Kinney 2:54
No matter what company you're working for, it has its own code of business conduct or corporate values that drive your ethics. That's a great place to start. However, most companies didn't consider AI when they were establishing their values and roles and norms. So there's a little bit of extra work for us to do in learning and development, especially around AI. I think the first thing you could do is establish a core ethical guideline, a core code of conduct, specifically for AI in L&D. That sounds hard, but it doesn't have to be anything fancy: thinking through a human-centric approach to the design and implementation of learning solutions, whether they're AI-powered or not. Some learning departments just go for an eLearning intervention because that's the way they've always done things. Now that things are changing, it's a good time to reevaluate and take a very principled approach to what you might do: making commitments to fairness, equity, and inclusion, setting clear policies regarding data privacy and security, and being transparent about everything you're doing. Generative AI specifically is still so new that I feel we're encountering new dilemmas every day. And it's important to foster a culture of ethical awareness and continuous learning.

Sherna Varayath 4:29
So what are the biggest challenges that you have encountered so far?

George "Alyn" Kinney 4:38
The biggest one I hear is fear about the changing role of L&D professionals. I think this is an ethical challenge, specifically, job displacement. Frankly, for my whole career, that's been a concern. Back when I was a facilitator, people were worried that eLearning would replace facilitated learning classes. And that was true to a certain extent. There are fewer onboarding classes; we rely more on eLearning. A lot of those folks who used to be facilitators became instructional designers, me included.

You don't need as many people to create these things as you used to, engineers and all that. It's a very real concern. Another big one that I've noticed is concern about bias impacting fairness in learning experiences. That sounds very high-level, but it comes up very concretely through a sort of AI divide. What I mean by that is that some companies have AI, some don't. Some departments have AI, some don't. Some individuals have gotten access somehow, and that's a big challenge in terms of ethics, because there are new capabilities unlocked by having access to these tools. Whether or not people have access has a huge impact on their careers. But personally, I worry more about over-reliance on AI; that's the flip side. It's worrying because it can potentially diminish critical thinking skills. And those are so important.

Sherna Varayath 6:24
Absolutely. When we talk about the collection and use of employee data by artificial intelligence in L&D, what are some ethical considerations to be careful about?

George "Alyn" Kinney 6:38
Well, there are several different aspects to that. This is a good question. I think one aspect is deepfakes. Your likeness, your picture, who you are, that's a piece of employee data. I had to deal with that very specifically as an ethical dilemma in my job when we brought in Synthesia, which is a great platform we definitely use. But there was a whole series of questions from the people we were turning into these personal avatars: whether their likeness will be kept, whether we'll keep using their face long after they've left the company, all these questions. Because we started with a principled approach, we were able to think those through. There's the legal side of things, and signing a release is great for that. But also, above and beyond that, letting them know the data retention schedules we might have, saying, OK, we're going to get rid of this at such and such a time.

We didn't used to do that. If we had a film for new hires with a welcome message, there was no guarantee that we would delete that welcome message after so many years. But with AI, because it can change the message, it's very important to give people those kinds of reassurances. That said, while people are very emotional about their likeness, it's not the most common concern I run into. The more common one is really around the collection of data, all this information that we've never had access to in the past. Many corporations can see exactly what you're putting into generative AI, and that includes inside of learning experiences. If it's a module around coaching or something like that, it could be your performance or your learning style or your sentiment toward the company, all this information divulged without users necessarily knowing it could be seen. So it's a worry of mine that if we're not careful about specifically blocking it, that information could be pulled out of an environment that's meant to be training. People are supposed to make mistakes there, that's the intent. My worry is that that data may be used for other things, and we've seen this in the past in training. I remember Myers-Briggs. I don't know if you're familiar with that one. It was one of those personality-type quizzes that, at one time, was used for making decisions around hiring and who might be in management. This is way more data than Myers-Briggs would give you. So there's a potential for that data to be used in a way users aren't prepared for. Thinking through some of these ethical considerations of how we want to approach this, before we come to the problem, is really important for us in learning and development, specifically thinking above and beyond what's required of us by IT and other sources.

Sherna Varayath 9:14
Interesting, yes. So how can learning and development professionals proactively identify and mitigate bias in AI-powered tools?

George "Alyn" Kinney 10:09
The first thing we can do as L&D professionals is understand the sources of the bias. We know that Gen AI can be biased, and it's biased in the normal, classical ways that humans have been biased in the past, right? If you go looking for some stock art to put inside your learning, it's something we have to be very conscious of: we want to represent our audience well and provide the same level of diversity of employees in our training that you find in the company. That way, no one feels left out. It feels weird if everyone is the exact same race and gender inside your training, especially if you're someone who's not that race and gender. Above and beyond that, I actually saw a tweet recently. I don't know if they call them tweets anymore. Anyway, I saw a tweet where someone asked AI for only a yes or no response to a hot-button question: Is affirmative action racist? Yes or no?

That's a logical fallacy called a false dilemma; the user is basically forcing the AI into a particular answer on a question that has more nuance than a yes or no can convey. So users in L&D, specifically during analysis, when you're creating a training product or evaluating it on the back end, have to be really careful not to phrase prompts in a way that leads to these sorts of biased answers, answers that will take you down a road of incorrect information. But how do we stop that? It's hard with individuals, right? So we have to think about our own training, how L&D professionals are trained. In addition to that, it's establishing internal review processes and governance to assess any possible bias before deployment. It's the classic 'trust, but verify'. You trust your teams, you trust L&D professionals to make the right decisions, but it's important to share and understand the prompts and the data that were used to create these things. And it's continuously monitoring AI performance for differential impact on different groups of learners, so you'll be able to see on the back end if some kind of bias is seeping into your design processes. That could be through surveys or through qualitative research. But what I'm noticing is that AI designed for particular circumstances tends to differ a little in how it's utilized, and that's something to keep in mind while you're designing your training. For example, I've noticed that AI deployed to the front lines, such as in a call center or retail environment, tends to be much more utilitarian, whereas in corporate roles it tends to be much more broad-based and can do all kinds of things that are useful for career development, and less about just answering questions. So thinking through all these things beforehand is very important, to see some of these dilemmas around bias before you reach them.

Sherna Varayath 13:25
Right. How do you see the role of L&D professionals evolving ethically as AI takes on more tasks?

George "Alyn" Kinney 13:42
Oh gosh, yeah, that's a day-to-day question for me. I think it's a whole new way of thinking about our jobs in learning and development. We're becoming ethical stewards who need to watch out for our learners. In a way, we always have been, right? We need to shift from a content-delivery mindset to curating and managing AI systems, and focus on these higher-order human angles. And I think that's important because we have to take on the responsibility to understand how AI tools function and their potential impact. If we as content developers come at it from that stewardship mindset, we're going to need to think about what's best for our learners, not only in the corporate context of what's most impactful in their job, the same way we always have, but also advocate for fair practices, advocate for everyone getting fair access to AI, and teach them how to use AI to get the best answers. I'm already seeing that people who have been using AI for the last three years, since ChatGPT went commercial, have a huge, huge advantage in terms of the quality of their prompts and how they use the tools in their day-to-day work. And the folks who are just adopting it may be struggling a little: how do I even ask a question, how is this different from a search engine, all these things. Focusing on developing the uniquely human skills that AI just can't replicate to drive the business forward is also incredibly important.

Sherna Varayath 15:21
What are the ethical implications of using AI for evaluating employee performance and making decisions about career progression?

George "Alyn" Kinney 15:50
I think there's always been a risk there of the misuse of assessments. I talked earlier about Myers-Briggs and how it used to be used for promotions and such. We could potentially have the same difficulties using AI in assessments or in personalized learning, especially in a branching, personalized learning situation.

That risk has always been there, but now it could be, for example, missing regional differences, or a learner answering questions in unconventional ways that are still correct. Even if we put prewritten rubrics into AI, we may not be able to account for all those situations, especially when we're deploying these solutions to really large audiences. So to me, there's an ethical line between using AI for simple learning recommendations, like Netflix: oh, you like these kinds of courses, so we're going to present you more courses like that. I think that's how we've used these things in the past. Versus using it for high-stakes, evaluative decisions. We could use a recommendation engine to suggest courses. But is it right to have it be a gatekeeper for when an employee is ready to take a management course, for instance, and then move up through the chain? Maybe that should be more of a human decision with more qualitative factors, beyond just the kinds of interactions they have inside the LMS or the kinds of courses they're taking. So that's one thing I think about in terms of performance and how AI might factor into that. You're seeing a lot more integration of tools, especially on the talent and development side. I'm seeing AI tools being integrated into the talent marketplace and into hiring decisions, trying to tie that into past learner behavior. And that to me is really worrisome. There really needs to be human oversight and judgment in any critical decision, especially when it's around a job, especially when it's around key opportunities for training, especially when it's informed by AI.

Sherna Varayath 17:45
Yes, there was also a recent update about Skills 360 by Microsoft, which is aimed at integrating skills-related data into your corporate environment. That created quite a buzz.

George "Alyn" Kinney 18:23
You know, it's funny, as an L&D person I am super excited about that. It's going to make so much of our job so much easier. But simultaneously, you've got to think ahead to what could potentially happen.

Sherna Varayath 18:32
Right. So considering the rapid advancements in artificial intelligence, what steps should L&D professionals take to stay ahead of the ethical curve?

George "Alyn" Kinney 18:51
Yeah. This is something I think about a lot. I think some people frame ethics in AI as a yes or no question, and that's really not correct. You don't have to be for or against it altogether; I think that's the wrong approach. You have to be very principled about adopting it and think critically, not just about the immediate impact of AI, but about the second- and third-order impacts of using AI for any given learning activity. An example of that, gosh, this is going to date me. When I first joined the workforce, email was just being adopted. And in the workplace where I was, there were these grumpy people arguing that email would be just awful and we should simply not use it.

Sherna Varayath 19:41
Yes, I have heard that.

George "Alyn" Kinney 19:44
Yeah. Actually, some of those concerns were really easy to address. I remember I had a manager who said, well, what if someone accidentally emails the whole company? Not a big deal, it's super easy to block that kind of thing. But there were other considerations, like, what if email totally destroys your productivity? And I would say that one was arguably correct; too much email can be disastrous to your productivity. All that to say that we've been through this sort of thing over and over again in business, adopting new tools that change the way we work. And I think had we been more thoughtful about how we approached email or the Internet or any of these other tools, we might have made different decisions that would, at the end of the day, have made a better workplace for us. But just like anything else, we should be educating ourselves about AI fundamentals and emerging capabilities like agentic AI, definitely in the near term.

Some people are starting to adopt that, and it means really engaging in the dialogue about AI ethics within our organizations and the broader L&D community. People in IT are already thinking about these questions:

How will agents interact with each other?

What happens if a manager has an agent that is running faster than someone else's agent that's trying to get something done?

Who should get the task done first?

When two agents disagree with each other, which one should win?

Interoperability is really important. And highlighting these sorts of AI implementations, especially in learning and development, being the first to raise our hand and say, yes, we need to try that here first, helps us be a better guardian for our learners. It means really engaging with the policies and the people, with IT and HR, all these different groups working together to keep a critical eye, from the start, on the whole employee experience.

Sherna Varayath 21:44
Wow, OK. So finally, how can our listeners get in touch with you?

George "Alyn" Kinney 21:58
Yeah, well, you can find me on LinkedIn as George Alyn Kinney. Alyn is my middle name, A-L-Y-N. That's the best way to reach me. You can also write to me or subscribe to my newsletter on Substack. That's Alyn.substack.com.

Sherna Varayath 22:15
Thank you so much for sharing those details with our listeners. I am sure you will get some enquiries and messages.

George "Alyn" Kinney 22:23
Oh, great. Now I'll have to manage my email.

Sherna Varayath 22:28
Yes. That brings us to the end of another insightful episode of the eLearning Champion podcast. We've covered a lot today, from ethical dilemmas to stories from George's own experience. I am sure everybody has a lot to take away. Thank you so much, George, for sharing your insights.

George "Alyn" Kinney 22:49
Thank you. Happy to be here.

Sherna Varayath 22:51
Thank you. I hope you are walking away with some actionable steps to evaluate the ethical side of AI in L&D. Remember, becoming an eLearning champion is a journey of continuous learning and sharing. If you found value in today's conversation, please share this episode with a fellow eLearning enthusiast or a colleague. Do reach out to us on your favorite social media platforms. We love hearing from you. Until next time, take care and happy learning.

Here are some takeaways from the interview.

Ethical considerations of integrating AI into learning and development

Every company has its own code of business conduct that drives its ethics. However, most companies did not consider AI when establishing their values and norms. So the first thing we should do is establish a core ethical guideline specifically for AI in L&D. It doesn't have to be anything fancy. It may simply be a human-centric approach to the design and implementation of learning solutions, whether or not AI is involved. Now is a good time to reevaluate our approach to what we do: making commitments to fairness, equity, and inclusion, setting clear policies regarding data privacy, and being transparent about everything. It's important to foster a culture of ethical awareness and continuous learning.

The biggest L&D concerns

The biggest concern is the changing role of L&D professionals, specifically, fears about job displacement. In the past, when I was a facilitator, people were worried that eLearning would replace facilitated learning classes. And that proved to be true to some extent. A lot of facilitators, including myself, became instructional designers.

Another concern is about bias impacting fairness in learning experiences. That comes up very concretely through a sort of AI divide – some companies have AI, others don't; some departments have AI, some don't. That's a big challenge in terms of ethics, because there are new capabilities unlocked by having access to these tools. Whether or not they have access to AI has a huge impact on people's careers.

Also, over-reliance on AI can potentially diminish critical thinking skills.

Ethical considerations on the collection and use of employee data by AI in L&D

There are several aspects to that. One concerns deepfakes: an employee's likeness, their picture, who they are, is itself a piece of employee data. For example, Synthesia is a great platform that we use. But there were a lot of questions from the people we were making into personal avatars, such as whether we'll keep using their image after they've left the company. So, start with a principled approach, clear the legal side of things, and let people know your data retention schedules.

In the past, if we had a welcome message for new hires, there was no rule about deleting that welcome message after x years. But because AI can change the message, it's important to reassure people. Beyond likeness, there's the collection of data itself; we never had access to all this information in the past. Many corporations can now see exactly what employees are putting into generative AI, including inside learning experiences. In a module around coaching, information like performance, learning styles, or sentiment toward the company could be divulged without users necessarily knowing it could be seen. A training environment is where people are supposed to make mistakes; that's the intent. But if we don't specifically block that information, there's a potential for the data to be used in ways users aren't prepared for. So we need to think through these ethical considerations of how we approach this in learning and development, beyond what's required by IT and other sources.
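As a rough illustration of what 'specifically blocking' that information might look like, here is a minimal Python sketch that redacts obvious personal identifiers from learner prompts before they reach an analytics log. The patterns and the redact function are illustrative assumptions, not part of any particular platform; a real deployment would rely on a proper data-loss-prevention service.

    import re

    # Hypothetical redaction patterns; real DLP tooling goes far beyond this.
    REDACTIONS = [
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),        # email addresses
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[ID]"),             # ID-like numbers
        (re.compile(r"\b\d{3}[ .-]\d{3}[ .-]\d{4}\b"), "[PHONE]"),  # phone numbers
    ]

    def redact(prompt: str) -> str:
        """Strip obvious personal identifiers before a learner prompt is logged."""
        for pattern, label in REDACTIONS:
            prompt = pattern.sub(label, prompt)
        return prompt

    # Only the redacted text ever reaches the analytics store.
    print(redact("My manager is jane.doe@example.com and I keep failing this module."))
    # -> "My manager is [EMAIL] and I keep failing this module."

The point is not the specific patterns but the principle: filtering happens before storage, so what learners type in a practice environment never becomes reviewable data.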

Identifying and mitigating bias in AI-powered tools

As L&D professionals, we need to first understand the sources of the bias. It's a fact that Gen AI can be biased in the classical ways that humans have been biased in the past. Stock art used in eLearning should provide the same level of diversity of employees in training as exists in the company. It feels weird if everyone in the training is the same race and gender, especially if you're not of that race and gender.

I saw a tweet recently where someone asked AI for only a yes or no answer to a question: Is affirmative action racist? Yes or no?
That's a logical fallacy called a false dilemma: the user is forcing the AI into an answer on a question that has more nuance than a yes or no can convey. So L&D professionals, when creating a training product or evaluating it afterward, must be careful not to phrase prompts in a way that leads to biased answers and incorrect information. A more neutral phrasing, such as asking for the main arguments on each side, leaves room for that nuance.

But how do we stop that? We must think about how L&D professionals are trained, and establish internal review processes to assess any possible bias before deployment. It's the classic 'trust, but verify'. Trust L&D professionals to make the right decisions, but also review the prompts and the data that were used to create these things. It's necessary to continuously monitor AI performance for differential impact on different groups of learners, to catch any bias seeping into your design processes. That can be done through surveys or qualitative research. You should also remember that AI designed for different circumstances tends to be utilized a little differently. For example, AI deployed to a call center or retail environment tends to be more utilitarian, whereas in corporate roles it tends to be broader, not just about answering questions.
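As a rough sketch of what that continuous monitoring could look like, the snippet below compares assessment pass rates across learner groups and flags gaps that merit a human review. The column names, the groups, and the ten-point threshold are illustrative assumptions, not a prescribed method.

    import pandas as pd

    # Illustrative data: one row per learner attempt at an AI-scored assessment.
    attempts = pd.DataFrame({
        "group":  ["call_center", "call_center", "retail", "retail", "corporate", "corporate"],
        "passed": [True, False, True, True, False, True],
    })

    # Pass rate per group; large gaps suggest differential impact.
    pass_rates = attempts.groupby("group")["passed"].mean()
    gap = pass_rates.max() - pass_rates.min()

    print(pass_rates)
    if gap > 0.10:  # threshold is an assumption; tune it to your context
        print(f"Warning: {gap:.0%} gap between groups; review the scoring for bias.")

A dashboard built on the same idea, refreshed each time the AI scores a cohort, turns 'trust, but verify' into a routine check rather than a one-off audit.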

How the role of L&D is evolving ethically as AI takes on more tasks

Learning and development professionals are becoming ethical stewards, watching out for their learners. We should shift from content delivery to curating and managing AI systems, and focus on higher-order human angles, because we need to understand how AI tools function and their potential impact. If we develop that stewardship mindset as content developers, we'll think about what's best for learners, not only in the corporate context of what's most impactful in their job, but also advocate for fair practices and for everyone getting fair access to AI. People who have been using AI since ChatGPT went commercial have a huge advantage in terms of the quality of their prompts and how they use the tools. Those just adopting AI are struggling a little. It's also important to focus on developing the uniquely human skills that AI can't replicate, to drive the business forward.

Ethical implications of using AI to evaluate employee performance and decide on career progression

There's a risk with using AI in assessments or in personalized learning, especially in a branching, personalized learning situation: it could miss regional differences, or penalize learners who answer questions in unconventional ways that are still correct. Even if we put prewritten rubrics into AI, we may not be able to account for all those situations, especially when deploying these solutions to large audiences. So there's an ethical line between using AI just for learning recommendations, like Netflix, versus using it for high-stakes, evaluative decisions. We could use a recommendation engine to suggest courses. But is it right to use AI as a gatekeeper for when an employee is ready to move up through the chain? That should be more of a human decision, with more qualitative factors beyond the interactions in the LMS or the kinds of courses they're taking. It's worrying that we're seeing a lot of integration of AI tools, especially in talent development and hiring decisions, trying to tie those into past learner behavior. There should be human oversight and judgment in critical decisions, especially around a job or key opportunities for training, and especially when informed by AI.

How L&D professionals can stay ahead of the ethical curve

Framing ethics in AI as a 'yes or no' question is the wrong approach. You must think critically, not only about the immediate impact of AI, but also about the second- and third-order impacts of using AI for any learning activity. When I first joined the workforce, email was just being adopted, and there were people arguing that we should simply not use it because of the problems it might cause. So this is not the first time we're adopting new tools in business that change the way we work. Had we been more thoughtful about our approach to email or the Internet or other tools, we might have made different decisions, resulting in a better workplace for us.

L&D should be educating itself about the fundamentals of AI and emerging capabilities like Agentic AI. There’s already a lot of dialogue about AI ethics within organizations and the broader L&D community.

  • How will agents interact with each other?
  • What happens if a manager has an agent that is running faster than someone else's agent that's trying to get something done?
  • Who should get the task done first?
  • When two agents disagree with each other, which one should win?

Interoperability is important, and highlighting such AI implementations, especially in L&D, will help us be better guardians for our learners.
