The Role of Artificial Intelligence in L&D: In Conversation with Jaime Bissa

Welcome to CommLab India’s eLearning Champion video podcast featuring Jaime Bissa. Jaime is a strategic leader and educator specializing in curriculum design, instructional technology, and professional development. The Founding Director of the Center for Learning and Innovative Pedagogy at Baptist Health Sciences University, Jaime focuses on enhancing teaching and student success. With a PhD in General Psychology and a focus on cognition and instruction, Jaime has held leadership roles in higher education and consulting, and is known for developing professional development ecosystems and aligning instructional strategies with institutional goals.
CommLab Podcast with Dr. Jaime Bissa
Sherna Varayath 6:15
Hello there and welcome back to the eLearning Champion Podcast, where we dive deep into the strategies, trends, and triumphs shaping the world of digital learning.
I'm absolutely thrilled to have you tune in today. Are you ready to discover the role of artificial intelligence in L&D? In today's episode, we're doing exactly that, and I'm thrilled to introduce you to Jaime, our guest speaker for today. Hi there, Jaime.
Dr. Jaime Bissa 7:15
Hi, good morning.
Sherna Varayath 7:17
Good morning. So Jaime is a strategic leader and educator specializing in curriculum design, instructional technology, and professional development. Currently, the Founding Director of the Center for Learning and Innovative Pedagogy at Baptist Health Sciences University, Jaime focuses on enhancing teaching and student success. Holding a PhD in general psychology with a focus on cognition and instruction, Jaime has held leadership roles in higher education and consulting, and is recognized for developing professional development ecosystems and aligning instructional strategies with institutional goals. Welcome, Jaime, we're thrilled to have you here.
Dr. Jaime Bissa 8:04
Thank you.
Sherna Varayath 8:06
Before we dive in, make sure you're a true eLearning champion by hitting that Follow button wherever you're listening in from. All right, we'll get started.
So to start off and frame a landscape, Jaime, in your view, what misconceptions do people still have about the role of AI in learning and development, particularly in higher education?
Dr. Jaime Bissa 8:32
Well, I would say those misconceptions are pretty broad across other organizations as well, not just in higher ed. I think there is that fear and concern: many people think AI is going to replace educators, or that AI is just about creating a better workflow and efficiency. It's really a lot more. The reality is that AI is a tool that, when we use it thoughtfully and safely and ethically, is a way to intentionally enhance our creativity, specifically as it comes to learning and design. It's about enhancing our pedagogical intent as well. And it goes far beyond ChatGPT or Claude or Copilot. It's about adaptive learning, using it for predictive analytics, and seeing AI as our partner in learning. The danger is not in the tools themselves. It's about using them critically and thoughtfully, and educating users about the way we do that.
Sherna Varayath 9:38
So how do you see the relationship evolving between human-centered pedagogy and AI-enhanced instruction?
Dr. Jaime Bissa 9:54
In my opinion, and in the work that I've been doing with faculty and staff as well as the outreach work that I've been doing in a nonprofit, AI should extend and enrich the human aspects of our teaching and our learning and our development. Yes, it can automate routine tasks that streamline our workflows, and it can really personalize learning as well. That's my focus: how do we use it to personalize learning, adult learning as well as the learning of students, to be grounded in that relational, inclusive, reflective pedagogy? The goal is to use it to enhance our learning, not to replace it.
Sherna Varayath 10:37
Right. So, coming to faculty and institutional readiness, what are the most important steps institutions should take to prepare faculty to use AI more meaningfully and not just as a shortcut or novelty?
Dr. Jaime Bissa 10:59
I take a strategic approach to implementation when it comes to AI use. There's a thought process out there that it's still just about the tools and training individuals on how to use the tools we have access to. But if we want to meaningfully create those relationships, the human and tech relationships, we need to focus on AI literacy first; that is essential. That is where I've started with our faculty: the initial steps of understanding the history of AI, when it really started, which was decades ago, how it's evolved, and then those inner workings. We know that AI was developed based on mirroring the neural connections in the human brain. It doesn't function quite the same, but it creates a great analogy for helping individuals understand how it works and how it uses pattern recognition to create those outputs. And faculty need a foundational understanding of what AI is to be able to align it with learning outcomes. At our institution, I'm currently providing what's called AI super user learning labs, which are very intentional labs for individuals who help spearhead using AI, and we're developing, together, tools to use in their courses. We have collaborative design sessions. There are also scenario-based trainings, and modeling the ethical use of AI.
Sherna Varayath 12:35
Very interesting. So how do you balance the innovation part with ethical and pedagogical responsibilities when introducing AI to the faculty members?
Dr. Jaime Bissa 12:52
It's important to anchor the use of AI in core values. And I think you have to look at your institutional values, and at what our core values are as humans who are now existing with AI as a collaborator. And so: focusing on equitable use of AI, focusing on using it with integrity, and developing learner agency. We've already seen research demonstrating that there is cognitive offloading with using AI, but that depends on how you're using it. It actually can help us think critically and creatively, but it comes back to how we are using those tools and how we are educating others to use them. And I think having cross-functional governance, clear policies on AI use that are collaboratively developed, not just for students but for faculty, and having those pedagogy-first frameworks. So for example, I've been working with our faculty on a course review process which integrates universal design for learning principles as well as AI integration, both in the course review and redesign, as well as AI in the learning activities for students.
Sherna Varayath 14:15
Wow, that's really interesting. So that sets the stage for having AI as a design partner. What more examples have you seen or developed where AI tools act as true co-designers in curriculum or even course development?
Dr. Jaime Bissa 14:34
One example right now, in this course review and redesign process: I have personally developed two chatbots. It's just the start of many chatbots. I'm using Playlab AI, one specifically, for example, to support the instructional design process. What I love about being the co-creator of these custom bots is that you can tailor them to the unique needs of a specific task, and you have a safe environment in which you're implementing them; you select the audience who will be using the bot in order to train it. And so, for example, one of the bots that we're using with faculty starts with breaking down and deconstructing programmatic and course standards and objectives. It then takes them through the process, through that bot, after the deconstructing: breaking down the level of rigor, the knowledge and skills, mapping out learning objectives and learning outcomes, and then aligning differentiated tasks within that process. And it's been a powerful tool.
And what's even more interesting is that I wait to introduce that to faculty. I take them through the actual process we as humans have to learn and use for course development. It's an incredibly powerful and cognitively demanding experience. Then I bring the tool in, and initially you'll get the, 'Well, why don't you just give this to us in the first place?'
Because in order to truly understand those processes that are essential for the learning and the development, you have to go through that first before interacting with the bot. Because it's a different kind of interaction, and because we are still the experts in the room, humans are. And we still have the knowledge and skills to be able to question as we're collaborating with AI. Are these appropriate outputs? Do they fit with our goals?
And if they don't, how do I then interact with the bot to get the results that I need? One of the things I've shared with faculty is that sometimes, in the end, after I go through several inputs and outputs with any sort of AI tool, I may end up throwing out the output and either starting all over or going back and revising my inputs to get the results that I want.
Sherna Varayath 17:09
So how can AI support universal design for learning (UDL) principles and promote more inclusive educational environments?
Dr. Jaime Bissa 17:26
Well, what's great right now is when you're embedding those principles, which are based on neuroscience and how we learn as humans, really grounded in scientific concepts. But UDL was also developed to make educational opportunities equitable for students and to support students who have learning differences. And when we use tools to support that, they can actually help us creatively brainstorm examples for differentiating content delivery, look at different ways of shaping the learning environment, and consider how we might inform our instruction. They can also help us check for accessibility. So for example, in an online course, there are a lot of tools out there already embedded that help us quickly check our accessibility, to make sure we're following those laws and guidelines as well as to make it learner centered. But even if they give suggestions, it doesn't mean they're foolproof.
I'm currently working with an accessibility supervisor, and there's still a lot of work behind the scenes, because a tool may catch issues with accessibility, but it may not always have the solution that's needed; that's highly dependent on the platforms we're using and the tools we're provided in our roles. AI also can help us brainstorm those multimodal approaches to learning. And the power of using AI is really in that design phase, to help anticipate diverse learner needs.
Sherna Varayath 19:06
Right. Truly, I think the design phase is where AI can really contribute a lot more. So coming to AI literacy and student development, what does it mean to cultivate AI literacy in students, and how can educators model responsible AI use in their teaching?
Dr. Jaime Bissa 19:32
The key word is the one you just shared: model, right? It can't just be verbal instructions. We as leaders in the classrooms, leaders in our positions, have to ourselves model how to use it. And teaching students not just to use AI, but to critique it, to question it on its impact. I think this is especially important because we know that AI's inputs can be informed by biased information, and its outputs can be biased.
So that's the other part of developing AI literacy: being able to critique those biases, and being able to craft and formulate appropriate prompts that will help bypass those biases and create better outputs that remove the biases in the information it draws from. I think for faculty and for staff, it's modeling that responsible use, creating that space for students to safely engage, to be creative, ask questions, and create purpose.
What is our purpose? What is our outcome for using this? Should we be using it for a specific task or activity? Why or why not? And then using it to generate curiosity.
I think we're all running 100 miles an hour sometimes, and I've been very intentional about my Gen AI use in that I use it for very specific tasks, to give me a start to some of the projects. It helps outline skeletal frameworks for professional learning ideas.
But I'm still spending quite a bit of time customizing it and differentiating it for the learners in the room, adding my content expertise and that flair. Because AI doesn't understand the people who are going to be in a room. It doesn't understand the students who will be in the classroom, and that's what we as humans bring to that space.
Sherna Varayath 21:39
Right. So in your experience, how can AI tools be used to develop deeper learning, creativity, and resilience in students, and not just automate tasks?
Dr. Jaime Bissa 22:01
I think when used intentionally, and this is really key, AI can help you scaffold the iterative learning process. So, for example, students use AI to draft, revise, and compare perspectives, analyze different points of view in an argument, and also promote metacognition and resilience with some low-stakes practice. Right now I'm working in the AI super user learning lab with an academic coach, and we've talked about how to build a chatbot that builds in metacognitive questioning. So when students are studying or have questions, and want to reflect on their own learning processes to help them study better, it leads them through a series of questions in that metacognitive reflective practice.
Additionally, I'm working with faculty right now on rewriting course content, and AI has been really helpful with embedding processes where students use it to analyze their writing drafts. You can upload rubrics and have AI use a writing rubric and then score and compare students' writing to that rubric. What's powerful, though, is that you don't use it for the end result; you use it for the learning. So, students use the output, they critique the output, they compare it with their own self-critique of their work, or even their peers' critique of the work. And they determine how they may or may not use that feedback from AI to improve their own writing. So again, it's how you're using that. It's very methodical and avoids the cognitive offloading. You're getting students to think differently about their approach to learning.
Sherna Varayath 24:00
Wow. And having students think in a different way, with the support of AI, is really innovative and a really good use of the AI tools. So coming to organizational implications, what governance or structural shifts are needed within institutions to support sustainable and responsible AI integration?
Dr. Jaime Bissa 24:28
I think it's really going to depend on the institution, but in general, I think having clear guidelines and policies at the institutional level. Much of the initial conversation for faculty has started with the academic integrity issue. I've seen a lot of that, as I'm sure you have as well, about how AI has been used by learners in ways that impact academic integrity. At the same time, we've also seen it with faculty across institutions, how they've used it and potentially not checked that work, that output, which has led to plagiarism issues. So, starting with that governance piece, with a policy, not just for academic integrity; my suggestion is to also have a 'faculty use' policy, so there is standard guidance. And that's because faculty are just learning how to use AI. They may not realize that the tools they're using with students, especially AI detectors, have been shown by research not to work. They need guidance, because across the globe we're seeing lawsuits over students being falsely accused. There are grounds for saying we can't use these tools, but at the same time you're using the tools to try to catch whether I've used the tools. It's quite the oxymoron. But also, at least in the role that I'm in, the focus is from the academic standpoint, not just the IT standpoint.
So I currently work with Community of Practice faculty as well, and we're collaboratively drafting these policies, these goals. Additionally, when it comes to individual courses, it's about empowering faculty to have their own guidelines for AI use: what's acceptable, what's not acceptable, and why. So I've crafted documents that provide those tools to faculty so they can self-select based on their instruction and embed that into their courses.
Sherna Varayath 26:51
Wow, that's really interesting. So, how should educational leadership rethink instructional technology roles in light of AI's growing capabilities?
Dr. Jaime Bissa 27:07
Definitely, these roles cannot just be supported by tech support. In fact, in my experience, and if you go on LinkedIn or social media, education is one of the sectors leading AI innovation. And I think there are individuals who are not in education who don't see that or realize that. Did we initially create AI? In most cases, no.
But we are now, because the tools are available and the field of education is spearheading this. And this goes all the way through the university level. So the roles really need to focus on: what does that look like for academic strategy?
Instructional technologists as well as instructional designers need to be integrated into those academic affairs processes to help guide the pedagogical implementation of AI. And again, it's not just about going back to the tools. We've all experienced digital tools in our education, in our implementation of instruction. This is a whole new level. I think it's not going away, and we have to determine how we're going to use it, how we're going to evolve and adapt. So adaptability is crucial. But it needs to come from the academic part first, with the support of IT and those other areas that are focusing on AI. So, bottom line: interdisciplinary collaboration.
Sherna Varayath 28:44
OK, so coming to a focus on human qualities in the age of AI, which distinctly human qualities do you believe must remain central as we integrate AI into education and leadership development?
Dr. Jaime Bissa 29:06
It's interesting you ask that question. There are a number of frameworks out there that have been developed. I'm currently a founding member of the Human Intelligence Movement, and right now we're doing a lot of research on developing our own framework that focuses on those human aspects of AI integration and how we go about supporting that. And a lot of our discussion and focus is first and foremost on emotional intelligence. That has definitely been at the forefront in education for some time, even in the corporate sector as well, but it's definitely bubbled up as a very hot topic when it comes to AI.
So that emotional intelligence: cultivating empathy, ethical behaviors, being more adeptly focused on critical thinking. And what does critical thinking mean? What are the skills that contribute to critical thinking, as well as creativity and adaptability? Humans are typically not well versed in change, for the most part. Change has been a part of who I am; I really enjoy it. But what I have found, not just personally but professionally throughout my career, is that there is a fear of change and of adapting to change. And this is where those conversations come up: oh my gosh, is AI going to replace us? Not right now.
But we do have to adapt what we're doing to support our learners. We already know learners coming to us are wired differently because they're digital natives. So, we have a whole generation that has now been officially born into the era of AI. They are never going to know a world without technology, let alone AI. That was not my world growing up: I saw the evolution of the cell phone, and the Internet, and dial-up, and here we are now in AI. So, these skills that we bring as humans, they can't be automated. Our role is to model them even more intentionally as we're working with learners and students and other professionals when we're training on AI. And I think that's really essential.
Sherna Varayath 31:38
Yes, absolutely. So how can we ensure that AI enhances, and doesn't replace, the relational and emotional aspects of our learning?
Dr. Jaime Bissa 32:07
For me, and through my work with others, the observation is: let's use AI to offload those administrative tasks, those tedious tasks that we all have to do that sometimes take hours of our week. That does help with workflow, being very intentional about which tasks we're offloading onto AI, so that educators and professionals can focus more on their human interactions: collaborating with professionals and instructors, working with our students, and creating more mentorship opportunities with that freed-up time, creating that human connection.
We must design our work for presence, to elevate our human presence, to create more trust, to elevate that human interaction. The use of AI is not the goal; it supports the greater goal of human connection. As one example, the professional learning sessions that I create and deliver in my current role as a Director are a hybrid combination. Sometimes it's virtual, depending on how the content is delivered, but I've made it very clear that sessions are also in person, because what we've found is that you need to be in that physical space together. You're more apt to collaborate, to look at someone else's screen, to see how they're using their inputs and analyze their outputs, and discussions grow out of that.
And so I think AI's brought us closer together. It can divide us if we're not using it correctly, but it's certainly brought us closer together as humans, and the very first example I give of that is connecting with you from across the globe. I've connected with a number of individuals and organizations that I collaborate with around the globe.
Sherna Varayath 33:54
Wow, that's quite an example. And that brings us to the end of today's insightful episode of the eLearning Champion Podcast. We've covered so much today, from AI literacy to modeling responsible use and so many other aspects. I'm sure all our listeners will be walking away with a lot of insights and valuable strategies.
Dr. Jaime Bissa 34:25
Thank you for the opportunity today.
Sherna Varayath 34:28
Thank you so much, Jaime, for your time. And dear listeners, if you found value in today's conversation, please do share this episode with fellow eLearning enthusiasts.
Connect with us on your favorite social media platforms. We love hearing from you.
Thank you so much for tuning into the eLearning Champion podcast. Keep innovating, keep inspiring, and keep championing exceptional learning experiences. Until next time, happy learning.
Here are some takeaways from the interview.
How to prepare faculty to use AI more meaningfully
We need to take a strategic approach to AI implementation. It should not be only about the tools and training people on using those tools. We need to first focus on AI literacy to meaningfully create human-tech relationships – understanding the history of AI, when it started, how it has evolved – and then go into its inner workings. We know that AI was developed based on mirroring the neural connections in the human brain. Though it doesn't function quite like the brain, that analogy helps individuals understand how AI works and how it uses pattern recognition to create outputs. Faculty do need a foundational understanding of what AI is, to be able to align it with learning outcomes. At our institution, I'm currently providing AI super user learning labs for individuals who help spearhead AI use, where together we model the ethical use of AI with collaborative design sessions and scenario-based trainings.
Balancing innovation with ethical and pedagogical responsibilities when introducing AI to faculty
It's important to anchor the use of AI in core values: look at your institutional values and our core values as human beings existing with AI as a collaborator. We need to focus on the equitable use of AI, using it with integrity, and developing learner agency. Though research has shown that there is cognitive offloading with using AI, that depends on how we are using those tools and how we are educating others to use them. It's essential to have cross-functional governance, collaboratively developed policies on AI use (both for students and faculty), and pedagogy-first frameworks. For example, I've been working with our faculty on a course review process that integrates universal design for learning principles as well as AI, both in the course review and redesign, and integrating AI into the learning activities for students.
Examples of AI tools acting as co-designers in course/curriculum development
One example is the course review and redesign process where I developed two chatbots, using Playlab AI, to support the instructional design process. The best thing about being a co-creator of these custom bots is that we can tailor them to the unique needs of a specific task, implement them in a safe environment, and select the audience who will be using the bot in order to train it. For example, one of the bots we're using with faculty starts with breaking down and deconstructing course standards and objectives. It breaks down the level of rigor and the knowledge and skills, maps out learning objectives and learning outcomes, and aligns differentiated tasks within that process.
For them to understand the processes essential for the learning and development, I first take them through the actual process for course development we as humans must learn and use. It's an incredibly powerful and cognitively demanding experience that must be done first before interacting with the bot. That’s because it's a different kind of interaction, and because we still have the knowledge and skills while collaborating with AI to be able to ask:
- Are these appropriate outputs?
- Do they fit in with our goals?
- If they don't, how do we interact with the bot to get the results we need?
After going through several inputs and outputs with an AI tool, we may throw out the output and start all over again or revise the inputs to get the results we want.
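To make that workflow concrete, here is a minimal sketch of what such a bot's instructions might look like. The actual bots were authored in Playlab AI, whose setup isn't shown here; the OpenAI Python client, model name, and prompt wording below are illustrative assumptions, not the bots described in the interview.

```python
# Hypothetical sketch only: the real bots were authored in Playlab AI.
# The OpenAI client, model name, and prompt wording are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = """You are an instructional-design assistant for faculty.
Given a programmatic or course standard, walk the user through it one
step at a time:
1. Deconstruct the standard into its component parts.
2. Identify the level of rigor and the knowledge and skills it demands.
3. Map the parts to measurable learning objectives and outcomes.
4. Suggest differentiated tasks aligned to each outcome.
Ask a clarifying question before moving on to the next step."""

def deconstruct_standard(standard_text: str) -> str:
    """Send one course standard to the bot and return its first reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": standard_text},
        ],
    )
    return response.choices[0].message.content

print(deconstruct_standard(
    "Students will evaluate evidence-based interventions for patient safety."
))
```

The design point, as described above, is the staged questioning: the bot does not jump straight to finished objectives, mirroring the human process faculty work through first.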
How AI can support UDL principles and promote more inclusion
UDL principles are grounded in scientific concepts, based on neuroscience and how we learn as humans. They also help make educational opportunities equitable for students and offer support to students with learning differences. When we use tools to support that, they can help us creatively brainstorm examples for differentiating content delivery, look at different types of learning environments and how we might inform instruction, and check for accessibility. For example, online courses have a lot of tools already embedded that help us quickly check accessibility and ensure courses are learner centered. These tools may catch accessibility issues, but they may not always offer the needed solution, because that depends on the platforms and tools we're using; there is still a lot of work behind the scenes. AI can also help us brainstorm multimodal approaches to learning.
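The embedded checkers mentioned here are proprietary to each platform, but the kind of rule-based pass they automate is easy to picture. Below is a hypothetical sketch; the function name and rules are illustrative, not any LMS's actual implementation.

```python
# Hypothetical sketch of a rule-based accessibility pass, the kind of
# check embedded LMS tools automate. Not any platform's real code.
from bs4 import BeautifulSoup

def audit_page(html: str) -> list[str]:
    """Flag common accessibility issues in a course page."""
    soup = BeautifulSoup(html, "html.parser")
    issues = []
    # Images need alt text for screen readers (WCAG 1.1.1).
    for img in soup.find_all("img"):
        if not img.get("alt"):
            issues.append(f"Image missing alt text: {img.get('src', '?')}")
    # Vague link text gives no context to assistive tech (WCAG 2.4.4).
    for link in soup.find_all("a"):
        if link.get_text(strip=True).lower() in {"click here", "here", "link"}:
            issues.append(f"Vague link text: {link.get('href', '?')}")
    return issues

page = '<img src="brain.png"><a href="/syllabus">click here</a>'
for issue in audit_page(page):
    print(issue)  # flags both issues; a human still decides the fix
```

As noted above, a flag is not a fix: the right remediation still depends on the platform and the tools available, which is the work behind the scenes.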
How educators can model responsible AI use in their teaching
We as leaders must model how to use AI, and teach students not only how to use AI, but also to critique it and question its impact. This is especially important because AI's inputs may be based on biased information, leading to biased outputs.
AI literacy is the ability to critique those biases and formulate appropriate prompts that will help bypass them and create better outputs. Faculty and staff should model responsible use, so students can safely engage and ask questions such as:
- What is our purpose?
- What is our outcome for using this?
- Should we be using it for a specific task or activity?
- Why or why not?
I use Gen AI for very specific tasks, to give a start to some projects, and help outline skeletal frameworks for professional learning ideas. But I still spend a lot of time customizing it for learners because AI doesn't understand the people who are going to be in a room. That's what we as humans bring to that space.
AI tools to develop deeper learning, creativity, and resilience in students
When used intentionally, AI can help scaffold the iterative learning process. For example, students can use AI to draft, revise, and compare perspectives, analyze different points of view, and promote metacognition and resilience with some low-stakes practice.
I'm currently working in the AI super user learning lab on building a chatbot that builds in metacognitive questioning: when students are studying or reflecting on their own learning processes, it leads them through a series of questions in that metacognitive reflective practice. I'm also working with faculty on rewriting course content. AI has been really helpful with embedding processes where students use it to analyze their writing drafts for the learning (and not for the result). You can upload rubrics and have AI use a writing rubric to score and compare students' writing to that rubric, as in the sketch below. Students critique the output and compare it with their own or peers' critique of the work. Then they determine how they may or may not use the AI's feedback to improve their own writing. This avoids cognitive offloading, and makes students think differently about their learning approach.
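As a rough illustration of that rubric workflow, here is a minimal sketch, again assuming an OpenAI-style client; the model name, prompt wording, and rubric text are hypothetical, not the institution's actual setup.

```python
# Hypothetical sketch of rubric-based draft feedback, used for the
# learning, not the grade. Model name and prompts are assumptions.
from openai import OpenAI

client = OpenAI()

def rubric_feedback(rubric: str, draft: str) -> str:
    """Score a draft against a writing rubric, criterion by criterion."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {"role": "system", "content": (
                "You are a writing tutor. Score the draft against each "
                "rubric criterion, cite specific passages as evidence, and "
                "end with reflection questions for the student, not fixes."
            )},
            {"role": "user", "content": f"RUBRIC:\n{rubric}\n\nDRAFT:\n{draft}"},
        ],
    )
    return response.choices[0].message.content

# Students would compare this output with their own (or a peer's)
# critique and decide which feedback to act on.
print(rubric_feedback("Thesis clarity: score 1-4 ...", "My draft begins ..."))
```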
Governance to support sustainable and responsible AI integration
This will depend on the institution, but it's important to have clear guidelines and policies at the institutional level. We've seen how AI has been used by learners in ways that impact academic integrity. We've also seen how faculty across institutions have used it without checking the output, leading to plagiarism issues. So start with a governance policy, not just for academic integrity, and also have a 'faculty use' policy, because faculty are just learning how to use AI. They need guidance because across the globe, we're seeing students being falsely accused of using these tools, while at the same time those very tools are being used to identify whether they've used the tools. It's quite the oxymoron. Governance should come from the academic standpoint, not just the IT standpoint. When it comes to individual courses, faculty need to have their own guidelines for AI use: what's acceptable, what's not acceptable, and why.
Instructional technology roles in the light of AI's growing capabilities
Education is one of the major sectors that is leading AI innovation because the tools are available. So the roles really need to focus on what that looks like for academic strategy.
Instructional technologists and instructional designers need to be integrated into academic affairs processes to help guide the pedagogical implementation of AI. It's not just about the tools. We've already experienced digital tools in the implementation of instruction. This is a whole new level, and we have to determine how we're going to use it, how we're going to evolve and adapt. Interdisciplinary collaboration and adaptability are crucial, starting with academics, with the support of IT and other areas focusing on AI.
Integrating AI into education and leadership development while keeping human qualities central
I'm currently a founding member of the Human Intelligence Movement, doing research on developing a framework that focuses on the human aspects of AI integration and how we can support that. Emotional intelligence has been at the forefront in education and the corporate sector for some time, but it's become a very hot topic when it comes to AI: emotional intelligence, cultivating empathy, ethical behaviors, and being more focused on critical thinking, along with the skills that contribute to critical thinking, creativity, and adaptability. Humans are typically averse to change; there is fear of change, leading to fears of AI replacing humans. But we do have to adapt what we're doing to support our learners. The skills that we bring as humans can't be automated, and it's essential we model these skills intentionally when training learners, students, and other professionals on AI.