This article is part of a series on the future of instructional design in the age of GenAI. The series explores how instructional designers can move beyond ad hoc prompting toward a more disciplined, challenge-based human–AI working method.
AI in instructional design is moving fast. Today, instructional designers use GenAI tools to summarize SME inputs, generate assessments, draft learning objectives, recommend visuals, and accelerate storyboarding. The efficiency gains are undeniable.
But there is an emerging concern beneath all this productivity.
When AI becomes only a support tool, instructional designers can gradually slip into passive workflows — prompt, accept, refine slightly, move on. The output may look polished, yet the depth of instructional thinking behind it may weaken.
That is where a more deliberate approach to AI prompting and prompt engineering becomes essential.
Instead of using AI only to generate content, designers can use it to challenge assumptions, expose weaknesses, and sharpen instructional decisions. This shift transforms AI from a helpful assistant into a thinking partner that improves judgment, not just speed.
In this blog, we’ll explore how challenge-based AI prompting can strengthen instructional design, why “nemesis prompts” matter, and how prompt engineering can help instructional designers become more critical, reflective, and effective in the age of AI.
Table of Contents
- Why AI in Instructional Design Shouldn’t Be Limited to Content Generation
- What Are Nemesis Prompts in AI Prompting?
- How AI Prompt Engineering Strengthens Critical Thinking
- What Different Forms Can Nemesis Prompting Take?
- Why Productive Friction Matters in AI Prompting
- When Should Instructional Designers Use Nemesis Prompts?
- The Future of AI in Instructional Design Is Critical Prompting
Why AI in Instructional Design Shouldn’t Be Limited to Content Generation
Most instructional designers are beginning to see where GenAI helps.
It helps summarize messy SME content.
It helps draft objectives.
It helps generate assessment ideas.
It helps propose scenarios, visuals, and narration.
All of that is useful. But there is a problem.
Most of this use still assumes that AI’s role is to be helpful. Helpful in simplifying. Helpful in generating. Helpful in organizing. Helpful in accelerating.
Again, fair enough.
But if AI only plays a supportive role, something important is lost. The instructional designer can slowly slip into a passive pattern: ask, receive, refine lightly, move on. The output may improve. The speed may improve. But the designer’s critical engagement with the work may actually weaken.
What Are Nemesis Prompts in AI Prompting?
That is where a different kind of prompting becomes important.
I would call these nemesis prompts.
Not because AI becomes an enemy. That is not the idea. The point is to give AI a temporary adversarial role inside the design process so that the designer is forced to think harder, justify choices, spot weaknesses, and defend instructional decisions more rigorously.
In simple terms, a nemesis prompt asks AI not to help you complete the work, but to challenge the work you have done.
That distinction matters.
Because good instructional design is rarely ruined by lack of activity. It is more often weakened by lack of scrutiny. The flow seems reasonable, so nobody questions it. The objective sounds polished, so nobody checks whether it really describes performance. The scenario feels realistic, so nobody asks whether it actually drives the intended decision-making. The assessment item looks fine, so nobody notices that the distractors are weak or the feedback teaches nothing.
How AI Prompt Engineering Strengthens Critical Thinking
This is exactly where GenAI can be useful in a different way.
Not just as a generator. As a critic.
That is the central case for nemesis prompts. They create a form of productive friction inside the human–AI interaction. Instead of letting AI remove effort at every turn, they reintroduce the kind of challenge that sharpens instructional judgment.
This matters more than many people realize.

That is why GenAI should not always be asked, “What should I do here?”
Sometimes it should be asked, “What is weak in what I have already done?”
That is a much more powerful question.
What Different Forms Can Nemesis Prompting Take?
A nemesis prompt can take several forms. These approaches demonstrate how AI prompting can become a strategic instructional design practice rather than a simple productivity shortcut.
1. Devil’s Advocate Prompting
Here the AI is asked to challenge the design directly.
Examples:
- What is weak in these learning objectives?
- Which of these screens may confuse learners?
- What assumptions am I making in this storyboard?
- Why might this interaction fail instructionally?
This approach pushes the designer beyond surface satisfaction and encourages deeper instructional analysis before development begins.
2. Counter-Alignment Prompting
This technique is especially useful in objectives and assessments.
The AI is asked to identify where alignment is weak or misleading:
- Which assessment items do not truly test the stated objective?
- Which objectives sound valid but are not performance-based?
- Where does the storyboard drift away from the business need?
Alignment problems in eLearning are often subtle. The language may appear professional, but the instructional logic underneath can still be weak. This type of AI prompting helps expose those gaps early.
3. Counterfactual Prompting in AI Prompt Engineering
Counterfactual prompting asks AI to explore the opposite or the edge case.
Examples:
- What if this learning sequence is wrong?
- What if this scenario is too easy for the audience?
- What if learners misinterpret this visual?
- What if this summary has oversimplified a critical concept?
Counterfactuals are valuable because they break the illusion that the first reasonable design choice is automatically the best one.
4. Red-Team Prompting for Instructional Design Reviews
Here AI is asked to attack the design as an external reviewer.
Examples:
- Audit this storyboard like an independent instructional designer who disagrees with it
- Point out where this course is likely to become text-heavy
- Identify the three most serious weaknesses in this assessment set
- Critique this narration for duplication, vagueness, or cognitive overload
This is one of the strongest uses of AI in instructional design because it completely changes the role of AI. Instead of functioning as an assistant, AI becomes a reviewer.
5. Weak-Option Detection Prompting
This technique is especially powerful in assessment and interaction design.
Examples:
- Which answer option is implausible and therefore too easy to eliminate?
- Which distractor teaches the wrong lesson?
- Which scenario branch adds activity but not learning value?
- Which visual option is attractive but instructionally weak?
A lot of ineffective eLearning survives simply because it looks interactive without actually improving understanding. This type of prompt engineering helps instructional designers detect shallow interactivity before launch.
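To make the five patterns above concrete, they can be captured as reusable prompt templates that wrap a draft design artifact in a challenge rather than a generation request. The sketch below is purely illustrative: the dictionary keys, function name, and prompt wording are my own assumptions, not an established library or a prescribed phrasing.

```python
# Illustrative sketch: the five nemesis-prompt forms as reusable templates.
# All names and wording here are hypothetical examples, not a standard API.

NEMESIS_TEMPLATES = {
    "devils_advocate": (
        "Act as a skeptical instructional designer. Challenge this draft "
        "directly: what is weak, what assumptions does it make, and why "
        "might it fail instructionally?\n\n{artifact}"
    ),
    "counter_alignment": (
        "Compare these objectives and assessment items. Which items do not "
        "truly test their stated objective? Which objectives sound valid "
        "but are not performance-based?\n\n{artifact}"
    ),
    "counterfactual": (
        "Explore the opposite case: what if this design choice is wrong, "
        "too easy for the audience, or likely to be "
        "misinterpreted?\n\n{artifact}"
    ),
    "red_team": (
        "Audit this storyboard as an independent reviewer who disagrees "
        "with it. List its three most serious weaknesses.\n\n{artifact}"
    ),
    "weak_option": (
        "Review these answer options. Which distractor is implausible or "
        "teaches the wrong lesson? Which branch adds activity but not "
        "learning value?\n\n{artifact}"
    ),
}

def build_nemesis_prompt(form: str, artifact: str) -> str:
    """Wrap a draft design artifact in one of the challenge templates."""
    try:
        template = NEMESIS_TEMPLATES[form]
    except KeyError:
        raise ValueError(f"Unknown nemesis form: {form!r}")
    return template.format(artifact=artifact)

# Example: challenge an existing draft objective instead of asking
# the model to write a new one.
prompt = build_nemesis_prompt(
    "devils_advocate",
    "Objective: Learners will understand the importance of data privacy.",
)
```

The design point is the inversion: the designer's draft is the input, and the model's role is fixed as critic. The same templates could then be sent to whatever GenAI tool the team already uses.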
Why Productive Friction Matters in AI Prompting
What makes nemesis prompts valuable is not that they are clever.
It is that they restore challenge to a process that can otherwise become too frictionless.
And friction, used properly, is not the enemy of learning.
It is often the condition for it.
That applies not only to learners, but also to instructional designers using AI.
Strong instructional design depends on reflection, evaluation, and critique. If AI removes all struggle from the process, designers risk becoming operators instead of thinkers.
This is why thoughtful AI prompting and prompt engineering matter so much. They preserve the designer’s active engagement with the work instead of replacing it.
When Should Instructional Designers Use Nemesis Prompts?
Nemesis prompts must be used carefully.
They are useful in draft stages, review stages, and critique cycles. They are not useful when the designer has not yet understood the SME content well enough to evaluate the AI’s challenge. And they should never introduce risky ambiguity in areas where factual precision is non-negotiable, such as compliance, safety, technical procedures, or regulated content.
They are not a license for AI to become erratic or misleading.
They are a disciplined mechanism for strengthening review.
Used badly, they can create noise.
Used well, they can build judgment.
The Future of AI in Instructional Design Is Critical Prompting
Most GenAI use in instructional design today is built around assistance, and assistance does matter. But assistance alone is no longer enough. If organizations want stronger instructional designers, not just faster outputs, they need to build challenge into the human–AI relationship.
That is where the next evolution of AI in instructional design begins: not with smarter AI prompting alone, but with critical prompting and challenge-based prompt engineering. Nemesis prompts remind us that AI should not only support thinking; it should also test it. Used well, they do not make AI less useful. They make the instructional designer more alert, more reflective, and more engaged.
Next in the series: AI as Thinking Partner, Not Content Machine.

