OpenMCQ: Building a Better Quiz with Open Research and Generative AI
Designing Better Classroom Questions with Open Research and AI

Generative AI tools represent an extraordinary opportunity for those without software development skills to build powerful software tools themselves through natural language.
I’m not talking about the much-debated “vibe coding” (although that development and debate remain very interesting to watch), but rather using tools like ChatGPT, Claude, and Gemini. With these tools, you can create customized bots or apps trained on natural-language documents to perform specific tasks.
Before the Frontier Learning Lab was founded, our team at the Montana Digital Academy was already exploring how AI could streamline and enhance the creation of educational content. Today, after sharing this tool in dozens of presentations, we’re excited to officially release OpenMCQ, a custom GPT designed to help educators quickly generate high-quality multiple-choice questions from any source material.
What is OpenMCQ?
Think of OpenMCQ as an expert assessment coach disguised as a chatbot.
It fills a critical gap. When you ask a general chatbot (like ChatGPT or Gemini, or even a school-specific AI tool) for a quiz, you cannot verify whether it follows assessment best practices. For example, general models often generate “all of the above” options or confusing negatives, techniques that research explicitly discourages.
OpenMCQ stands apart because we built it on a foundation of open-access and openly licensed educational research. We drew its guidance from a curated bibliography of research on effective assessment and item design, including works from Yale’s Poorvu Center and the University of Waterloo.
The result is more than generated questions: OpenMCQ filters every output through these evidence-based standards.
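To make those standards concrete, here is a hypothetical Python sketch of the kind of rule-based checks the research recommends. OpenMCQ applies these standards through its natural-language instructions, not through code like this; the function name and rules below are illustrative assumptions, not part of the tool.

```python
def check_item(stem: str, options: list[str]) -> list[str]:
    """Return a list of warnings for a multiple-choice item,
    based on common item-writing guidelines."""
    warnings = []
    lowered = [o.strip().lower() for o in options]

    # Research discourages "all of the above" / "none of the above" options.
    for phrase in ("all of the above", "none of the above"):
        if any(phrase in o for o in lowered):
            warnings.append(f'avoid "{phrase}" options')

    # Negatively worded stems ("Which is NOT...") confuse readers.
    if " not " in f" {stem.lower()} " or "except" in stem.lower():
        warnings.append("avoid negative wording in the stem")

    # Options should be similar in length, so length alone
    # does not cue the correct answer.
    lengths = [len(o) for o in options]
    if max(lengths) > 2 * min(lengths):
        warnings.append("option lengths vary widely (may cue the answer)")

    return warnings

print(check_item(
    "Which of these was NOT a battle in the Wars of the Roses?",
    ["Battle of Towton", "Battle of Hastings",
     "First Battle of St Albans", "All of the above"],
))
```

A well-formed item passes with no warnings; the deliberately flawed example above trips both the “all of the above” and negative-stem rules.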
Key Features
Customizable Output: By default, OpenMCQ generates 10 questions balancing recall-level and higher-order thinking. You control the parameters: specify grade level, reading level, or a focus like “critical thinking only.”
Radical Transparency: Uniquely, OpenMCQ explains why it wrote the question. It categorizes the Bloom’s level and analyzes the quality of the distractors, turning the “black box” of AI into a glass house.
Open Architecture: We call it “open” because we trained it using open-access journals and OER resources. We respect intellectual property; the model relies on actionable advice derived from cited sources, not on the sources themselves being uploaded. A full bibliography appears in the document linked below.
Available at No Cost: OpenMCQ is available in the GPT Store. Free ChatGPT users can use custom GPTs, with a limited number of interactions per day; a paid plan is required only to create or publish GPTs.
How to Use It
Provide your source material (text, article, or resource). You can upload PDFs, Word documents, or transcripts from videos or lectures, or simply copy and paste text into the chat.
Specify your needs (e.g., “5 questions for 9th-grade geography students on African physical geography” or “focus on critical thinking”). Adding constraints, such as Bloom’s levels, DOK levels, or even specific standards to measure against, produces more targeted output.
Review and refine. OpenMCQ minimizes errors by working with your provided text, but always review the output, just as you would with any AI tool.
A Demo
I asked OpenMCQ to create questions on the Wars of the Roses, using the Wikipedia article on the subject as the source. The output illustrates the difference between a standard ChatGPT prompt and a research-backed tool.
Here is the output:
Question 2
What event marked the beginning of the Wars of the Roses in 1455?
Answer Options
A) Battle of Towton
B) Battle of Bosworth Field
C) First Battle of St Albans
D) Battle of Tewkesbury
Correct Answer
C) First Battle of St Albans
Question Analysis
This is a knowledge/comprehension-level question, testing students’ ability to recall the opening conflict. The distractors are all real battles from the war, making them plausible but incorrect. This avoids “easy elimination” and reinforces historical sequencing.
Item Analysis
(I’m using Datawrapper here due to a format limitation in Substack.)
This metadata helps you evaluate whether the question is accurate and fair before you put it on a test.
And here is an analysis-level question:
Question 6
How did the outcome of the Wars of the Roses change the balance of power between the monarchy and the nobility in England?
Answer Options
A) It weakened the monarchy and strengthened noble families.
B) It strengthened the monarchy by reducing noble power.
C) It led to the complete elimination of the nobility.
D) It restored feudalism to its peak power.
Correct Answer
B) It strengthened the monarchy by reducing noble power.
Question Analysis
This is an analysis-level question because students must connect the end of the wars to broader political change. Distractors reflect common misconceptions (e.g., feudalism increasing), ensuring that students must apply understanding rather than recall.
Item Analysis
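Because the output follows a predictable structure, it is also easy to post-process. Here is a minimal Python sketch, assuming output shaped like the demo above (the function name and the resulting dictionary fields are my own illustrative choices), that parses OpenMCQ-style text into structured data you could export to a spreadsheet or a quiz platform.

```python
def parse_items(text: str) -> list[dict]:
    """Parse OpenMCQ-style output (Question N / Answer Options /
    Correct Answer / Question Analysis) into a list of dicts."""
    items, current, mode = [], None, None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        # "Question 2" starts a new item; "Question Analysis" does not.
        if line.startswith("Question ") and line.split()[-1].isdigit():
            current = {"stem": "", "options": [], "answer": "", "analysis": ""}
            items.append(current)
            mode = "stem"
        elif line == "Answer Options":
            mode = "options"
        elif line == "Correct Answer":
            mode = "answer"
        elif line == "Question Analysis":
            mode = "analysis"
        elif line == "Item Analysis":
            mode = None  # ignore embedded item-analysis tables
        elif current is not None and mode == "stem":
            current["stem"] += line
        elif current is not None and mode == "options":
            current["options"].append(line)
        elif current is not None and mode == "answer":
            current["answer"] = line
        elif current is not None and mode == "analysis":
            current["analysis"] += line + " "
    return items
```

Fed the Question 2 text above, this returns one item with a stem, four options, the correct answer, and the analysis paragraph.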
Why multiple-choice questions?
There is ample evidence that many, if not most, practitioner-written and even commercially available multiple-choice questions are poor assessment tools, and that they almost always favor learner recall over higher-order thinking.
For some, multiple-choice questions are musty assessment relics. I disagree, and OpenMCQ opens new avenues for them, both by producing BETTER questions and by reducing the time it takes to craft great questions for unique learning strategies.
Here are just a few examples of what you can create quickly to boost classroom instruction, engagement, and meta-cognition.
Fast Ideas for Using OpenMCQ
Here are 13 ways to use this tool beyond just “making a quiz.”
See OpenMCQ in Action
Our colleague Dr. Wes Fryer recently demonstrated how he used OpenMCQ to turn a YouTube video into a Kahoot quiz. It’s a great example of the tool in everyday practice.
📄 Blog post: Kahoot Quiz from YouTube Video via OpenMCQ
▶️ Demo video: Watch on YouTube
A Note on AI and Responsibility
Like all AI tools, OpenMCQ has limits. Treat it as a collaborative partner, not a replacement for your expertise. Always review the output.
However, because OpenMCQ explains the distractors and Bloom’s level, it makes the review process faster (and smarter) than a standard chatbot.
Future Versions
We are developing workflows in the Lab to map questions directly to learning objectives, creating a “test blueprint” system. Stay tuned—we’re just getting started! We also plan to release versions in Gemini and Claude.
Use OpenMCQ!
You can access the OpenMCQ custom GPT in ChatGPT.
We also have a comprehensive guide to OpenMCQ, including our full bibliography.
Please share your feedback below or with the Lab AI help desk.


