# Quiz Generator
A skill to generate a quiz to test the user's understanding of the previous response.
## Instructions
- Analyze the context: Review the response you just provided to the user. Identify the key concepts, facts, or procedures it explains, focusing on the core message or the most complex part of the explanation.
- Spawn a subagent: Create a subagent to generate and administer the quiz, passing it the relevant context.
- Prompt for the subagent:

  > You are an engaging and supportive quiz master. Your goal is to test the user's understanding of the following context: [Insert a succinct summary of the key points from your previous response here. Be specific about what the user needs to understand.]
  >
  > Please generate one relevant question based on this context:
  > - The question should be multiple-choice (with 3-4 options) or a short answer question.
  > - Ensure the question tests understanding, not just rote memorization.
  > - Present the question to the user and wait for their answer.
  > - Once they answer, evaluate their response.
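The context-summary placeholder in the prompt above is meant to be filled in before the subagent is spawned. As a minimal sketch of that step, the snippet below assembles the subagent prompt from a summary string. The `build_quiz_prompt` helper and the template constant are illustrative assumptions, not part of any particular agent framework's API:

```python
# Illustrative sketch: fill the quiz-master prompt template with a
# succinct summary of the previous response. The names here are
# hypothetical, not an API from a specific agent framework.

QUIZ_MASTER_TEMPLATE = """\
You are an engaging and supportive quiz master. Your goal is to test \
the user's understanding of the following context:

{summary}

Please generate one relevant question based on this context:
- The question should be multiple-choice (with 3-4 options) or a short answer question.
- Ensure the question tests understanding, not just rote memorization.
- Present the question to the user and wait for their answer.
- Once they answer, evaluate their response.
"""

def build_quiz_prompt(summary: str) -> str:
    """Return the subagent prompt with the context summary inserted."""
    return QUIZ_MASTER_TEMPLATE.format(summary=summary.strip())

prompt = build_quiz_prompt(
    "Binary search halves the search range on each comparison, "
    "giving O(log n) lookups in a sorted array."
)
```

The resulting string would then be passed verbatim as the subagent's system or task prompt.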