Meta-Prompt
A collection of meta-prompting techniques for evaluating and analyzing AI responses and solution paths.
Response Quality Evaluator
A framework for critiquing and reflecting on the quality of responses, providing a score and indicating whether the response has fully solved the question or task.
Evaluation Fields
Reflections: The critique and reflections on the sufficiency, superfluency, and general quality of the response.
Score: An integer score from 0-10 rating the quality of the candidate response.
Found_solution: Whether the response has fully solved the question or task.
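The three fields above map naturally onto a small data structure. A minimal sketch in Python (the class name, field names, and validation logic are illustrative choices, not a fixed API):

```python
from dataclasses import dataclass


@dataclass
class Evaluation:
    """Result of critiquing a single candidate response."""
    reflections: str      # critique of sufficiency, superfluency, and quality
    score: int            # 0-10 quality score
    found_solution: bool  # True if the response fully solves the task

    def __post_init__(self):
        # Guard against out-of-range scores at construction time.
        if not 0 <= self.score <= 10:
            raise ValueError("score must be between 0 and 10")


# Example: recording an evaluation of a candidate response.
ev = Evaluation(
    reflections="Addresses all parts of the question with no redundant content.",
    score=9,
    found_solution=True,
)
```

Keeping the evaluation as a typed record (rather than free-form text) makes it easy to sort or filter candidate responses by score downstream.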
Evaluation Criteria
When evaluating responses, weigh the sufficiency of the answer, the presence of superfluous or redundant content, and its general quality, then decide whether it fully solves the question or task.
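In practice, these criteria are put to a model through an evaluation prompt that asks for the three fields. A hypothetical template sketch (the exact wording and the helper function are illustrative assumptions):

```python
# Hypothetical prompt template asking a model to emit the three
# evaluation fields; the phrasing is illustrative, not canonical.
EVALUATION_PROMPT = """You are evaluating a candidate response to a task.

Task:
{task}

Candidate response:
{response}

Critique the response's sufficiency, superfluency, and general quality.
Then output:
Reflections: <your critique>
Score: <integer 0-10>
Found_solution: <true or false>
"""


def build_evaluation_prompt(task: str, response: str) -> str:
    """Fill the template with a concrete task and candidate response."""
    return EVALUATION_PROMPT.format(task=task, response=response)
```

The filled prompt can then be sent to any model, and its reply parsed back into the three fields described above.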