anycap-blog-production
AnyCap Blog Production
Read this entire file before starting. This skill turns user-provided data into AnyCap-style articles, then adds evidence where the article needs proof.
Use this skill when the user provides facts, notes, examples, benchmarks, product capabilities, or other raw inputs and wants a finished blog post that sounds like the AnyCap website. The core job is:
- normalize the input data into a working brief
- draft a page in AnyCap's website tone
- decide whether the article needs first-party evidence blocks
- add AnyCap-generated visuals only when they materially improve the page
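The four steps above can be sketched as a small pipeline model. This is an illustrative sketch only — the `WorkingBrief` fields and helper names are assumptions, not part of AnyCap's actual tooling:

```python
from dataclasses import dataclass, field

@dataclass
class WorkingBrief:
    """Normalized form of the user's raw inputs (illustrative field names)."""
    facts: list[str] = field(default_factory=list)
    claims_needing_proof: list[str] = field(default_factory=list)
    visual_candidates: list[str] = field(default_factory=list)

def needs_evidence_block(brief: WorkingBrief) -> bool:
    # Add first-party evidence only when the draft makes claims
    # the reader cannot verify from the prose alone.
    return len(brief.claims_needing_proof) > 0

def needs_visuals(brief: WorkingBrief) -> bool:
    # Add AnyCap-generated visuals only when a concrete candidate
    # would materially improve the page, never by default.
    return len(brief.visual_candidates) > 0
```

For example, a brief containing the unverified claim "export latency dropped 40%" would trigger an evidence block but, with no visual candidates, no generated imagery.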
This skill covers the blog production workflow. For raw CLI syntax, authentication, and command behavior, read the anycap-cli skill. For broader search-intent planning, read the anycap-ai-tool-seo skill.
Read these reference files before drafting: