splunk-platform
Splunk Platform
Use this as the default skill for Splunk work. It should answer most Splunk framework-selection questions directly and send you to exactly one or two reference files for implementation details.
Read only the references that match the task:
- references/spl2-authoring.md for writing SPL2 modules, searches, custom functions, types, views, and pipelines
- references/python-sdk.md for Python automation, splunklib, and result parsing
- references/javascript-sdk.md for Node/browser JS SDK work
- references/rest-search-patterns.md for raw REST, search jobs, and export patterns
- references/itsi-implementation.md for concrete ITSI entity integrations, HEC setup, service onboarding, correlation searches, and notable-event aggregation workflows
- references/itsi-av-example.md for a concrete end-to-end ITSI onboarding pattern using entity imports, service templates, and template-linked service imports
- references/admin-searches.md for read-only admin/discovery SPL
- references/ucc-framework.md for add-ons, modular inputs, setup pages, and alert actions
- references/dashboard-development.md for Dashboard Studio and Simple XML
- references/mcp-integration.md for agent-facing Splunk tool design
- references/platform-admin.md for install/upgrade/deployment automation
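As a taste of the result-parsing work the python-sdk and rest-search-patterns references cover: Splunk's REST export endpoint streams results as newline-delimited JSON, one object per line with `preview` and `result` keys when `output_mode=json` is used. A minimal offline sketch of filtering that stream down to final (non-preview) results; the field values are illustrative, not taken from a real deployment:

```python
import json


def parse_export_stream(lines):
    """Parse newline-delimited JSON from a Splunk export stream.

    Keeps only final results: lines where "preview" is false and a
    "result" object is present. Blank lines are skipped.
    """
    events = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        obj = json.loads(line)
        # Preview rows are interim snapshots; drop them.
        if not obj.get("preview", False) and "result" in obj:
            events.append(obj["result"])
    return events
```

In practice the `lines` iterable would come from streaming the HTTP response body of a `search/jobs/export` call; parsing is kept separate here so it can be tested without a Splunk instance.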
More from kundeng/bayeslearner-skills
analytic-workbench
Use this skill for analytics and data-science workflow setup, exploratory analysis, notebook-first EDA, repo normalization for analysis projects, experiment comparison, AutoML, causal analysis, and promotion from ad hoc exploration into reusable pipelines. Trigger when the user asks for analysis best practices, how to structure an analytics repo, how to organize notebooks and runs, whether to use marimo or Quarto/qmd, how to handle experiment sweeps, how to compare models, or how to make analysis reproducible. Also trigger on phrases such as analytic workbench, EDA, exploratory analysis, notebook workflow, analytics pipeline, reproducible analysis, experiment sweep, hyperparameter comparison, comparison table, marimo, Quarto, qmd, Hamilton, sf-hamilton, dataflow, DAG driver, Hydra, DVC, Kedro, MLflow, AutoML, PyCaret, causal analysis, feature engineering, or model review.
spec-driven-dev
Spec-driven development: plan → go → review loop with spec lifecycle states and a project-level feature ledger. Use for planning features, implementing from specs, refining specs, tracking what features exist across specs, and resuming work. Trigger on requests mentioning specs, requirements/design/tasks, spec-help, spec-plan, feature ledger, FEATURES.md, spec-ledger, `.kiro`. IMPORTANT: Never edit spec files without first reading this skill.
design2spec
Convert UI designs into structured JSONC spec packages before code is written, especially for constrained platforms like extensions, dashboards, desktop shells, and mobile apps. Use for design handoff and design-to-spec workflows. Outputs specs, not implementation code.
workflow-guardrails
Use this skill for agent execution discipline on development and analysis projects: inspect the repo before restructuring, keep durable truth in repo artifacts instead of chat memory, maintain specs/tasks/status docs, verify work honestly, avoid shortcuts, and keep moving through the next concrete work item when the human is away. Trigger when the user asks for workflow discipline, project hygiene, execution guardrails, repo normalization, or when a task risks drifting across architecture, storage, specs, continuity, or tooling boundaries.
wise-scraper
Structured web scraping for AI coders: explore, then exploit with shipped templates, runner, and hooks.
resume-claude-here
Recover a prior Claude Code session from natural-language hints, search Claude history by topic/date/project, and import the useful context into the current conversation. Use this for Claude session handoff, transcript recovery, context transfer into Codex or another agent, and continuing after Claude hit a usage or rate limit.