# FlashKDA Delta Attention Skill
Skill by ara.so, part of the Daily 2026 Skills collection.
FlashKDA provides high-performance CUDA kernels for Kimi Delta Attention (KDA), built on CUTLASS. It targets SM90+ GPUs (H100/H20 class) and integrates as a drop-in backend for flash-linear-attention's `chunk_kda` operation.
## Requirements
- GPU: SM90+ (H100, H20, or newer)
- CUDA 12.9+
- PyTorch 2.4+
- Python 3.8+
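To confirm a machine meets these requirements before building, the sketch below uses standard PyTorch APIs (SM90 corresponds to CUDA compute capability 9.0). Note that `torch.version.cuda` reports the CUDA version PyTorch was built against, not the installed toolkit.

```python
import torch

# Minimal environment check against the requirements above; this does not
# import FlashKDA itself, only verifies the surrounding stack.
assert torch.cuda.is_available(), "no CUDA device visible"

major, minor = torch.cuda.get_device_capability()
print(f"GPU: {torch.cuda.get_device_name()} (SM{major}{minor})")
assert (major, minor) >= (9, 0), "FlashKDA requires SM90+ (H100/H20 or newer)"

# PyTorch 2.4+ is required; torch.version.cuda is the CUDA version of the
# PyTorch build. Check the 12.9+ toolkit separately with `nvcc --version`.
print(f"PyTorch {torch.__version__}, built with CUDA {torch.version.cuda}")
```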
## Installation

```bash
git clone https://github.com/MoonshotAI/FlashKDA.git flash-kda
cd flash-kda
git submodule update --init --recursive
```
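Once the submodules (including CUTLASS) are checked out and the package is built, the kernels are meant to slot in behind flash-linear-attention's `chunk_kda`, as described above. The sketch below is illustrative only: the import path, argument names, tensor layout, and return values follow fla's usual conventions for gated delta-rule ops and are assumptions, not the confirmed FlashKDA API.

```python
import torch
from fla.ops.kda import chunk_kda  # import path assumed from fla naming conventions

B, T, H, D = 1, 1024, 8, 128  # batch, seq length, heads, head dim (assumed layout)

opts = dict(device="cuda", dtype=torch.bfloat16)
q = torch.randn(B, T, H, D, **opts)
k = torch.randn(B, T, H, D, **opts)
v = torch.randn(B, T, H, D, **opts)
# KDA is a gated delta-rule attention: g is assumed to be a log-space decay
# gate and beta the delta-rule update strength; names and shapes are guesses.
g = torch.randn(B, T, H, D, device="cuda", dtype=torch.float32).sigmoid().log()
beta = torch.rand(B, T, H, **opts)

# With FlashKDA installed, this call is expected to dispatch to its CUDA
# kernels on SM90+ hardware; `output_final_state` is an assumed keyword.
o, final_state = chunk_kda(q, k, v, g, beta, output_final_state=True)
print(o.shape)  # expected: (B, T, H, D)
```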