Database Archival System

Overview

Implement automated data archival pipelines that move historical records from primary database tables to archive storage (archive tables, S3, Azure Blob, or GCS) based on age, status, or access frequency criteria.

Prerequisites

  • Database credentials with SELECT, INSERT, and DELETE permissions on source and archive tables
  • Cloud storage credentials (AWS S3, Azure Blob, or GCS) if archiving to cold storage
  • psql or mysql CLI for executing archival queries
  • aws s3, az storage, or gsutil CLI for cloud storage uploads
  • Understanding of data retention requirements and compliance policies (GDPR, HIPAA, SOX)
  • Knowledge of current table sizes: run SELECT pg_size_pretty(pg_total_relation_size('table_name')) to identify archival candidates
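When pulling raw byte counts (e.g. pg_total_relation_size without pg_size_pretty) into a script, it helps to have a human-readable formatter on the client side. The sketch below is an approximation in Python; it uses 1024-based units but does not reproduce pg_size_pretty's exact rounding rules, and the function name is our own, not part of any library:

```python
def size_pretty(num_bytes: int) -> str:
    """Roughly mimic pg_size_pretty: bytes, kB, MB, GB, TB (1024-based).

    Note: approximate only -- PostgreSQL's own rounding thresholds differ
    slightly, so use this for display in scripts, not for exact comparisons.
    """
    units = ["bytes", "kB", "MB", "GB", "TB"]
    size = float(num_bytes)
    for unit in units:
        if size < 1024 or unit == units[-1]:
            # Whole numbers for bytes, one decimal for larger units
            return f"{int(size)} bytes" if unit == "bytes" else f"{size:.1f} {unit}"
        size /= 1024
```

For example, size_pretty(5_368_709_120) renders a 5 GiB table as "5.0 GB", which is convenient when ranking archival candidates in a report.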

Instructions

  1. Identify archival candidates by finding large tables with time-based data:
    • SELECT relname, n_live_tup, pg_size_pretty(pg_total_relation_size(relid)) FROM pg_stat_user_tables ORDER BY pg_total_relation_size(relid) DESC LIMIT 10
    • Focus on tables where historical data is rarely queried: logs, audit trails, events, old orders, expired sessions
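The core archive-then-delete loop implied above can be sketched as a batched copy/delete pass. This is a minimal illustration using Python's sqlite3 as a stand-in for the production database; the table names (events, events_archive), the id/created_at columns, the 90-day cutoff, and the batch size are illustrative assumptions, not part of the original skill:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

BATCH_SIZE = 1000  # small batches keep each delete transaction short


def archive_old_rows(conn: sqlite3.Connection, cutoff: str) -> int:
    """Move rows with created_at < cutoff from events to events_archive.

    Copies and deletes in batches, one transaction per batch, so a failure
    mid-run leaves every already-processed batch fully archived.
    """
    moved = 0
    while True:
        cur = conn.execute(
            "SELECT id FROM events WHERE created_at < ? LIMIT ?",
            (cutoff, BATCH_SIZE),
        )
        ids = [row[0] for row in cur.fetchall()]
        if not ids:
            break
        placeholders = ",".join("?" * len(ids))
        with conn:  # one transaction: copy the batch, then delete it
            conn.execute(
                f"INSERT INTO events_archive SELECT * FROM events "
                f"WHERE id IN ({placeholders})",
                ids,
            )
            conn.execute(
                f"DELETE FROM events WHERE id IN ({placeholders})", ids
            )
        moved += len(ids)
    return moved


if __name__ == "__main__":
    # Demo setup: six events, one per 30 days of age
    conn = sqlite3.connect(":memory:")
    schema = "(id INTEGER PRIMARY KEY, created_at TEXT, payload TEXT)"
    conn.execute(f"CREATE TABLE events {schema}")
    conn.execute(f"CREATE TABLE events_archive {schema}")
    now = datetime.now(timezone.utc)
    conn.executemany(
        "INSERT INTO events VALUES (?, ?, ?)",
        [(i, (now - timedelta(days=i * 30)).isoformat(), f"evt-{i}")
         for i in range(1, 7)],
    )
    cutoff = (now - timedelta(days=90)).isoformat()
    print(f"archived {archive_old_rows(conn, cutoff)} rows")
```

Batching matters in production: a single unbounded DELETE on a large table can hold locks and bloat the WAL, whereas per-batch transactions keep the primary responsive while the pass runs.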