cv-classification
Image Classification Best Practices
Architecture selection:
- Small scale (CIFAR-10/100): ResNet-18/34, WideResNet, Simple ViT
- Medium scale: ResNet-50, EfficientNet-B0/B1, DeiT-Small
- Large scale: ViT-B/16, ConvNeXt, Swin Transformer
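The scale-to-architecture mapping above can be sketched as a small lookup helper. This is a minimal illustration, not an official API: the dataset-size thresholds and the `pick_architectures` name are assumptions chosen to roughly match CIFAR-scale, mid-scale, and ImageNet-scale datasets.

```python
# Candidate architectures by dataset scale, mirroring the list above.
ARCHITECTURES = {
    "small": ["ResNet-18", "ResNet-34", "WideResNet", "Simple ViT"],      # CIFAR-10/100
    "medium": ["ResNet-50", "EfficientNet-B0", "EfficientNet-B1", "DeiT-Small"],
    "large": ["ViT-B/16", "ConvNeXt", "Swin Transformer"],                # ImageNet scale
}

def pick_architectures(num_images: int) -> list[str]:
    """Rough heuristic: choose candidates by training-set size.

    The thresholds (100k / 1M images) are illustrative assumptions,
    not values taken from the recipe above.
    """
    if num_images < 100_000:
        return ARCHITECTURES["small"]
    if num_images < 1_000_000:
        return ARCHITECTURES["medium"]
    return ARCHITECTURES["large"]
```

For example, a CIFAR-sized dataset (50k images) lands in the "small" bucket, while ImageNet (~1.3M images) lands in "large".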
Training recipe:
- Optimizer: AdamW (lr=3e-4 to 1e-3) or SGD (lr=0.1 with cosine decay)
- Weight decay: 0.01-0.1 for AdamW, 5e-4 for SGD
- Data augmentation: RandomCrop, RandomHorizontalFlip, Cutout/CutMix
- Warmup: 5-10 epochs linear warmup for transformers
- Batch size: 128-256 for CNNs, 512-1024 for ViTs (if memory allows)
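The warmup and cosine-decay schedule from the recipe above can be sketched as a pure function of the epoch index. This is a minimal sketch, assuming warmup ramps linearly to the base learning rate and cosine decay then anneals it to zero; the defaults (`base_lr=3e-4`, `warmup_epochs=5`) are taken from the ranges above but the exact choices are assumptions.

```python
import math

def lr_at_epoch(epoch: int, total_epochs: int,
                base_lr: float = 3e-4, warmup_epochs: int = 5) -> float:
    """Linear warmup to base_lr, then cosine decay to zero."""
    if epoch < warmup_epochs:
        # Linear ramp: reaches base_lr on the last warmup epoch.
        return base_lr * (epoch + 1) / warmup_epochs
    # Fraction of the post-warmup schedule completed, in [0, 1).
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

In a training loop this value would be assigned to each optimizer parameter group once per epoch; PyTorch users would typically reach for a built-in scheduler instead, but the arithmetic is the same.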
Standard benchmarks:
- CIFAR-10: ~96% (ResNet-18), ~97% (WideResNet)
- CIFAR-100: ~80% (ResNet-18), ~84% (WideResNet)
- ImageNet: ~76% (ResNet-50), ~81% (ViT-B/16)