evaluating-code-models


BigCode Evaluation Harness - Code Model Benchmarking

Quick Start

The BigCode Evaluation Harness evaluates code generation models across 15+ benchmarks, including HumanEval, MBPP, and MultiPL-E (HumanEval and MBPP translated into 18 programming languages).

Installation:

git clone https://github.com/bigcode-project/bigcode-evaluation-harness.git
cd bigcode-evaluation-harness
pip install -e .
accelerate config
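
accelerate config is interactive; if you just want sensible single-machine defaults (an Accelerate CLI convenience, not specific to this harness), the non-interactive variant writes a default config file:

# Writes a default local config without prompting
accelerate config default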

Evaluate on HumanEval (pass@k scoring executes the generated programs, so --allow_code_execution is required; the generation settings below are typical starting values rather than tuned defaults):

accelerate launch main.py \
  --model bigcode/starcoder2-7b \
  --tasks humaneval \
  --max_length_generation 512 \
  --temperature 0.2 \
  --allow_code_execution
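
The same entry point covers the MultiPL-E languages mentioned in the overview; assuming the harness's multiple-<lang> task naming (e.g. multiple-js for JavaScript), only the task flag changes:

accelerate launch main.py \
  --model bigcode/starcoder2-7b \
  --tasks multiple-js \
  --max_length_generation 512 \
  --temperature 0.2 \
  --allow_code_execution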
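Because --allow_code_execution runs untrusted model output on the host, a common pattern is to split generation from execution with the harness's --generation_only, --save_generations, and --load_generations_path options; the file name below is just an example:

# Step 1: generate completions only; no generated code is executed
accelerate launch main.py \
  --model bigcode/starcoder2-7b \
  --tasks humaneval \
  --generation_only \
  --save_generations \
  --save_generations_path generations.json

# Step 2: execute and score the saved generations, e.g. inside a container
accelerate launch main.py \
  --model bigcode/starcoder2-7b \
  --tasks humaneval \
  --load_generations_path generations.json \
  --allow_code_execution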