nnunet-segmentation
nnU-Net Automated Medical Image Segmentation
Overview
nnU-Net (no-new-Net) is a self-configuring deep learning framework for biomedical image segmentation. Given a labeled training dataset, nnU-Net automatically determines the optimal network architecture (2D, 3D full-resolution, or 3D cascade), preprocessing steps (resampling, normalization, patch size), training schedule, and post-processing. It consistently achieves state-of-the-art performance across diverse imaging modalities and anatomical structures without manual hyperparameter tuning. nnU-Net v2 (nnunetv2) is the current release with a Python API for inference alongside the standard CLI.
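The standard CLI workflow runs in three steps: plan/preprocess, train, predict. A minimal sketch follows; the dataset ID `001`, the `3d_fullres` configuration, and all paths are placeholder choices, and the commands assume nnunetv2 is installed with the `nnUNet_raw`, `nnUNet_preprocessed`, and `nnUNet_results` environment variables set:

```shell
# Auto-configure the pipeline: fingerprint the dataset, then derive
# patch size, target spacing, and normalization per configuration
nnUNetv2_plan_and_preprocess -d 001 --verify_dataset_integrity

# Train one fold of the 3D full-resolution U-Net; repeat for
# folds 0-4 to enable cross-validated ensembling
nnUNetv2_train 001 3d_fullres 0

# Segment new cases with the trained fold
nnUNetv2_predict -i /path/to/input_images -o /path/to/predictions \
    -d 001 -c 3d_fullres -f 0
```

Training all five folds and ensembling them is nnU-Net's recommended default; a single fold trades some accuracy for a 5x reduction in training time.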
When to Use
- Segmenting anatomical structures in CT or MRI scans (organs, tumors, lesions) when you have 20+ annotated training cases
- Automating cell or nucleus segmentation in 3D fluorescence or electron microscopy volumes
- Establishing a strong baseline for any new segmentation challenge without manually tuning a U-Net
- Running inference on new images using a pretrained nnU-Net model from a published challenge
- Comparing segmentation methods: nnU-Net's auto-configured ensembles serve as a rigorous baseline
- Building production segmentation pipelines where training is done once and inference is repeated on many new cases
- Use Cellpose (cellpose-cell-segmentation) instead for 2D fluorescence cell segmentation without labeled training data; nnU-Net requires annotated training cases
- Use SimpleITK (simpleitk-image-registration) instead for rule-based segmentation with classical thresholding and region growing on images where deep learning training data is unavailable
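For the pretrained-model inference case above, nnunetv2 also exposes a Python predictor class. The sketch below assumes a trained `3d_fullres` model folder and a CUDA device; exact keyword arguments may differ slightly between nnunetv2 releases, and all paths are placeholders:

```python
import torch
from nnunetv2.inference.predict_from_raw_data import nnUNetPredictor

# Configure sliding-window inference: Gaussian-weighted tile
# blending and test-time mirroring, on the first GPU
predictor = nnUNetPredictor(
    tile_step_size=0.5,
    use_gaussian=True,
    use_mirroring=True,
    device=torch.device("cuda", 0),
)

# Load weights from a trained model folder; ensembling all five
# cross-validation folds is the recommended default
predictor.initialize_from_trained_model_folder(
    "/path/to/nnUNet_results/Dataset001/nnUNetTrainer__nnUNetPlans__3d_fullres",
    use_folds=(0, 1, 2, 3, 4),
    checkpoint_name="checkpoint_final.pth",
)

# Each inner list holds the input channels of one case
# (the _0000 suffix marks the first imaging modality)
predictor.predict_from_files(
    [["/path/to/case1_0000.nii.gz"]],
    ["/path/to/output/case1.nii.gz"],
    save_probabilities=False,
    num_processes_preprocessing=2,
    num_processes_segmentation_export=2,
)
```

This is useful for production pipelines (the last "When to Use" point): the predictor is initialized once and reused across many incoming cases, avoiding repeated model loading.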
Prerequisites