Analyze Lab Video — Cell Behavior

Overview

analyze_lab_video_cell_behavior converts raw time-lapse microscopy video or first-person XR lab recordings into quantitative cell biology data. The skill ingests brightfield, phase-contrast, or fluorescence video, runs single-cell tracking and phenotype classification through a VLM / computer-vision pipeline, and returns a structured JSON payload containing per-cell trajectories, population growth curves, migration statistics, and apoptosis/division event counts. In a single step, it turns unstructured lab footage into publication-ready metrics, fully aligned with the LabOS "from video to paper" vision.
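A minimal sketch of what such a structured payload might look like. All field names here are illustrative assumptions based on the description above, not the skill's documented schema:

```python
import json

# Hypothetical payload in the shape described above: per-cell trajectories,
# a population growth curve, and apoptosis/division event counts.
payload = {
    "cells": [
        {
            "cell_id": 1,
            "trajectory": [[0.0, 12.5, 40.2], [300.0, 13.1, 41.0]],  # [t_sec, x_um, y_um]
            "phenotype": "migratory",
        }
    ],
    "population": {
        "confluence_pct": [18.0, 22.5, 27.9],  # one value per frame
        "frame_interval_sec": 300,
    },
    "events": {"divisions": 4, "apoptoses": 1},
}

# Downstream tools would consume this as plain JSON.
restored = json.loads(json.dumps(payload))
print(restored["events"]["divisions"])
```

Because the output is plain JSON, any plotting or reporting tool in the pipeline can consume it without bespoke parsers.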

When to Use This Skill

Use this skill when any of the following conditions are present:

  • Time-lapse microscopy analysis: A researcher has recorded brightfield, phase-contrast, DIC, or fluorescence (GFP, mCherry) time-lapse videos of cell cultures and needs automated quantification without manual cell counting or commercial software (Fiji, Imaris, Cellpose GUI).
  • XR lab recording playback: A first-person or overhead XR camera captured an ongoing cell culture experiment and the agent must retroactively extract cell behavior metrics from the footage.
  • Cell motility assays: Wound-healing (scratch assay), Boyden chamber, or transwell migration experiments require automated measurement of migration front velocity, closure rate, or directionality index.
  • Growth and proliferation quantification: Confluence over time, doubling time, or colony-forming unit (CFU) counts must be computed from phase-contrast or brightfield videos without manual inspection.
  • Apoptosis / cytotoxicity screening: A drug treatment experiment requires automatic detection of apoptotic morphology (membrane blebbing, cell shrinkage, nuclear condensation) at the population level for IC50 or Z-factor calculation.
  • Live-cell imaging pipelines: The lab runs high-content screening (HCS) or high-content imaging (HCI) and needs to programmatically extract phenotypic readouts from multi-well plate videos for batch processing.
  • Report or figure generation: Downstream tools (matplotlib, plotly, scientific-visualization, pptx-generation) need structured numeric inputs (trajectories, growth curves, event rates) extracted from video data that would be infeasible to annotate manually at scale.
  • Multi-experiment comparison: Several video datasets from different drug doses, cell lines, or time points must be processed with a uniform pipeline to enable statistically comparable outputs.
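Several of the metrics above (doubling time, migration speed) can be derived directly from the returned trajectories and confluence curves. A hedged sketch, assuming the illustrative field layout of a `[t_sec, x_um, y_um]` track and a per-frame confluence list (these shapes are assumptions, not the skill's fixed schema):

```python
import math

def doubling_time_hours(confluence_pct, frame_interval_sec):
    """Estimate doubling time from a growing confluence curve by fitting
    exponential growth between the first and last frames."""
    c0, c1 = confluence_pct[0], confluence_pct[-1]
    elapsed_h = frame_interval_sec * (len(confluence_pct) - 1) / 3600.0
    growth_rate = math.log(c1 / c0) / elapsed_h  # per hour
    return math.log(2) / growth_rate

def mean_speed_um_per_h(trajectory):
    """Mean path speed of one cell from a [t_sec, x_um, y_um] track."""
    total_dist = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(trajectory, trajectory[1:]):
        total_dist += math.hypot(x1 - x0, y1 - y0)
    elapsed_h = (trajectory[-1][0] - trajectory[0][0]) / 3600.0
    return total_dist / elapsed_h

# Example: confluence growing 10% -> 40% over 24 h implies a 12 h doubling time.
print(round(doubling_time_hours([10.0, 20.0, 40.0], 12 * 3600), 1))  # 12.0
```

Running the same two functions over every dataset is what makes the multi-experiment comparisons above statistically uniform.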
Repository: wu-yc/labclaw
Installs: 14
GitHub Stars: 993
First Seen: Mar 15, 2026