Building Data Apps
Use this skill to create interactive web applications that let stakeholders explore data, interact with ML models, and access analytics without writing code.
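For a feel of what this skill produces, here is a minimal sketch of a stakeholder-facing dashboard. It assumes Streamlit as the app framework and a local `sales.csv` with `region`, `month`, and `revenue` columns; both the framework choice and the dataset are illustrative, not requirements of the skill.

```python
# Minimal stakeholder dashboard sketch. Assumptions: Streamlit as the
# framework, and a local sales.csv with region/month/revenue columns.
import pandas as pd
import streamlit as st

st.title("Sales Explorer")

@st.cache_data  # cache the load so widget-triggered reruns stay fast
def load_data() -> pd.DataFrame:
    return pd.read_csv("sales.csv")  # hypothetical dataset

df = load_data()

# Sidebar filter: stakeholders drill into the data without writing code
region = st.sidebar.selectbox("Region", sorted(df["region"].unique()))
filtered = df[df["region"] == region]

st.metric("Total revenue", f"${filtered['revenue'].sum():,.0f}")
st.bar_chart(filtered, x="month", y="revenue")
st.dataframe(filtered)
```

Launch it with `streamlit run app.py` and share the URL; anyone with a browser can filter and read the data, no notebook required.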
When to use this skill
- Stakeholder dashboards — executives, product managers, or clients need self-service data access
- ML model demos — let users test predictions with their own inputs
- Internal data tools — operations teams need forms, filters, and reporting
- Data exploration for non-coders — business users need to drill into datasets
- Prototyping before full engineering — validate UX quickly with Python
- A/B testing interfaces — experiment with different presentations of results
- Multi-user analytics — shared tools accessed via browser (not notebooks)
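To ground the "ML model demos" bullet above, here is a hedged sketch of a prediction form, again assuming Streamlit plus a scikit-learn pipeline trained elsewhere and saved with joblib. The artifact name, feature names, and widget choices are placeholders for whatever your model actually expects.

```python
# ML model demo sketch. Assumptions: Streamlit, plus a scikit-learn
# pipeline saved as churn_model.joblib; the feature names below are
# placeholders for whatever the pipeline was trained on.
import joblib
import pandas as pd
import streamlit as st

st.title("Churn Predictor Demo")

@st.cache_resource  # load the model once per process, not on every rerun
def load_model():
    return joblib.load("churn_model.joblib")  # hypothetical artifact

model = load_model()

# A form batches the inputs so the app reruns only on submit
with st.form("inputs"):
    tenure = st.number_input("Tenure (months)", min_value=0, value=12)
    monthly = st.number_input("Monthly charges", min_value=0.0, value=70.0)
    contract = st.selectbox(
        "Contract", ["month-to-month", "one-year", "two-year"]
    )
    submitted = st.form_submit_button("Predict")

if submitted:
    row = pd.DataFrame(
        [{"tenure": tenure, "monthly_charges": monthly, "contract": contract}]
    )
    churn_probability = model.predict_proba(row)[0, 1]
    st.metric("Churn probability", f"{churn_probability:.1%}")
```

Because Streamlit reruns the whole script on every interaction, expensive steps (data loads, model loads) are wrapped in `st.cache_data` / `st.cache_resource`; that rerun model is also what makes these apps fast to prototype before handing the design to full engineering.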
When NOT to use this skill
Use a different skill for these related but distinct tasks:
| Instead of... | Use... | Because... |
| --- | --- | --- |
| Understanding dataset structure, distributions, or data quality | data-science-eda | It covers profiling, visualization, correlation analysis, and quality checks |
| Creating exploratory charts or publication-quality figures | data-science-visualization | It covers Matplotlib, Seaborn, Plotly, Altair, hvPlot/HoloViz, and Bokeh |
| Building or reviewing batch ETL, dataframe, or SQL pipelines | data-engineering-core | It covers Polars, DuckDB, PyArrow, PostgreSQL, and resilient pipeline patterns |
| Preparing data for modeling or improving feature representations | data-science-feature-engineering | It covers encoding, scaling, transformations, and feature selection |
| Exploratory analysis, reproducible research, or shared write-ups | data-science-notebooks | It covers Jupyter, JupyterLab, and marimo |
| Designing production pipelines or reviewing platform decisions | data-engineering-best-practices | It covers medallion architecture, partitioning, schema evolution, and merge patterns |