parallel-data-enrichment
Bulk enrichment of company, people, or product data with web-sourced fields like CEO names, funding, and contact info.
- Accepts inline JSON data or CSV files; outputs enriched results to CSV
- Runs asynchronously with progress tracking via monitoring URL and polling commands
- Requires the `parallel-cli` tool and internet access; handles large datasets with configurable timeouts
- Supports flexible field requests through natural language intent descriptions (e.g., "CEO name and founding year")
Data Enrichment
Enrich: $ARGUMENTS
Before starting
Inform the user that enrichment may take several minutes depending on the number of rows and fields requested.
Optional: Suggest output columns
If the user gave a vague intent ("enrich these companies with useful info") and you're not sure what columns to add, ask the API for a suggestion before kicking off the run:
parallel-cli enrich suggest "Find CEO and recent funding info" --json
The response is an envelope: `{title, processor, enriched_columns, warnings}`. Extract just the `enriched_columns` array (not the whole envelope) and pass it as the value of `--enriched-columns` on `enrich run`, in place of `--intent`; the two flags are alternative ways to specify what to enrich, not combined. If `suggest` returned a `processor`, pass it through explicitly via `--processor` on the run call, since it is a tuned recommendation for that schema. Skip this whole section if the user already specified the fields they want.
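The extraction step above can be sketched in Python. The envelope shape and the `--enriched-columns` / `--processor` flags come from this section; the helper name and the exact encoding of the array (JSON-serializing it into the flag value) are assumptions:

```python
import json

def build_run_args(suggest_json: str) -> list[str]:
    """Turn the envelope from `enrich suggest --json` into arguments
    for `enrich run`. Envelope shape per the docs above:
    {title, processor, enriched_columns, warnings}."""
    envelope = json.loads(suggest_json)
    args = [
        "parallel-cli", "enrich", "run",
        # Pass only the enriched_columns array, not the whole envelope,
        # and use it *instead of* --intent (the flags are alternatives).
        "--enriched-columns", json.dumps(envelope["enriched_columns"]),
    ]
    if envelope.get("processor"):
        # suggest's processor is a tuned recommendation for this schema,
        # so forward it explicitly rather than letting run pick a default.
        args += ["--processor", envelope["processor"]]
    return args
```

The point of the helper is the separation: the envelope is for the agent to unpack, and only its `enriched_columns` payload travels on to the run call.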
`enrich suggest` requires `parallel-cli` ≥ 0.3.0. If it errors with anything resembling `no such command` / `No such command` / `unknown command`, do not bail: skip the suggestion step, fall through to step 1 with `--intent`, complete the run, and mention `parallel-cli update` (or `pipx upgrade parallel-web-tools`) in the final response so the user picks up the feature next time.
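A minimal sketch of that fallback check, assuming the error text arrives as plain text on stderr; the helper name is illustrative:

```python
def suggest_unavailable(stderr: str) -> bool:
    """True if the CLI predates `enrich suggest` (parallel-cli < 0.3.0).

    Matches the error variants noted above case-insensitively, so a
    single check covers 'no such command', 'No such command', and
    'unknown command'. Any other error (network, auth) should surface
    normally rather than trigger the --intent fallback.
    """
    text = stderr.lower()
    return "no such command" in text or "unknown command" in text
```

When this returns true, proceed with `--intent` and note the upgrade path in the final response; when false, treat the failure as a real error.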