spark

Purpose

Apache Spark is a fast, distributed processing framework for handling large-scale data sets using in-memory computing. It enables efficient batch processing, real-time analytics, machine learning, and graph processing on clusters.

When to Use

Use Spark for processing datasets larger than a single machine's memory, such as analyzing terabytes of log data or running ETL jobs. Apply it in scenarios requiring fast iterative computations, like machine learning algorithms, or when integrating with big data ecosystems like Hadoop. Avoid it for small-scale tasks where simpler tools like Pandas suffice.

Key Capabilities

  • In-memory caching for speeding up iterative algorithms, e.g., via persist(StorageLevel.MEMORY_ONLY); see the sketch after this list.
  • Fault-tolerant distributed computing with RDDs (Resilient Distributed Datasets) for automatic recovery.
  • Support for multiple languages: Scala, Python, Java, R; e.g., use PySpark for DataFrames with from pyspark.sql import SparkSession.
  • Built-in libraries: Spark SQL for structured data queries, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for real-time data.
  • Scalability to thousands of nodes, with dynamic resource allocation via YARN or Kubernetes.
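
A minimal PySpark sketch of the caching capability above. The input path, column name, and app name are illustrative assumptions, not part of the skill:

```python
from pyspark.sql import SparkSession
from pyspark import StorageLevel

# Local session for illustration; on a cluster the master URL would differ.
spark = (
    SparkSession.builder
    .appName("caching-example")
    .master("local[*]")
    .getOrCreate()
)

# Hypothetical input: a CSV of events with a 'user_id' column.
events = spark.read.csv("events.csv", header=True, inferSchema=True)

# Keep the parsed data in memory so repeated passes, as in iterative
# algorithms, skip re-reading and re-parsing the source.
events.persist(StorageLevel.MEMORY_ONLY)

total = events.count()                        # first action materializes the cache
per_user = events.groupBy("user_id").count()  # reuses the cached data
per_user.show()

events.unpersist()
spark.stop()
```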

Usage Patterns

To process data with Spark, start by creating a SparkSession in your code. For batch jobs, submit with spark-submit; for interactive work, use the Spark shells (spark-shell for Scala, pyspark for Python). Specify the master URL, such as "yarn" for cluster mode or "local[*]" for single-machine testing. Read data from files or databases, transform it with DataFrames, and write the results out. For streaming, use Structured Streaming to process sources such as Kafka topics in real time; both patterns are sketched below.
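
As a concrete sketch of the batch pattern, the snippet below reads, transforms, and writes a DataFrame. The paths, field names, and app name are illustrative assumptions:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Batch ETL sketch: read raw JSON logs, aggregate, write Parquet output.
# For cluster execution this script would be launched with spark-submit,
# e.g.: spark-submit --master yarn --deploy-mode cluster etl_job.py
spark = SparkSession.builder.appName("etl-job").getOrCreate()

# Hypothetical source: JSON logs with 'status', 'bytes', and 'timestamp' fields.
logs = spark.read.json("hdfs:///data/raw/logs/")

daily_errors = (
    logs.filter(F.col("status") >= 500)
        .withColumn("day", F.to_date("timestamp"))
        .groupBy("day")
        .agg(F.count("*").alias("errors"), F.sum("bytes").alias("bytes"))
)

daily_errors.write.mode("overwrite").parquet("hdfs:///data/reports/errors/")
spark.stop()
```

For the streaming pattern, a sketch that consumes a Kafka topic with Structured Streaming. The broker address and topic name are placeholders, and the Kafka source requires the spark-sql-kafka connector package on the classpath:

```python
from pyspark.sql import SparkSession

# Streaming sketch: read a Kafka topic and echo messages to the console.
spark = SparkSession.builder.appName("stream-job").getOrCreate()

stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "events")
         .load()
)

# Kafka values arrive as binary; cast to string before further processing.
messages = stream.selectExpr("CAST(value AS STRING) AS message")

query = messages.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```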
