Comet Overview

Apache DataFusion Comet is a high-performance accelerator for Apache Spark, built on top of the powerful Apache DataFusion query engine. Comet is designed to significantly enhance the performance of Apache Spark workloads while leveraging commodity hardware and seamlessly integrating with the Spark ecosystem without requiring any code changes.
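In practice, "without requiring any code changes" means Comet is enabled purely through Spark configuration. The sketch below illustrates this; the configuration keys and plugin class name are representative assumptions here, and the Comet Installation Guide lists the exact, version-specific settings.

    import org.apache.spark.sql.SparkSession

    // Minimal sketch: enable Comet through configuration only -- the application
    // code (reads, DataFrame transformations, SQL) stays exactly the same.
    // Config keys and the plugin class shown here are representative; consult the
    // Installation Guide for the settings matching your Comet and Spark versions.
    val spark = SparkSession.builder()
      .appName("comet-example")
      .config("spark.plugins", "org.apache.spark.CometPlugin")  // load the Comet plugin
      .config("spark.comet.enabled", "true")                    // turn Comet on
      .config("spark.comet.exec.enabled", "true")               // accelerate supported operators
      .getOrCreate()

    // Existing Spark code runs unchanged; supported parts execute natively.
    spark.read.parquet("/path/to/data").groupBy("key").count().show()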

The following diagram provides an overview of Comet’s architecture.

[Diagram: Comet Overview]

Comet aims to support:

  • a native Parquet implementation, including both reader and writer

  • full implementation of Spark operators, including Filter, Project, Aggregation, Join, Exchange, etc. (see the plan-inspection sketch after this list)

  • full implementation of Spark built-in expressions

  • a UDF framework for users to migrate their existing UDFs to native execution
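To make the operator coverage concrete, one way to see which parts of a query run natively is to inspect the physical plan. The following sketch assumes the `spark` session from the configuration example above; the Comet operator names that appear in the plan (e.g. CometScan, CometProject, CometHashAggregate) vary by version and are shown only as an illustration.

    import org.apache.spark.sql.functions.col

    // Sketch: inspect the physical plan to see which operators Comet replaced.
    // Path and column names are hypothetical; operator names in the output
    // depend on the Comet version.
    val df = spark.read.parquet("/path/to/lineitem")
      .filter(col("l_quantity") > 10)
      .groupBy(col("l_returnflag"))
      .count()

    df.explain()  // native portions show up as Comet* operators in the plan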

Architecture

The following diagram shows how Comet integrates with Apache Spark.

[Diagram: Comet System Diagram]

Feature Parity with Apache Spark

The project strives to keep feature parity with Apache Spark: users should expect the same behavior (with respect to features, configurations, query results, etc.) whether Comet is turned on or off in their Spark jobs. In addition, the Comet extension should automatically detect unsupported features and fall back to the Spark engine.

To achieve this, in addition to the unit tests within Comet itself, we also re-use the Spark SQL tests and make sure they all pass with the Comet extension enabled.
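As a rough sketch of what this parity expectation means in practice, the same query should return identical results with Comet on or off. The snippet below assumes the `spark` session from the earlier example and assumes that spark.comet.enabled can be toggled per session at runtime; treat it as an illustration rather than a prescribed verification procedure.

    import org.apache.spark.sql.functions.col

    // Sketch: feature parity means the same answers with Comet enabled or disabled.
    val query = () => spark.read.parquet("/path/to/data")
      .groupBy(col("key")).count().orderBy(col("key")).collect()

    spark.conf.set("spark.comet.enabled", "true")
    val withComet = query()

    spark.conf.set("spark.comet.enabled", "false")
    val withoutComet = query()

    assert(withComet.sameElements(withoutComet))  // same results either way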

Getting Started

Refer to the Comet Installation Guide to get started.