AWS Data Wrangler

Utility belt to handle data on AWS.

Read the documentation


Contents: Use Cases | Installation | Examples | Diving Deep


Use Cases

Pandas

  • Pandas -> Parquet (S3) (Parallel :rocket:)
  • Pandas -> CSV (S3) (Parallel :rocket:)
  • Pandas -> Glue Catalog
  • Pandas -> Athena (Parallel :rocket:)
  • Pandas -> Redshift (Parallel :rocket:)
  • CSV (S3) -> Pandas (One shot or Batching)
  • Athena -> Pandas (One shot or Batching)
  • CloudWatch Logs Insights -> Pandas (NEW :star:)
  • Encrypt Pandas Dataframes on S3 with KMS keys (NEW :star:)

PySpark

  • PySpark -> Redshift (Parallel :rocket:) (NEW :star:)

General

  • List S3 objects (Parallel :rocket:)
  • Delete S3 objects (Parallel :rocket:)
  • Delete listed S3 objects (Parallel :rocket:)
  • Delete NOT listed S3 objects (Parallel :rocket:)
  • Copy listed S3 objects (Parallel :rocket:)
  • Get the size of S3 objects (Parallel :rocket:)
  • Get CloudWatch Logs Insights query results (NEW :star:)

Installation

pip install awswrangler

Runs only with Python 3.6 and later.

Runs anywhere (AWS Lambda, AWS Glue, EMR, EC2, on-premises, local, etc.).

P.S. A Lambda Layer bundle and a Glue egg are available to download. Just upload them to your account and run! :rocket:
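
For reference, a minimal sketch (not from the original docs) of publishing the downloaded Lambda Layer bundle with boto3. The local file name "awswrangler-layer.zip" and the layer name are placeholder assumptions, not official values:

import boto3

lambda_client = boto3.client("lambda")

# Publish the downloaded bundle as a new Lambda Layer version
with open("awswrangler-layer.zip", "rb") as f:  # hypothetical local path to the downloaded bundle
    response = lambda_client.publish_layer_version(
        LayerName="awswrangler",  # any layer name you like
        Description="AWS Data Wrangler and its dependencies",
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.6", "python3.7"],
    )

print(response["LayerVersionArn"])  # attach this ARN to your Lambda function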

Examples

Pandas

Writing Pandas Dataframe to S3 + Glue Catalog

import awswrangler

session = awswrangler.Session()
session.pandas.to_parquet(
    dataframe=dataframe,
    database="database",
    path="s3://...",
    partition_cols=["col_name"],
)

If a Glue Database name is passed, all the metadata will be created in the Glue Catalog. If not, only the S3 data write will be done.
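
For instance, a minimal sketch of an S3-only write (no database argument), assuming the same Session API as above:

session = awswrangler.Session()
session.pandas.to_parquet(
    dataframe=dataframe,  # without a database, only the data is written to S3 (no Glue metadata)
    path="s3://...",
    partition_cols=["col_name"],
)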

Writing Pandas Dataframe to S3 as Parquet encrypting with a KMS key

extra_args = {
    "ServerSideEncryption": "aws:kms",
    "SSEKMSKeyId": "YOUR_KMS_KEY_ARN"
}
session = awswrangler.Session(s3_additional_kwargs=extra_args)
session.pandas.to_parquet(
    dataframe=dataframe,
    path="s3://..."
)

Reading from AWS Athena to Pandas

session = awswrangler.Session()
dataframe = session.pandas.read_sql_athena(
    sql="select * from table",
    database="database"
)

Reading from AWS Athena to Pandas in chunks (For memory restrictions)

session = awswrangler.Session()
dataframe_iter = session.pandas.read_sql_athena(
    sql="select * from table",
    database="database",
    max_result_size=512_000_000  # 512 MB
)
for dataframe in dataframe_iter:
    print(dataframe)  # Do whatever you want

Reading from S3 (CSV) to Pandas

session = awswrangler.Session()
dataframe = session.pandas.read_csv(path="s3://...")

Reading from S3 (CSV) to Pandas in chunks (For memory restrictions)

session = awswrangler.Session()
dataframe_iter = session.pandas.read_csv(
    path="s3://...",
    max_result_size=512_000_000  # 512 MB
)
for dataframe in dataframe_iter:
    print(dataframe)  # Do whatever you want

Reading from CloudWatch Logs Insights to Pandas

session = awswrangler.Session()
dataframe = session.pandas.read_log_query(
    log_group_names=[LOG_GROUP_NAME],
    query="fields @timestamp, @message | sort @timestamp desc | limit 5",
)

Typical Pandas ETL

import pandas
import awswrangler

df = pandas.read_...  # Read from anywhere

# Typical Pandas, Numpy or Pyarrow transformation HERE!

session = awswrangler.Session()
session.pandas.to_parquet(  # Storing the data and metadata to the Data Lake
    dataframe=df,
    database="database",
    path="s3://...",
    partition_cols=["col_name"],
)

PySpark

Loading PySpark Dataframe to Redshift

session = awswrangler.Session(spark_session=spark)
session.spark.to_redshift(
    dataframe=df,
    path="s3://...",
    connection=conn,
    schema="public",
    table="table",
    iam_role="IAM_ROLE_ARN",
    mode="append",
)

General

Deleting a bunch of S3 objects (parallel :rocket:)

session = awswrangler.Session()
session.s3.delete_objects(path="s3://...")

Get CloudWatch Logs Insights query results

session = awswrangler.Session()
results = session.cloudwatchlogs.query(
    log_group_names=[LOG_GROUP_NAME],
    query="fields @timestamp, @message | sort @timestamp desc | limit 5",
)

Diving Deep

Pandas to Redshift Flow

[Diagram: Pandas to Redshift Flow]

Spark to Redshift Flow

[Diagram: Spark to Redshift Flow]
