A simple Parquet converter for JSON/python data

This library wraps pyarrow to provide tools for easily converting JSON data into Parquet format. It is mostly implemented in Python, iterates over files, and copies the data several times in memory, so it is not meant to be the fastest thing available. However, it is convenient for smaller data sets, or for people who don’t have a huge issue with speed.

Installation

pip install json2parquet

Usage

Here’s how to convert an arbitrary JSON dataset:

from json2parquet import convert_json

# Infer Schema (requires reading dataset for column names)
convert_json(input_filename, output_filename)

# Given columns
convert_json(input_filename, output_filename, ["my_column", "my_int"])

# Given PyArrow schema
import pyarrow as pa
schema = pa.schema([
    pa.field('my_column', pa.string()),
    pa.field('my_int', pa.int64()),
])
convert_json(input_filename, output_filename, schema)
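
A minimal end-to-end sketch (the file name and records here are made up, and this assumes the input is line-delimited JSON, one object per line):

import json

from json2parquet import convert_json

# Hypothetical sample data: one JSON object per line
records = [
    {"my_column": "foo", "my_int": 1},
    {"my_column": "bar", "my_int": 2},
]
with open("sample.json", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# Let json2parquet read the file and infer the column names
convert_json("sample.json", "sample.parquet")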

You can also work with Python data structures directly:

from json2parquet import load_json, ingest_data, write_parquet, write_parquet_dataset

# Loading JSON to a PyArrow RecordBatch (schema is optional as above)
load_json(input_filename, schema)

# Working with a list of dictionaries
ingest_data(input_data, schema)

# Writing Parquet Files from PyArrow Record Batches
write_parquet(data, destination)

# You can also pass any keyword arguments that PyArrow accepts
write_parquet(data, destination, compression='snappy')

# You can also write partitioned data
write_parquet_dataset(data, destination_dir, partition_cols=["foo", "bar", "baz"])
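
Putting those pieces together, a minimal sketch (the records, field names, and file name are illustrative):

import pyarrow as pa

from json2parquet import ingest_data, write_parquet

# Hypothetical records and schema, for illustration only
data = [
    {"my_column": "foo", "my_int": 1},
    {"my_column": "bar", "my_int": 2},
]
schema = pa.schema([
    pa.field('my_column', pa.string()),
    pa.field('my_int', pa.int64()),
])

# ingest_data returns a PyArrow RecordBatch; write_parquet writes it to disk
batch = ingest_data(data, schema)
write_parquet(batch, "output.parquet", compression="snappy")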

If you know your schema, you can specify a custom datetime format (only one per dataset for now). The format is ignored if you don’t pass a PyArrow schema.

from json2parquet import convert_json

# Given PyArrow schema
import pyarrow as pa
schema = pa.schema([
    pa.field('my_column', pa.string()),
    pa.field('my_int', pa.int64()),
])
date_format = "%Y-%m-%dT%H:%M:%S.%fZ"
convert_json(input_filename, output_filename, schema, date_format=date_format)
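
For the format to apply, the schema needs a timestamp field for the string values to be parsed into; a sketch with an assumed field name and file names (not taken from the project docs):

import pyarrow as pa

from json2parquet import convert_json

# Hypothetical schema: date_format applies to string timestamps being
# parsed into the timestamp column
schema = pa.schema([
    pa.field('my_column', pa.string()),
    pa.field('my_event_time', pa.timestamp('ns')),
])

# Input records would carry strings like "2018-01-01T12:00:00.000000Z"
date_format = "%Y-%m-%dT%H:%M:%S.%fZ"
convert_json("input.json", "output.parquet", schema, date_format=date_format)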

Although json2parquet can infer schemas, it also has helpers to pull in external ones:

from json2parquet import load_json
from json2parquet.helpers import get_schema_from_redshift

# Fetch the schema from Redshift (requires psycopg2)
schema = get_schema_from_redshift(redshift_schema, redshift_table, redshift_uri)

# Load JSON with the Redshift schema
load_json(input_filename, schema)
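
A sketch with placeholder connection details (the schema, table, and URI below are made-up examples; the URI is assumed to be a standard PostgreSQL-style connection string, since the helper uses psycopg2):

from json2parquet import load_json
from json2parquet.helpers import get_schema_from_redshift

# Placeholder values, for illustration only
redshift_uri = "postgresql://user:password@my-cluster.example.com:5439/mydb"
schema = get_schema_from_redshift("public", "my_table", redshift_uri)

# Load line-delimited JSON that matches the table's columns
record_batch = load_json("input.json", schema)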

Operational Notes

If you are using this library to convert JSON data to be read by Spark, Athena, Spectrum, or Presto, make sure you pass use_deprecated_int96_timestamps=True when writing your Parquet files; otherwise you will see some really screwy dates.
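
Since write_parquet passes keyword arguments through to PyArrow, the flag can be set directly; a sketch, reusing the batch from the earlier ingest_data example:

from json2parquet import write_parquet

# use_deprecated_int96_timestamps makes PyArrow write int96-encoded
# timestamps, which these engines expect
write_parquet(batch, "output.parquet", use_deprecated_int96_timestamps=True)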

Contributing

Code Changes

  • Clone a fork of the library

  • Run make setup

  • Run make test

  • Apply your changes (don’t bump the version)

  • Add tests if needed

  • Run make test to ensure nothing broke

  • Submit PR

Documentation Changes

It is always a struggle to keep documentation correct and up to date, so any fixes are welcome. If you don’t want to clone the repo to work locally, feel free to edit on GitHub and submit a pull request using GitHub’s built-in features.

Changelog

0.0.15

  • Add support for custom datetime formatting (thanks @Madhu1512)

  • Add support for writing partitioned datasets (thanks @mthota15)

0.0.14

  • Stop silencing Redshift errors.

0.0.13

  • Fix decimal type for newer pyarrow versions

0.0.12

  • Allow casting of int64 -> int32

0.0.11

  • Bump PyArrow and allow int32 data

0.0.10

  • Allow passing partition columns when getting a Redshift schema, so they can be skipped

0.0.9

  • Fix conversion of timestamp columns again

0.0.8

  • Fix conversion of timestamp columns

0.0.7

  • Force converted Timestamps to max out at pandas.Timestamp.max if they exceed the resolution of datetime64[ns]

0.0.6

  • Add automatic downcasting for Python float to float32 via pandas when schema specifies pa.float32()

0.0.5

  • Fix conversion of float types to be size specific

0.0.4

  • Fix ingestion of timestamp data with ns resolution

0.0.3

  • Add pandas dependency

  • Add proper ingestion of timestamp data using Pandas to_datetime

0.0.2

  • Fix formatting of README so it displays on PyPI

0.0.1

  • Initial release

  • JSON/data writing support

  • Redshift Schema reading support
