
A simple Parquet converter for JSON/python data

Project description

This library wraps pyarrow to provide tools for easily converting JSON data into Parquet format. It is written mostly in Python, iterates over files, and copies the data several times in memory, so it is not meant to be the fastest thing available. It is convenient, however, for smaller data sets, or for people who don’t have a huge issue with speed.

Installation

pip install json2parquet

Usage

Here’s how to convert a JSON dataset.

from json2parquet import convert_json

# Infer Schema (requires reading dataset for column names)
convert_json(input_filename, output_filename)

# Given columns
convert_json(input_filename, output_filename, ["my_column", "my_int"])

# Given PyArrow schema
import pyarrow as pa
schema = pa.schema([
    pa.field('my_column', pa.string()),
    pa.field('my_int', pa.int64()),
])
convert_json(input_filename, output_filename, schema)
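For context, when no column list or schema is given, convert_json has to scan the data to discover column names. A minimal sketch of that kind of inference over newline-delimited JSON records (a hypothetical helper for illustration, not json2parquet’s actual code):

```python
import json

def infer_columns(lines):
    """Collect the union of keys seen across newline-delimited JSON records."""
    columns = []
    seen = set()
    for line in lines:
        record = json.loads(line)
        for key in record:
            if key not in seen:
                seen.add(key)
                columns.append(key)  # preserve first-seen order
    return columns

records = [
    '{"my_column": "a", "my_int": 1}',
    '{"my_column": "b", "extra": true}',
]
# infer_columns(records) → ['my_column', 'my_int', 'extra']
```

Note that this requires a full pass over the data before any conversion can happen, which is why supplying columns or a schema up front is cheaper.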

You can also work with Python data structures directly

from json2parquet import load_json, ingest_data, write_parquet

# Loading JSON to a PyArrow RecordBatch (schema is optional as above)
load_json(input_filename, schema)

# Working with a list of dictionaries
ingest_data(input_data, schema)

# Writing Parquet Files from PyArrow Record Batches
write_parquet(data, destination)

# You can also pass any keyword arguments that PyArrow accepts
write_parquet(data, destination, compression='snappy')

Although json2parquet can infer schemas, it has helpers to pull in external ones as well

from json2parquet import load_json
from json2parquet.helpers import get_schema_from_redshift

# Fetch the schema from Redshift (requires psycopg2)
schema = get_schema_from_redshift(redshift_schema, redshift_table, redshift_uri)

# Load JSON with the Redshift schema
load_json(input_filename, schema)

Operational Notes

If you are using this library to convert JSON data to be read by Spark, Athena, Spectrum, or Presto, make sure you pass use_deprecated_int96_timestamps=True when writing your Parquet files, otherwise you will see some really screwy dates.

Contributing

Code Changes

  • Clone a fork of the library

  • Run make setup

  • Run make test

  • Apply your changes (don’t bump version)

  • Add tests if needed

  • Run make test to ensure nothing broke

  • Submit PR

Documentation Changes

It is always a struggle to keep documentation correct and up to date, so any fixes are welcome. If you don’t want to clone the repo to work locally, please feel free to edit files directly on GitHub and submit pull requests via GitHub’s built-in features.

Changelog

0.0.7

  • Force converted Timestamps to max out at pandas.Timestamp.max if they exceed the resolution of datetime64[ns]

0.0.6

  • Add automatic downcasting for Python float to float32 via pandas when schema specifies pa.float32()

0.0.5

  • Fix conversion of float types to be size specific

0.0.4

  • Fix ingestion of timestamp data with ns resolution

0.0.3

  • Add pandas dependency

  • Add proper ingestion of timestamp data using Pandas to_datetime

0.0.2

  • Fix formatting of README so it displays on PyPI

0.0.1

  • Initial release

  • JSON/data writing support

  • Redshift Schema reading support

