
Json2Parquet |Build Status|
===========================

This library wraps ``pyarrow`` to provide some tools to easily convert
JSON data into Parquet format. It is mostly implemented in Python: it
iterates over files and copies the data several times in memory, so it
is not meant to be the fastest thing available. However, it is
convenient for smaller datasets, or for people who don't have a huge
issue with speed.

Installation
~~~~~~~~~~~~

.. code:: bash

    pip install json2parquet

Usage
~~~~~

Here's how to load a random JSON dataset.

.. code:: python

    from json2parquet import convert_json

    # Infer the schema (requires reading the dataset for column names)
    convert_json(input_filename, output_filename)

    # Given column names
    convert_json(input_filename, output_filename, ["my_column", "my_int"])

    # Given a PyArrow schema
    import pyarrow as pa
    schema = pa.schema([
        pa.field('my_column', pa.string()),
        pa.field('my_int', pa.int64()),
    ])
    convert_json(input_filename, output_filename, schema)
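
For reference, here is a sketch of what an input file might look like, assuming newline-delimited JSON records (one object per line) matching the ``["my_column", "my_int"]`` example above; the filename and values are hypothetical:

.. code:: python

    import json

    # Hypothetical records matching the ["my_column", "my_int"] columns above
    records = [
        {"my_column": "hello", "my_int": 1},
        {"my_column": "world", "my_int": 2},
    ]

    # Write one JSON object per line (newline-delimited JSON)
    with open("input.json", "w") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")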


You can also work with Python data structures directly.


.. code:: python

    from json2parquet import load_json, ingest_data, write_parquet, write_parquet_dataset

    # Loading JSON to a PyArrow RecordBatch (schema is optional as above)
    load_json(input_filename, schema)

    # Working with a list of dictionaries
    ingest_data(input_data, schema)

    # Writing Parquet files from PyArrow RecordBatches
    write_parquet(data, destination)

    # You can also pass any keyword arguments that PyArrow accepts
    write_parquet(data, destination, compression='snappy')

    # You can also write partitioned data
    write_parquet_dataset(data, destination_dir, partition_cols=["foo", "bar", "baz"])


If you know your schema, you can specify a custom datetime format (only one per dataset for now). This format is ignored unless you also pass a PyArrow schema.

.. code:: python

    from json2parquet import convert_json

    # Given a PyArrow schema
    import pyarrow as pa
    schema = pa.schema([
        pa.field('my_column', pa.string()),
        pa.field('my_int', pa.int64()),
    ])
    date_format = "%Y-%m-%dT%H:%M:%S.%fZ"
    convert_json(input_filename, output_filename, schema, date_format=date_format)
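
The format string follows Python's ``strptime`` conventions. As a quick sanity check of the format above against a sample timestamp (the timestamp value here is made up for illustration):

.. code:: python

    from datetime import datetime

    # The same format string passed to convert_json above
    date_format = "%Y-%m-%dT%H:%M:%S.%fZ"

    # Timestamps in the source JSON would need to look like this
    parsed = datetime.strptime("2018-03-04T05:06:07.890Z", date_format)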


Although ``json2parquet`` can infer schemas, it also has helpers to pull in external ones.

.. code:: python

    from json2parquet import load_json
    from json2parquet.helpers import get_schema_from_redshift

    # Fetch the schema from Redshift (requires psycopg2)
    schema = get_schema_from_redshift(redshift_schema, redshift_table, redshift_uri)

    # Load JSON with the Redshift schema
    load_json(input_filename, schema)


Operational Notes
~~~~~~~~~~~~~~~~~

If you are using this library to convert JSON data to be read by ``Spark``, ``Athena``, ``Spectrum``, or ``Presto``, make sure you use ``use_deprecated_int96_timestamps`` when writing your Parquet files; otherwise you will see some really screwy dates.
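
Since ``write_parquet`` forwards keyword arguments to PyArrow, a minimal sketch of this might look like the following (shown here with ``pyarrow.parquet.write_table``, which is what ultimately receives the flag; the table contents and path are made up for illustration):

.. code:: python

    import datetime
    import os
    import tempfile

    import pyarrow as pa
    import pyarrow.parquet as pq

    # A tiny table with a timestamp column
    table = pa.table({"ts": [datetime.datetime(2020, 1, 1, 12, 0, 0)]})

    path = os.path.join(tempfile.mkdtemp(), "events.parquet")

    # int96 timestamps keep older Spark/Presto/Athena readers happy
    pq.write_table(table, path, use_deprecated_int96_timestamps=True)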


Contributing
~~~~~~~~~~~~


Code Changes
------------

- Clone a fork of the library
- Run ``make setup``
- Run ``make test``
- Apply your changes (don't bump version)
- Add tests if needed
- Run ``make test`` to ensure nothing broke
- Submit PR

Documentation Changes
---------------------

It is always a struggle to keep documentation correct and up to date, so any fixes are welcome. If you don't want to clone the repo to work locally, please feel free to edit using GitHub and submit Pull Requests via GitHub's built-in features.


.. |Build Status| image:: https://travis-ci.org/andrewgross/json2parquet.svg?branch=master
   :target: https://travis-ci.org/andrewgross/json2parquet


Changelog
---------

0.0.16
~~~~~~
- Properly convert Boolean fields passed as numbers to PyArrow booleans.

0.0.15
~~~~~~
- Add support for custom datetime formatting (thanks @Madhu1512)
- Add support for writing partitioned datasets (thanks @mthota15)

0.0.14
~~~~~~
- Stop silencing Redshift errors.

0.0.13
~~~~~~
- Fix decimal type for newer pyarrow versions

0.0.12
~~~~~~
- Allow casting of int64 -> int32

0.0.11
~~~~~~
- Bump PyArrow and allow int32 data

0.0.10
~~~~~~
- Allow passing partition columns when getting a Redshift schema, so they can be skipped

0.0.9
~~~~~~
- Fix conversion of timestamp columns again

0.0.8
~~~~~~
- Fix conversion of timestamp columns

0.0.7
~~~~~~
- Force converted timestamps to max out at ``pandas.Timestamp.max`` if they exceed the resolution of ``datetime64[ns]``

0.0.6
~~~~~~
- Add automatic downcasting for Python ``float`` to ``float32`` via pandas when schema specifies ``pa.float32()``

0.0.5
~~~~~~
- Fix conversion of float types to be size specific

0.0.4
~~~~~~
- Fix ingestion of timestamp data with ns resolution

0.0.3
~~~~~~
- Add pandas dependency
- Add proper ingestion of timestamp data using Pandas ``to_datetime``

0.0.2
~~~~~~
- Fix formatting of README so it displays on PyPI

0.0.1
~~~~~~

- Initial release
- JSON/data writing support
- Redshift Schema reading support
