Routines and classes supporting MongoDB environments

jaraco.mongodb

migration manager

jaraco.mongodb.migration implements the Migration Manager as featured at the MongoWorld 2016 presentation From the Polls to the Trolls. Use it to load documents of various schema versions into a target version that your application expects.
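The idea can be sketched as a chain of per-version migration steps that lift a document to the version the application expects. This is an illustrative sketch of the concept, not the exact jaraco.mongodb.migration API; all names here are mine.

```python
# Illustrative sketch of schema-version migration (not the exact
# jaraco.mongodb.migration API): register one function per version
# step, then chain steps to lift a document to the target version.
MIGRATIONS = {}

def migrates(from_version, to_version):
    """Register a function migrating a doc from one version to the next."""
    def register(func):
        MIGRATIONS[from_version] = (to_version, func)
        return func
    return register

@migrates(1, 2)
def add_email(doc):
    # version 2 introduced an email field
    doc.setdefault('email', None)
    return doc

@migrates(2, 3)
def split_name(doc):
    # version 3 split the single name field into first/last
    first, _, last = doc.pop('name', '').partition(' ')
    doc.update(first=first, last=last)
    return doc

def migrate(doc, target):
    """Apply registered steps until the doc reaches the target version."""
    while doc.get('version', 1) < target:
        to_version, step = MIGRATIONS[doc.get('version', 1)]
        doc = step(doc)
        doc['version'] = to_version
    return doc
```

A document loaded at version 1 passes through each step in order, so new steps only ever need to handle one version transition.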

sessions

jaraco.mongodb.sessions implements a CherryPy Sessions store backed by MongoDB.

By default, the session store will handle sessions with any objects that can be inserted into a MongoDB collection naturally.

To support richer objects, one may configure the codec to use jaraco.modb.
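A minimal sketch of what enabling the store in CherryPy might look like. The key names vary across CherryPy versions and the `storage_class` value here is an assumption, not confirmed jaraco.mongodb API:

```python
# Hypothetical CherryPy configuration enabling a MongoDB-backed
# session store. 'tools.sessions.on' is standard CherryPy; the
# storage_class value and connection key are assumptions and may
# not match the actual jaraco.mongodb.sessions interface.
session_config = {
    'tools.sessions.on': True,
    # assumed: point CherryPy at the MongoDB-backed store
    'tools.sessions.storage_class': 'jaraco.mongodb.sessions.Session',
    # assumed: where the store should connect
    'tools.sessions.db': 'mongodb://localhost/myapp',
}
```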

monitor-index-creation

To monitor an ongoing index operation in a server, simply invoke:

python -m jaraco.mongodb.monitor-index-creation mongodb://host/db
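The underlying idea is to poll the server's currentOp output and pick out index-build operations. The document shape returned by currentOp varies across MongoDB versions, so the fields used in this sketch are illustrative:

```python
# Sketch of the idea behind monitoring an index build: inspect the
# server's currentOp output for index-build operations. The fields
# checked here ('command', 'msg') are illustrative; their exact
# shape varies by MongoDB version.
def index_build_ops(current_op):
    """Return in-progress operations that look like index builds."""
    return [
        op for op in current_op.get('inprog', [])
        if 'createIndexes' in op.get('command', {})
        or 'Index Build' in op.get('msg', '')
    ]
```

With pymongo, one might feed this from `client.admin.command('currentOp')` in a loop, sleeping between polls until no index-build operations remain.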

move-gridfs

To move files from one gridfs collection to another, invoke:

python -m jaraco.mongodb.move-gridfs --help

And follow the usage for moving all or some gridfs files and optionally deleting the files after.
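The core of such a move can be sketched with the standard pymongo GridFS calls (`find`, `read`, `put`, `delete`); the function name and structure are mine, not the script's internals:

```python
# Illustrative sketch of moving GridFS files (not the script's actual
# internals): copy each file from a source GridFS to a destination,
# optionally deleting the source copy afterwards.
def move_gridfs_files(src_fs, dest_fs, delete=False):
    """Copy every file from src_fs to dest_fs; optionally delete originals."""
    moved = 0
    for grid_out in src_fs.find():
        # preserve the original _id and filename on the destination
        dest_fs.put(
            grid_out.read(),
            filename=grid_out.filename,
            _id=grid_out._id,
        )
        if delete:
            src_fs.delete(grid_out._id)
        moved += 1
    return moved
```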

oplog

This package provides an oplog module, based on the mongooplog-alt project, which is itself a Python remake of the official mongooplog utility shipped with MongoDB starting from version 2.2.0. It reads the oplog of a remote server and applies the operations to a local server. This can be used to keep independent replica sets loosely synced in a sort of one-way replication, and may be useful in various backup and migration scenarios.

oplog implements the basic functionality of the official utility and adds the following features:

  • tailable oplog reader: runs forever, polling for new oplog events; extremely useful for keeping two independent replica sets in almost real-time sync.

  • option to sync only selected databases/collections.

  • option to exclude one or more namespaces (i.e. dbs or collections) from being synced.

  • ability to “rename” dbs/collections on the fly, i.e. destination namespaces can differ from the original ones.

  • works on MongoDB 1.8 and later. The official utility only supports version 2.2.x and higher.

  • saves the last processed timestamp to a file, so a later run can resume from the saved point.

Invoke the command as a module script: python -m jaraco.mongodb.oplog.
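The heart of any oplog-based sync is applying each entry to the destination. Real oplog entries carry an op type ('i' insert, 'u' update, 'd' delete), a namespace 'ns', and a document 'o'; this sketch applies simplified entries to a plain dict standing in for the destination server, and is not the module's actual implementation:

```python
# Simplified sketch of oplog application (not the module's actual
# code). Real entries carry more fields (ts, h, ...) and 'u' entries
# use update operators; here updates are plain field merges.
def apply_op(store, entry):
    """Apply one simplified oplog entry to a dict-of-dicts store."""
    ns, op, doc = entry['ns'], entry['op'], entry['o']
    coll = store.setdefault(ns, {})
    if op == 'i':                      # insert
        coll[doc['_id']] = doc
    elif op == 'u':                    # update; 'o2' identifies the target
        coll[entry['o2']['_id']].update(doc)
    elif op == 'd':                    # delete
        del coll[doc['_id']]
    return store
```

A tailable reader would fetch such entries continuously from the source's `local.oplog.rs` collection and feed them through a function like this.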

Command-line options

--source <hostname><:port>

Hostname of the mongod server from which oplog operations are going to be pulled. Called “--from” in mongooplog.

--dest <hostname><:port>

Hostname of the mongod server to which oplog operations are going to be applied. Default is “localhost”. Called “--host” in mongooplog.

--window WINDOW

Time window to query, like “3 days” or “24:00”

--follow, -f

Wait for new data in the oplog. Makes the utility poll the oplog forever (until interrupted). New data is applied immediately, with at most one second of delay.

--exclude, -x

Space-separated list of namespaces to ignore, in the form dbname or dbname.collection. May be specified multiple times.

--ns

Process only these namespaces, ignoring all others. Space-separated list of strings in the form dbname or dbname.collection. May be specified multiple times.

--rename [ns_old=ns_new [ns_old=ns_new …]]

Rename database(s) and/or collection(s). Operations on namespace ns_old from the source server will be applied to namespace ns_new on the destination server. May be specified multiple times.

--resume-file FILENAME

Read from and write to this file the last processed timestamp.

-s SECONDS, --seconds SECONDS

Seconds in the past to query. Overrides any value indicated by a resume file. Deprecated; use --window instead.
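The --rename option amounts to translating namespaces through an old-to-new mapping before applying each operation. This sketch shows one plausible matching rule (a whole-database rename also covers its collections); the tool's exact logic may differ:

```python
# Sketch of how a --rename mapping might translate namespaces.
# The matching rules here are illustrative, not the tool's exact
# behavior: an exact dbname.collection match wins, then a
# whole-database rename is applied to all of that db's collections.
def rename_ns(ns, renames):
    """Translate a source namespace through a {old: new} mapping."""
    if ns in renames:                      # exact dbname.collection match
        return renames[ns]
    db, _, coll = ns.partition('.')
    if db in renames:                      # whole-database rename
        return '{}.{}'.format(renames[db], coll)
    return ns                              # no rename applies
```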

Example usages

Consider the following sample usage:

python -m jaraco.mongodb.oplog --source prod.example.com:28000 --dest dev.example.com:28500 -f --exclude logdb data.transactions --seconds 600

This command will take operations from the last 10 minutes on prod and apply them to dev. The logdb database and the transactions collection of the data database will be omitted. After the operations from the last 10 minutes are applied, the command will wait for new changes to come in, and keep running until Ctrl+C or another termination signal is received.

The tool provides a --dry-run option and when logging at the DEBUG level will emit the oplog entries. Combine these to use the tool as an oplog cat tool:

$ python -m jaraco.mongodb.oplog --dry-run -s 0 -f --source prod.example.com --ns survey_tabs -l DEBUG

Testing

Tests for oplog are written in JavaScript using the test harness that is used for testing MongoDB itself. You can run the oplog suite with:

mongo tests/oplog.js

Tests produce a lot of output. Successful execution ends with a line like this:

ReplSetTest stopSet *** Shut down repl set - test worked ****
