
Tools for downloading and preserving MediaWikis. We archive MediaWikis, from Wikipedia to the tiniest wikis.


wikiteam3


Countless MediaWikis are still waiting to be archived.

Image by @gledos

wikiteam3 is a fork of mediawiki-scraper.

Why we forked mediawiki-scraper

Originally, mediawiki-scraper was named wikiteam3, but the upstream wikiteam (Python 2 version) suggested the name be changed to avoid confusion with the original wikiteam.
Half a year later, we still had not seen any Python 3 porting progress in the original wikiteam, and mediawiki-scraper lacked code reviewers.
So we decided to go against that suggestion, fork the project, rename it back to wikiteam3, host the code here, and publish it to PyPI.

Everything is still under the GPLv3 license.

Installation/Upgrade

pip install wikiteam3 --upgrade

Dumpgenerator usage

Downloading a wiki with complete XML history and images

wikiteam3dumpgenerator http://wiki.domain.org --xml --images

[!WARNING]

NTFS/Windows users please note: when using --images, NTFS does not allow characters such as :*?"<>| in filenames, so some files may not be downloaded; watch for the XXXXX could not be created by OS errors in your errors.log. We will not add special handling for NTFS/EncFS "path too long/illegal filename" issues, and we highly recommend using ext4/xfs/btrfs, etc., because:

- Introducing an "illegal filename rename" mechanism would add complexity. WikiTeam (Python 2) had this feature, but it caused more problems than it solved, so it was removed in WikiTeam3.
- It would confuse the final user of the wikidump (usually the wiki site administrator).
- NTFS is not suitable for a large-scale image dump with millions of files in a single directory. (Windows background services occasionally scan the whole disk; we assume nobody uses Windows/NTFS for large-scale MediaWiki archiving.)
- Using another file system avoids all of these problems.
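
If you run a dump on NTFS anyway, a quick way to spot skipped files is to search errors.log for that message; a minimal sketch, assuming you run it from the dump directory where errors.log is written:

# Count and then list image files skipped because of filesystem restrictions
grep -c "could not be created by OS" errors.log
grep "could not be created by OS" errors.log | head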

Manually specifying api.php and/or index.php

If the script can't find the api.php and/or index.php paths by itself, you can provide them:

wikiteam3dumpgenerator --api http://wiki.domain.org/w/api.php --xml --images
wikiteam3dumpgenerator --api http://wiki.domain.org/w/api.php --index http://wiki.domain.org/w/index.php \
    --xml --images

If you only want the XML histories, just use --xml. For only the images, just --images. For only the current version of every page, --xml --curonly.
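
For example, a dump containing only the current revision of every page (no full history, no images) would look like this, using the same placeholder wiki URL as above:

wikiteam3dumpgenerator http://wiki.domain.org --xml --curonly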

Resuming an incomplete dump

wikiteam3dumpgenerator \
    --api http://wiki.domain.org/w/api.php --xml --images --resume --path /path/to/incomplete-dump

In the above example, --path is only necessary if the download path (wikidump dir) is not the default.

[!NOTE]

When resuming an incomplete dump, the configuration in config.json will override the CLI parameters. (Not all CLI parameters are ignored, though; check config.json for details.)

wikiteam3dumpgenerator will also ask you if you want to resume if it finds an incomplete dump in the path where it is downloading.

Using wikiteam3uploader

Requirements

[!NOTE]

Please make sure you meet the following requirements before using wikiteam3uploader. You do not need to install them if you do not want to upload the dump to the Internet Archive (IA). A quick sanity-check snippet follows this list.

  • An unbound localhost port 62954 (for the multi-process compression queue)

  • 3GB+ RAM (~2.56GB for compressing)

  • 64-bit OS (required by the 2G wlog size)

  • 7z (binary)

    Debian/Ubuntu: install p7zip-full

    [!NOTE]

    Windows: install https://7-zip.org and add 7z.exe to PATH

  • zstd (binary)

    1.5.5+ (recommended), v1.5.0-v1.5.4 (DO NOT USE), 1.4.8 (minimum)
    install from https://github.com/facebook/zstd

    [!NOTE]

    Windows: add zstd.exe to PATH
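
Before starting an upload, a minimal sanity check of the binary requirements could look like this; it only confirms that 7z and zstd are on PATH and prints the zstd version so you can verify it is 1.5.5+ (and not 1.5.0-1.5.4). RAM and the free localhost port are left for you to check:

# Confirm the required binaries are reachable on PATH
command -v 7z || echo "7z not found: install p7zip-full (Debian/Ubuntu) or 7-zip (Windows)"
command -v zstd || echo "zstd not found: install it from https://github.com/facebook/zstd"
# Print the zstd version; it should be 1.5.5+ (avoid 1.5.0-1.5.4)
zstd --version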

Uploader usage

[!NOTE]

Read wikiteam3uploader --help and do not forget to prepare ~/.wikiteam3_ia_keys.txt before using wikiteam3uploader.

wikiteam3uploader {YOUR_WIKI_DUMP_PATH}
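
wikiteam3uploader --help describes the exact contents expected in ~/.wikiteam3_ia_keys.txt. As an assumption for illustration only, a common layout for Internet Archive S3-style credentials is the access key on the first line and the secret key on the second, shown here with placeholder values:

<your-ia-s3-access-key>
<your-ia-s3-secret-key>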

Checking dump integrity

TODO: xml2titles.py

If you want to check the XML dump integrity, type this into your command line to count title, page and revision XML tags:

grep -E '<title(.*?)>' *.xml -c; grep -E '<page(.*?)>' *.xml -c; grep "</page>" *.xml -c; \
grep -E '<revision(.*?)>' *.xml -c; grep "</revision>" *.xml -c

You should see something similar to this (not the actual numbers) - the first three numbers should be the same and the last two should be the same as each other:

580
580
580
5677
5677

If the first three numbers or the last two numbers differ, your XML dump is corrupt (it contains one or more unfinished </page> or </revision> elements). This is uncommon for small wikis, but large or very large wikis may fail here because of XML pages truncated while exporting and merging. The solution is to remove the XML dump and download it again; this is tedious, and it can fail again.
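
The same comparison can be scripted so you do not have to eyeball the numbers; a minimal sketch, assuming a single .xml dump file in the current directory as in the grep example above:

# Count opening/closing tags and compare them (same logic as the grep commands above)
titles=$(grep -Ec '<title(.*?)>' *.xml)
pages_open=$(grep -Ec '<page(.*?)>' *.xml)
pages_close=$(grep -c '</page>' *.xml)
revs_open=$(grep -Ec '<revision(.*?)>' *.xml)
revs_close=$(grep -c '</revision>' *.xml)
echo "titles=$titles pages=$pages_open/$pages_close revisions=$revs_open/$revs_close"
if [ "$titles" = "$pages_open" ] && [ "$pages_open" = "$pages_close" ] && [ "$revs_open" = "$revs_close" ]; then
    echo "XML dump looks consistent"
else
    echo "XML dump looks corrupt (tag counts do not match)"
fi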

Importing the wikidump into MediaWiki / wikidump data tips

[!IMPORTANT]

In the article name, spaces and underscores are treated as equivalent and each is converted to the other in the appropriate context (underscore in URL and database keys, spaces in plain text). https://www.mediawiki.org/wiki/Manual:Title.php#Article_name

[!NOTE]

WikiTeam3 uses zstd to compress .xml and .txt files, and 7z to pack images (media files).
zstd is a very fast stream compression algorithm; you can use zstd -d to decompress a .zst file/stream.
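
As an illustration, decompressing the XML dump and importing it into an existing MediaWiki with the stock importDump.php maintenance script could look like this; the dump file name is a placeholder, and importDump.php is part of MediaWiki itself, not of WikiTeam3:

# Decompress the zstd-compressed XML dump (file name is a placeholder)
zstd -d wikidump-history.xml.zst
# Import it into an existing MediaWiki installation (run from the MediaWiki root directory)
php maintenance/importDump.php < wikidump-history.xml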

Contributors

WikiTeam is the Archive Team [GitHub] subcommittee on wikis. It was founded and originally developed by Emilio J. Rodríguez-Posada, a Wikipedia veteran editor and amateur archivist. Thanks to people who have helped, especially to: Federico Leva, Alex Buie, Scott Boyd, Hydriz, Platonides, Ian McEwen, Mike Dupont, balr0g and PiRSquared17.

Mediawiki-Scraper: The Python 3 initiative is currently being led by Elsie Hupp, with contributions from Victor Gambier, Thomas Karcher, Janet Cobb, yzqzss, NyaMisty and Rob Kam.

WikiTeam3: Every archivist who has uploaded a wikidump to the Internet Archive.
