Tools for downloading and preserving MediaWikis. We archive MediaWikis, from Wikipedia to the tiniest wikis.
wikiteam3
*Countless MediaWikis are still waiting to be archived.* (Image by @gledos)
wikiteam3 is a fork of mediawiki-scraper.
Why we forked mediawiki-scraper
Originally, mediawiki-scraper was named wikiteam3, but the wikiteam upstream (the py2 version) suggested that the name be changed to avoid confusion with the original wikiteam.
Half a year later, we had seen no py3 porting progress in the original wikiteam, and mediawiki-scraper lacked code reviewers.
So we decided to go against that suggestion: we forked the project, renamed it back to wikiteam3, put the code here, and released it to PyPI.
Everything is still under the GPLv3 license.
Installation/Upgrade
pip install wikiteam3 --upgrade
Dumpgenerator usage
Downloading a wiki with complete XML history and images
wikiteam3dumpgenerator http://wiki.domain.org --xml --images
[!WARNING]
NTFS/Windows users, please note: when using `--images`, some files may not be downloaded because NTFS does not allow characters such as `:*?"<>|` in filenames. Watch for `XXXXX could not be created by OS` errors in your `errors.log`. We will not add special handling for NTFS/EncFS "path too long / illegal filename" cases, and we highly recommend using ext4/xfs/btrfs, etc., instead:
- Introducing an "illegal filename rename" mechanism would add complexity. WikiTeam (python2) had one, but it caused more problems than it solved, so it was removed in WikiTeam3.
- It would confuse the final user of the wikidump (usually the wiki site administrator).
- NTFS is not suitable for a large-scale image dump with millions of files in a single directory. (Windows background services occasionally scan the whole disk; we assume nobody is doing large-scale MediaWiki archiving on WIN/NTFS.)
- Using another file system avoids all of these problems.
Manually specifying api.php and/or index.php
If the script can't find the api.php and/or index.php paths by itself, you can provide them:
wikiteam3dumpgenerator --api http://wiki.domain.org/w/api.php --xml --images
wikiteam3dumpgenerator --api http://wiki.domain.org/w/api.php --index http://wiki.domain.org/w/index.php \
--xml --images
If you only want the XML histories, just use `--xml`. For only the images, just `--images`. For only the current version of every page, `--xml --curonly`.
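For example, the same placeholder wiki with each scope (flags taken from the options above):

```bash
wikiteam3dumpgenerator http://wiki.domain.org --xml            # full XML history only
wikiteam3dumpgenerator http://wiki.domain.org --images         # images only
wikiteam3dumpgenerator http://wiki.domain.org --xml --curonly  # current revision of every page only
```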
Resuming an incomplete dump
wikiteam3dumpgenerator \
--api http://wiki.domain.org/w/api.php --xml --images --resume --path /path/to/incomplete-dump
In the above example, `--path` is only necessary if the download path (the wikidump directory) is not the default.
[!NOTE]
When resuming an incomplete dump, the configuration in `config.json` will override the CLI parameters. (But not all CLI parameters are ignored; check `config.json` for details.)
`wikiteam3dumpgenerator` will also ask whether you want to resume if it finds an incomplete dump in the path where it is downloading.
Using wikiteam3uploader
Requirements
[!NOTE]
Please make sure you meet the following requirements before using `wikiteam3uploader`. You don't need to install them if you don't want to upload the dump to IA. (A quick sanity-check sketch follows this list.)

- An unbound localhost port 62954 (for the multi-process compression queue)
- 3GB+ RAM (~2.56GB for compressing)
- A 64-bit OS (required by the 2G `wlog` size)
- `7z` (binary). Debian/Ubuntu: install `p7zip-full`. Windows: install https://7-zip.org and add `7z.exe` to PATH.
- `zstd` (binary): 1.5.5+ (recommended), v1.5.0-v1.5.4 (DO NOT USE), 1.4.8 (minimum). Install from https://github.com/facebook/zstd. Windows: add `zstd.exe` to PATH.
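A minimal sanity-check sketch for the binary requirements above, assuming a POSIX-ish shell (these are plain `7z`/`zstd` invocations, nothing wikiteam3-specific):

```bash
# Confirm the external binaries wikiteam3uploader calls are on PATH.
command -v 7z >/dev/null   || echo "7z missing: install p7zip-full (Debian/Ubuntu) or 7-zip.org (Windows)"
command -v zstd >/dev/null || echo "zstd missing: install from https://github.com/facebook/zstd"
# Check the zstd version: 1.5.5+ recommended, 1.4.8 minimum, never v1.5.0-v1.5.4.
zstd --version
```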
Uploader usage
[!NOTE]
Read `wikiteam3uploader --help` and do not forget to prepare `~/.wikiteam3_ia_keys.txt` before using `wikiteam3uploader`.
wikiteam3uploader {YOUR_WIKI_DUMP_PATH}
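A sketch of preparing the keys file, assuming it holds your IA S3 access key on the first line and your secret key on the second (run `wikiteam3uploader --help` to confirm the expected format; the key values below are placeholders):

```bash
# Keys come from https://archive.org/account/s3.php
printf '%s\n%s\n' "YOUR_IA_ACCESS_KEY" "YOUR_IA_SECRET_KEY" > ~/.wikiteam3_ia_keys.txt
chmod 600 ~/.wikiteam3_ia_keys.txt  # keep the credentials readable only by you
```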
Checking dump integrity
TODO: xml2titles.py
If you want to check the XML dump integrity, type this into your command line to count title, page and revision XML tags:
grep -E '<title(.*?)>' *.xml -c; grep -E '<page(.*?)>' *.xml -c; \
grep "</page>" *.xml -c; grep -E '<revision(.*?)>' *.xml -c; grep "</revision>" *.xml -c
You should see something similar to this (not the actual numbers) - the first three numbers should be the same and the last two should be the same as each other:
580
580
580
5677
5677
If your first three numbers or your last two numbers differ, your XML dump is corrupt (it contains one or more unfinished `</page>` or `</revision>` tags). This is not common in small wikis, but large or very large wikis may fail here because XML pages get truncated during export and merging. The solution is to remove the XML dump and re-download it; this is a bit tedious, and it can fail again.
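If you would rather have the comparison done for you, here is a small sketch of the same check, assuming a single XML dump file in the current directory (with multiple files, `grep -c` prints per-file counts and the comparisons below would break):

```bash
titles=$(grep -Ec '<title(.*?)>' *.xml)
pages=$(grep -Ec '<page(.*?)>' *.xml)
pages_closed=$(grep -c '</page>' *.xml)
revisions=$(grep -Ec '<revision(.*?)>' *.xml)
revisions_closed=$(grep -c '</revision>' *.xml)
if [ "$titles" = "$pages" ] && [ "$pages" = "$pages_closed" ] && [ "$revisions" = "$revisions_closed" ]; then
    echo "OK: $pages pages, $revisions revisions"
else
    echo "CORRUPT: titles=$titles pages=$pages/$pages_closed revisions=$revisions/$revisions_closed"
fi
```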
Importing a wikidump into MediaWiki / wikidump data tips
[!IMPORTANT]
In the article name, spaces and underscores are treated as equivalent and each is converted to the other in the appropriate context (underscore in URL and database keys, spaces in plain text). https://www.mediawiki.org/wiki/Manual:Title.php#Article_name
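As a quick illustration of that rule (plain bash string substitution, not a wikiteam3 command):

```bash
title="Main Page"
echo "${title// /_}"      # -> Main_Page (the form used in URLs and database keys)
url_title="Main_Page"
echo "${url_title//_/ }"  # -> Main Page (the form used in plain text)
```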
[!NOTE]
WikiTeam3 uses `zstd` to compress `.xml` and `.txt` files, and `7z` to pack images (media files).
`zstd` is a very fast stream compression algorithm; you can use `zstd -d` to decompress a `.zst` file/stream.
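For example, to unpack a finished dump (the filenames here are placeholders; your dump directory is named after the wiki and the dump date):

```bash
zstd -d example-wikidump/*.xml.zst  # decompress the XML dumps in place
7z x example-wikidump-images.7z     # unpack the media files
```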
Contributors
WikiTeam is the Archive Team [GitHub] subcommittee on wikis. It was founded and originally developed by Emilio J. Rodríguez-Posada, a Wikipedia veteran editor and amateur archivist. Thanks to people who have helped, especially to: Federico Leva, Alex Buie, Scott Boyd, Hydriz, Platonides, Ian McEwen, Mike Dupont, balr0g and PiRSquared17.
Mediawiki-Scraper: the Python 3 initiative is currently being led by Elsie Hupp, with contributions from Victor Gambier, Thomas Karcher, Janet Cobb, yzqzss, NyaMisty and Rob Kam.
WikiTeam3: every archivist who has uploaded a wikidump to the Internet Archive.
File details
Details for the file wikiteam3-4.2.3.tar.gz.
File metadata
- Download URL: wikiteam3-4.2.3.tar.gz
- Upload date:
- Size: 86.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.3.1 CPython/3.10.12 Linux/5.15.0-97-generic
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `bccc9fa26ef6e5eb42ac922953727122fb0da20423f592db35bb91e75bd873b2` |
| MD5 | `1c3c4e3473d2b1153bc083138024fdb2` |
| BLAKE2b-256 | `3df5d25459fd7ca2143a071201b2338589851728e47af766c1202042254a13c9` |
File details
Details for the file wikiteam3-4.2.3-py3-none-any.whl.
File metadata
- Download URL: wikiteam3-4.2.3-py3-none-any.whl
- Upload date:
- Size: 102.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.3.1 CPython/3.10.12 Linux/5.15.0-97-generic
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `6ab5c8c4f70f36a84eebfadddd573d75e14700f532548e6d1dfd6913d5e069e8` |
| MD5 | `5d4739573f38146a544da57d436f1f3e` |
| BLAKE2b-256 | `182e1c08a9336333f94e2c541ba2f73c192adab83a2fdefda6a9e0817aec8597` |