# corpus-patterns

Building blocks for spaCy `Matcher` patterns: a preparatory utils library.
## Create a custom tokenizer

```python
import spacy

from corpus_patterns import set_tokenizer

nlp = spacy.blank("en")
nlp.tokenizer = set_tokenizer(nlp)
```
The tokenizer:

- removes dashes from infixes
- adds prefix/suffix rules for parentheses and brackets
- adds special-case exceptions so dotted text is treated as a single token
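In plain spaCy, the dotted-text behavior corresponds to registering tokenizer special cases. A minimal sketch of the idea (the abbreviation `Sec.` is an illustrative example, not taken from the library):

```python
import spacy
from spacy.symbols import ORTH

nlp = spacy.blank("en")

# Without a special case, "Sec." would normally be split into "Sec" and "."
nlp.tokenizer.add_special_case("Sec.", [{ORTH: "Sec."}])

doc = nlp("Sec. 5 applies here.")
print([t.text for t in doc])  # "Sec." survives as a single token
```

Keeping dotted abbreviations whole matters for `Matcher` patterns, since a pattern token can then target the abbreviation directly.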
## Add .jsonl files to a directory

Each file should contain lines of spaCy `Matcher` patterns.

```python
from pathlib import Path

from corpus_patterns import create_rules

create_rules(folder=Path("location-here"))  # check directory
```
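Files in this shape can also be read with the standard library alone; a hypothetical loader (`load_patterns` is not part of corpus-patterns) that treats each line of each `*.jsonl` file as one `Matcher` pattern:

```python
import json
from pathlib import Path


def load_patterns(folder: Path) -> dict[str, list]:
    """Read every *.jsonl file in `folder`; each non-empty line is one pattern."""
    rules = {}
    for path in sorted(folder.glob("*.jsonl")):
        lines = path.read_text(encoding="utf-8").splitlines()
        rules[path.stem] = [json.loads(line) for line in lines if line.strip()]
    return rules
```

Keying the result by file stem gives each pattern group a natural rule name.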
## Search the database for text fragments

```python
from corpus_patterns import set_txtcat_jsonl_files

jsonl_dir = set_txtcat_jsonl_files()  # returns directory `ASSET_PATH/txtcats`
```
## Custom loader for main database queries (for prodigy)

See its purpose in the prodigy docs.

```python
from corpus_patterns import fts

fts('"police power"', limit=10)  # note the FTS search expression
```
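The quoted search expression follows SQLite full-text-search syntax, where double quotes mark a phrase query. A stand-alone illustration using only the stdlib `sqlite3` module (the in-memory table is hypothetical, not the library's database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(body)")
conn.executemany(
    "INSERT INTO docs VALUES (?)",
    [("the police power of the state",), ("powerful police presence",)],
)

# The double quotes make "police power" a phrase query, so only the row
# where the two words appear adjacent and in order matches.
rows = conn.execute(
    "SELECT body FROM docs WHERE docs MATCH ?", ('"police power"',)
).fetchall()
```

Without the inner quotes, `police power` would match both rows as an implicit AND of the two terms.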
## Utils

- `annotate_fragments()`: given an nlp object and some `*.txt` files, creates a single annotation `*.jsonl` file
- `extract_lines_from_txt_files()`: accepts an iterator of `*.txt` files and yields each line, after sorting the lines and deduplicating content
- `split_data()`: given a list of text strings, splits it into two groups based on the ratio provided (defaults to 0.80) and returns a dictionary containing both groups
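The splitting behavior can be sketched with the standard library; the key names `"train"` and `"dev"` and the `seed` parameter are assumptions for illustration, not the library's actual signature:

```python
import random


def split_data(texts, ratio=0.80, seed=None):
    # Hypothetical sketch: shuffle a copy, then cut the list at the ratio.
    items = list(texts)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * ratio)
    return {"train": items[:cut], "dev": items[cut:]}
```

Shuffling before cutting keeps both groups representative of the whole corpus rather than of whatever order the lines arrived in.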