jieba_pyfast

Tokenize Chinese characters.

A Chinese text segmentation module, derived from jieba_fast, with prebuilt wheels for recent Python versions (originally Python 3.9 and 3.10; the 3.12.0 release below ships CPython 3.12 wheels).

Installation

You can install the latest stable version via:

$ pip install jieba_pyfast
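
To pin a specific release (for example the 3.12.0 release whose wheels are listed under Built Distributions below), pass an exact version to pip:

$ pip install jieba_pyfast==3.12.0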

Main Functions

jieba_pyfast keeps the upstream jieba API; for details, see https://github.com/fxsjy/jieba

Usage

>>> import jieba_pyfast as jieba
>>> jieba.lcut('下雨天留客天留我不留')
['下雨天', '留客', '天留', '我', '不留']
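
Beyond the default accurate mode shown above, upstream jieba also offers a full mode, a search-engine mode, and runtime dictionary edits. Below is a minimal sketch, assuming jieba_pyfast keeps the upstream jieba API (lcut, lcut_for_search, add_word); the sentence is the stock example from the jieba README:

import jieba_pyfast as jieba

sentence = '我来到北京清华大学'

# Accurate mode (default): returns the most likely segmentation.
print(jieba.lcut(sentence))

# Full mode: every dictionary word found in the text, overlaps included.
print(jieba.lcut(sentence, cut_all=True))

# Search-engine mode: long words are re-split to improve index recall.
print(jieba.lcut_for_search(sentence))

# The in-memory dictionary can be extended at runtime; in upstream jieba,
# add_word also accepts an optional frequency and POS tag.
jieba.add_word('天留')

In upstream jieba, cut and cut_for_search return lazy generators, while the lcut variants return plain lists, as in the Usage example above.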

Download files

Download the file for your platform. If you're not sure which to choose, see the Python Packaging User Guide on installing packages.

Source Distributions

No source distribution files are available for this release; see the Python Packaging User Guide tutorial on generating distribution archives.

Built Distributions

jieba_pyfast-3.12.0-cp312-cp312-musllinux_1_1_x86_64.whl (5.1 MB): CPython 3.12, musllinux (musl 1.1+), x86-64

jieba_pyfast-3.12.0-cp312-cp312-musllinux_1_1_i686.whl (5.1 MB): CPython 3.12, musllinux (musl 1.1+), i686

jieba_pyfast-3.12.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (5.1 MB): CPython 3.12, manylinux (glibc 2.17+), x86-64

jieba_pyfast-3.12.0-cp312-cp312-manylinux_2_17_i686.manylinux_2_5_i686.manylinux1_i686.manylinux2014_i686.whl (5.1 MB): CPython 3.12, manylinux (glibc 2.5+ and 2.17+), i686
