
Chik VDF verification (wraps C++)

Project description

Chik VDF


Building a wheel

Compiling chikvdf requires cmake, boost and GMP/MPIR.

python3 -m venv venv
source venv/bin/activate

pip install wheel setuptools_scm pybind11
pip wheel .

The primary build process for this repository uses GitHub Actions to build binary wheels for MacOS, Linux (x64 and aarch64), and Windows, and to publish them along with a source distribution on PyPI. See .github/workflows/build.yml. CMake uses FetchContent to download pybind11, and building is managed by cibuildwheel. The package can then be installed via pip install chikvdf.

Building Timelord and related binaries

In addition to building the required binary and source wheels for Windows, MacOS and Linux, chikvdf can be used to compile vdf_client and vdf_bench. vdf_client is the core VDF process that completes the Proof of Time submitted to it by the Timelord. The repo also includes a benchmarking tool to get a sense of the iterations per second of a given CPU called vdf_bench. Try ./vdf_bench square_asm 250000 for an ips estimate.

To build vdf_client set the environment variable BUILD_VDF_CLIENT to "Y". export BUILD_VDF_CLIENT=Y.

Similarly, to build vdf_bench set the environment variable BUILD_VDF_BENCH to "Y". export BUILD_VDF_BENCH=Y.

This is currently automated via pip in the install-timelord.sh script in the chik-blockchain repository which depends on this repository.

If you're running a timelord, the following tests are available, depending on which type of timelord you are running:

./1weso_test, in case you're running in sanitizer_mode.

./2weso_test, in case you're running a timelord that extends the chain and you're running the slow algorithm.

./prover_test, in case you're running a timelord that extends the chain and you're running the fast algorithm.

Those tests will simulate the vdf_client and verify the produced proofs for correctness.

Contributing and workflow

Contributions are welcome and more details are available in chik-blockchain's CONTRIBUTING.md.

The master branch is the currently released latest version on PyPI. Note that at times chikvdf will be ahead of the release version that chik-blockchain requires in its master/release version, in preparation for a new chik-blockchain release. Please branch or fork master and then create a pull request to the master branch. Linear merging is enforced on master and merging requires a completed review. PRs will kick off a CI build and analysis of chikvdf at lgtm.com. Please make sure your build is passing and that it does not increase alerts at lgtm.

Background from prior VDF competitions

Copyright 2018 Ilya Gorodetskov generic@sundersoft.com

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Our VDF construction is described in classgroup.pdf. The implementation details of the squaring and proving phases are described below.

Main VDF Loop

The main VDF loop produces repeated squarings of the generator form (i.e. calculates y(n) = g^(2^n)) as fast as possible, until the program is interrupted. Sundersoft's entry from Chik's 2nd VDF contest is used, together with the fast reducer used in Pulmark's entry. This approach is described below:
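
As a toy illustration of this loop's structure (not the real arithmetic: chikvdf squares binary quadratic forms in a class group, while this sketch uses modular integers), repeated squaring computes y(n) = g^(2^n):

```python
def vdf_toy(g: int, n: int, mod: int) -> int:
    # One squaring per iteration: after n iterations, y = g^(2^n) (mod `mod`).
    # The real loop squares class group elements with NUDUPL instead.
    y = g % mod
    for _ in range(n):
        y = (y * y) % mod
    return y

# The iterated result equals a direct exponentiation by 2^n.
assert vdf_toy(3, 10, 1000003) == pow(3, 2 ** 10, 1000003)
```

The point of a VDF is that the n squarings are inherently sequential, while the result can be verified much faster with a proof (see "Generating proofs" below).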

The NUDUPL algorithm is used. The equations are based on cryptoslava's equations from the 1st contest. They were modified slightly to increase the level of parallelism.

The GCD is a custom implementation with scalar integers. There are two base cases: one uses a lookup table with continued fractions and the other uses the Euclidean algorithm with a division table. The division table algorithm is slightly faster even though it has about 2x as many iterations.

After the base case, there is a 128-bit GCD that generates 64-bit cofactor matrices with Lehmer's algorithm. This is required to make the long integer multiplications efficient (Flint's implementation doesn't do this).

The GCD also implements Flint's partial xgcd function, but the output is slightly different. This implementation will always return an A value which is > the threshold and a B value which is <= the threshold. For a normal GCD, the threshold is 0, B is 0, and A is the GCD. Also the interfaces are slightly different.
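
The contract can be sketched with a hypothetical scalar helper (illustrative only, not the actual implementation): run Euclidean steps until the smaller value drops to the threshold or below, so the returned A stays above the threshold and B at or below it. With a threshold of 0 this degenerates to an ordinary GCD.

```python
def partial_gcd(a: int, b: int, threshold: int):
    # Assumes a >= b >= 0 and a > threshold. Stops as soon as the
    # smaller value is <= threshold, returning (A, B) with
    # A > threshold and B <= threshold.
    while b > threshold:
        a, b = b, a % b
    return a, b

assert partial_gcd(240, 46, 0) == (2, 0)     # threshold 0: A is gcd(240, 46)
assert partial_gcd(240, 46, 10) == (46, 10)  # stops early, A > 10 >= B
```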

Scalar integers are used for the GCD. I don't expect any speedup for the SIMD integers that were used in the last implementation since the GCD only uses 64x1024 multiplications, which are too small and have too high of a carry overhead for the SIMD version to be faster. In either case, most of the time seems to be spent in the base case so it shouldn't matter too much.

If SIMD integers are used with AVX-512, doubles have to be used because the multiplier sizes for doubles are significantly larger than for integers. There is an AVX-512 extension to support larger integer multiplications but no processor implements it yet. It should be possible to do a 50 bit multiply-add into a 100 bit accumulator with 4 fused multiply-adds if the accumulators have a special nonzero initial value and the inputs are scaled before the multiplication. This would make AVX-512 about 2.5x faster than scalar code for 1024x1024 integer multiplications (assuming the scalar code is unrolled and uses ADOX/ADCX/MULX properly, and the CPU can execute this at 1 cycle per iteration which it probably can't).

The GCD is parallelized by calculating the cofactors in a separate slave thread. The master thread will calculate the cofactor matrices and send them to the slave thread. Other calculations are also parallelized.

The VDF implementation from the first contest is still used as a fallback and is called about once every 5000 iterations. The GCD will encounter large quotients about this often and these are not implemented. This has a negligible effect on performance. Also, the NUDUPL case where A<=L is not implemented; it will fall back to the old implementation in this case (this never happens outside of the first 20 or so iterations).

There is also corruption detection by calculating C with a non-exact division and making sure the remainder is 0. This detected all injected random corruptions that I tested. No corruptions caused by bugs were observed during testing. This cannot correct for the sign of B being wrong.
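
For a form (a, b, c) of discriminant D = b^2 - 4ac, the check amounts to recomputing c with a division that keeps the remainder. A hypothetical sketch of the idea (not the production code):

```python
def check_c(a: int, b: int, D: int):
    # D = b^2 - 4ac must hold, so c = (b^2 - D) / (4a) must divide
    # exactly; a nonzero remainder signals corruption. Note that b
    # enters only as b^2, so a flipped sign of b is invisible here.
    c, r = divmod(b * b - D, 4 * a)
    return r == 0, c

D = 1 - 4 * 2 * 13                      # discriminant of the form (2, 1, 13)
assert check_c(2, 1, D) == (True, 13)
assert check_c(3, 1, D)[0] is False     # a corrupted `a` is detected
assert check_c(2, -1, D) == (True, 13)  # wrong sign of b goes unnoticed
```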

GCD continued fraction lookup table

This is implemented in gcd_base_continued_fractions.h and asm_gcd_base_continued_fractions.h. The division table implementation is the same as the previous entry and was discussed there. Currently the division table is only used if AVX2 is enabled but it could be ported to SSE or scalar code easily. Both implementations have about the same performance.

The initial quotient sequence of gcd(a,b) is the same as the initial quotient sequence of gcd(a*2^n/b, 2^n) for any n. This is because the GCD quotients are the same as the continued fraction quotients of a/b, and the initial continued fraction quotients only depend on the initial bits of a/b. This makes it feasible to have a lookup table since it now only has one input.
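
The equivalence is easy to check numerically (illustrative sketch): the Euclidean quotients of (a, b) and of (a*2^n//b, 2^n) agree on an initial prefix whose length grows with n.

```python
def quotients(a: int, b: int):
    # Quotient sequence of the Euclidean algorithm on (a, b); this is
    # exactly the continued fraction expansion of a / b.
    qs = []
    while b:
        qs.append(a // b)
        a, b = b, a % b
    return qs

a, b, n = 355, 113, 16
assert quotients(a, b) == [3, 7, 16]
# Scaling to (a*2^n // b, 2^n) preserves the leading quotients, which
# is what makes a single-input lookup table feasible.
assert quotients(a * 2 ** n // b, 2 ** n)[:2] == [3, 7]
```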

a*2^n/b is calculated by doing a double precision division of a/b, and then truncating the lower bits. Some of the exponent bits are used in the table in addition to the fraction bits; this makes each slot of the table vary in size depending on what the exponent is. If the result is outside the table bounds, then the division result is floored to fall back to the Euclidean algorithm (this is very rare).

The table is calculated by iterating all of the possible continued fractions that have a certain initial quotient sequence. Iteration ends when all of these fractions are either outside the table or they don't fully contain at least one slot of the table. Each slot that is fully contained by such a fraction is updated so that its quotient sequence equals the fraction's initial quotient sequence. Once this is complete, the cofactor matrices are calculated from the quotient sequences. Each cofactor matrix is 4 doubles.

The resulting code seems to have too many instructions so it doesn't perform very well. There might be some way to optimize it. It was written for SSE so that it would run on both processors.

This might work better on an FPGA possibly with low latency DRAM or SRAM (compared to the euclidean algorithm with a division table). There is no limit to the size of the table but doubling the latency would require the number of bits in the table to also be doubled to have the same performance.

Other GCD code

The gcd_128 function calculates a 128-bit GCD using Lehmer's algorithm. It is pretty straightforward and uses only unsigned arithmetic. Each cofactor matrix can only have two possible signs: [+ -; - +] or [- +; + -]. The gcd_unsigned function uses unsigned arithmetic and a jump table to apply the 64-bit cofactor matrices to the A and B values. It uses ADOX/ADCX/MULX if they are available and falls back to ADC/MUL otherwise. It will track the last known size of A to speed up the bit shifts required to get the top 128 bits of A.

No attempt was made to try to do the A and B long integer multiplications on a separate thread; I wouldn't expect any performance improvement from this.

Threads

There is a master thread and a slave thread. The slave thread only exists for each batch of 5000 or so squarings and is then destroyed and recreated for the next batch (this has no measurable overhead). If the original VDF is used as a fallback, the batch ends and the slave thread is destroyed.

Each thread has a 64-bit counter that only it can write to. Also, during a squaring iteration, it will not overwrite any value that it has previously written and transmitted to the other thread. Each squaring is split up into phases. Each thread will update its counter at the start of the phase (the counter can only be increased, not decreased). It can then wait on the other thread's counter to reach a certain value as part of a spin loop. If the spin loop takes too long, an error condition is raised and the batch ends; this should prevent any deadlocks from happening.
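
A minimal Python sketch of the counter protocol (the real code is C++ with compiler fences; Python's GIL hides the memory-ordering concerns, so this only illustrates the handshake, and the class and names are hypothetical):

```python
import threading

class PhaseCounter:
    # Owned by exactly one writer thread; the counter only increases.
    def __init__(self):
        self.value = 0

    def advance(self, phase: int):
        assert phase >= self.value  # counters can only be increased
        self.value = phase

    def wait_for(self, phase: int, max_spins: int = 50_000_000) -> bool:
        # Spin until the owning thread reaches `phase`; give up (ending
        # the batch, as described above) if the spin takes too long.
        for _ in range(max_spins):
            if self.value >= phase:
                return True
        return False

master, slave = PhaseCounter(), PhaseCounter()
seen = []

def slave_loop():
    seen.append(master.wait_for(1))  # wait for the master's phase 1
    slave.advance(1)                 # then publish our own progress

t = threading.Thread(target=slave_loop)
t.start()
master.advance(1)                    # master finishes its phase...
master_saw_slave = slave.wait_for(1) # ...and waits on the slave
t.join()
assert seen == [True] and master_saw_slave
```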

No CPU fences or atomics are required since each value can only be written to by one thread and since x86 enforces acquire/release ordering on all memory operations. Compiler memory fences are still required to prevent the compiler from caching or reordering memory operations.

The GCD master thread will increment the counter when a new cofactor matrix has been outputted. The slave thread will spin on this counter and then apply the cofactor matrix to the U or V vector to get a new U or V vector.

It was attempted to use modular arithmetic to calculate k directly, but this slowed down the program because GMP's modulo and integer multiply operations are not fast enough. It also makes the integer multiplications bigger.

The speedup isn't very high since most of the time is spent in the GCD base case and these can't be parallelized.

Generating proofs

The nested wesolowski proofs (n-wesolowski) are used to check the correctness of a VDF result. (Simple) Wesolowski proofs are described in A Survey of Two Verifiable Delay Functions. In order to prove h = g^(2^T), an n-wesolowski proof uses n intermediate simple wesolowski proofs. Given h, g, T, t1, t2, ..., tn, h1, h2, ..., hn, a correct n-wesolowski proof will verify the following:

h1 = g^(2^t1)
h2 = h1^(2^t2)
h3 = h2^(2^t3)
...
hn = h(n-1)^(2^tn)

Additionally, we must have:

t1 + t2 + ... + tn = T
hn = h

The algorithm will generate at most 64-wesolowski proofs. Some intermediate wesolowski proofs are stored in parallel with the main VDF loop. The goal is to have an n-wesolowski proof almost ready as soon as the main VDF loop finishes computing h = g^(2^T), for a T that we're interested in. We'll call a segment a tuple (y, x, T) for which we're interested in a simple wesolowski proof that y = x^(2^T). We'll call a segment finished when we've finished computing its proof.
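
A toy simple-Wesolowski round in Z_p^* shows the mechanics (the real code works in a class group of unknown order and derives the prime challenge B by hashing; the modulus p and challenge B here are purely illustrative):

```python
def prove(g: int, T: int, B: int, p: int) -> int:
    # The prover does the heavy lifting: pi = g^(2^T // B).
    return pow(g, 2 ** T // B, p)

def verify(g: int, h: int, T: int, B: int, pi: int, p: int) -> bool:
    # Since 2^T = (2^T // B)*B + r with r = 2^T mod B, checking
    # pi^B * g^r == h confirms h = g^(2^T) using only cheap
    # exponentiations (r and B are small compared to 2^T).
    r = pow(2, T, B)
    return (pow(pi, B, p) * pow(g, r, p)) % p == h % p

p, g, T, B = 1000003, 5, 20, 97
h = pow(g, 2 ** T, p)
pi = prove(g, T, B, p)
assert verify(g, h, T, B, pi, p)
assert not verify(g, (h + 1) % p, T, B, pi, p)
```

An n-wesolowski proof is n of these rounds chained end to end, with each hi serving as the base for the next round.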

Segments stored

We'll store finished segments of length 2^x for even x greater than or equal to 16. The current implementation limits the maximum segment size to 2^30, but this can be increased if needed. Let P = 16+2*l. After each 2^P steps calculated by the main VDF loop, we'll store a segment proving that we've correctly done those 2^P steps. Formally, let x be the form after k*2^P steps and y be the form after (k+1)*2^P steps, for each k >= 0 and each P = 16+2*l. Then, we'll store a segment (y, x, 2^P), together with a simple wesolowski proof.
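
The storage schedule can be sketched with a hypothetical helper: after N steps, one segment is stored per completed 2^P window, for each even P from 16 up to the 2^30 cap.

```python
def segments_stored(total_steps: int, max_exp: int = 30):
    # Enumerate (start, end, length) of every segment stored so far:
    # lengths 2^P for P = 16, 18, ..., max_exp, one per full window.
    out = []
    for P in range(16, max_exp + 1, 2):
        length = 2 ** P
        for k in range(total_steps // length):
            out.append((k * length, (k + 1) * length, length))
    return out

segs = segments_stored(2 ** 18)
# 2^18 steps complete four 2^16 windows and one 2^18 window.
assert len(segs) == 5
assert (0, 2 ** 18, 2 ** 18) in segs
```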

Segment threads

In order to finish a segment of length T=2^P, the number of iterations to run is T/k + l*2^(k+1) and the intermediate storage required is T/(k*l), for some parameters k and l, as described in the paper. The squarings used to finish a segment are about 2 times as slow as the ones used by the main VDF loop. Even so, finishing a segment is much faster than producing its y value with the main VDF loop. This allows multiple segments to be finished in the time the main VDF loop takes to advance 2^16 more steps.

The parameters used in finishing segments, for T=2^16, are k=10 and l=1. Above that, parameters are k=12 and l=2^(P-18). Note that, for P >= 18, the intermediate storage needed for a segment is constant (i.e. 2^18/12 forms stored in memory).
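
Plugging those parameters into the formulas above (a small arithmetic check, not library code):

```python
def segment_cost(P: int):
    # Iterations T/k + l*2^(k+1) and storage T/(k*l) to finish one
    # segment of length T = 2^P, with k=10, l=1 at P=16 and
    # k=12, l=2^(P-18) above that.
    T = 2 ** P
    k, l = (10, 1) if P == 16 else (12, 2 ** (P - 18))
    return T // k + l * 2 ** (k + 1), T // (k * l)

# For P >= 18, the storage term T/(k*l) collapses to the constant 2^18/12.
assert segment_cost(18)[1] == segment_cost(30)[1] == 2 ** 18 // 12
```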

The Prover class is responsible for finishing a segment. It implements pause/resume functionality, so its work can be paused and later resumed from the point where it stopped. For each unfinished segment generated by the main VDF loop, a Prover instance is created, which will eventually finish the segment.

Segment threads are responsible for deciding which Prover instances are currently running. In the current implementation, there are 3 segment threads (the number is configurable), so at most 3 Prover instances will run at once, on different threads (other Provers will be paused). The segment threads will always pick the segments with the shortest length to run. In case of a tie, the segments received earliest have priority. Every time a new segment arrives, or a segment gets finished, Provers are paused or resumed as needed. Pausing keeps at most 3 Provers running at any time, whilst resuming kicks in when fewer than 3 Provers are working but some Provers are paused.

All the segments of lengths 2^16, 2^18 and 2^20 will be finished relatively soon after the main VDF worker produces them, while the segments of length 2^22 and upwards will lag behind the main VDF worker a little. Eventually, all the larger segments will be finished, the work on them being done repeatedly via pausing (when a smaller segment arrives) and resuming (when all smaller segments are finished).

Currently, 4 more segment threads are added after the main VDF loop finishes 500 million iterations (after about 1 hour of running). This is done to be completely sure even the very large segments will be finished. This optimisation is only enabled on machines supporting at least 16 concurrent threads.

Generating n-wesolowski proof

Let T be an iteration we are interested in. Firstly, the main VDF loop needs to calculate at least T iterations. Then, in order to quickly obtain an n-wesolowski proof, we concatenate finished segments. We want the proof to be as short as possible, so we always pick finished segments of the maximum length possible. If such segments aren't finished, we choose shorter segments. A segment of length 2^(16 + 2*p) can always be replaced with 4 segments of length 2^(16 + 2*p - 2). The proof will be created shortly after the main VDF loop produces the result, as the 2^16 length segments are always up to date with the main VDF loop (and, in the worst case, we can always concatenate 2^16 length segments if bigger sizes are not finished yet). It's possible that after the concatenation we'll still need to prove up to 2^16 iterations (no segment can cover anything less than 2^16). This last piece of work is done in parallel with the main VDF loop, as an optimisation.
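
The greedy concatenation can be sketched with a hypothetical helper: cover T with the largest finished segment lengths first, leaving a sub-2^16 remainder to prove separately.

```python
def decompose(T: int, max_exp: int = 30):
    # Cover T iterations with segments, largest lengths first
    # (2^max_exp down to 2^16, even exponents only). Returns the
    # chosen segment lengths and the remainder below 2^16.
    lengths, rest = [], T
    for P in range(max_exp, 15, -2):
        size = 2 ** P
        while rest >= size:
            lengths.append(size)
            rest -= size
    return lengths, rest

assert decompose(2 ** 20 + 2 ** 16 + 5) == ([2 ** 20, 2 ** 16], 5)
# 64 segments of the maximum length 2^30 cover 2^36 iterations,
# which is where the overall iteration limit comes from.
assert decompose(2 ** 36) == ([2 ** 30] * 64, 0)
```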

The program limits the proof size to 64-wesolowski. If the number of iterations is very large, it's possible the concatenation won't fit into this. In this case, the program will retry proving every minute, until there are enough large segments to fit the 64-wesolowski limit. However, in almost all cases, the concatenation will fit the 64-wesolowski limit on the first try.

Since the maximum segment size is 2^30 and we can use at most 64 segments in a concatenation, the program will prove at most 2^36 iterations. This can be increased if needed.

Intermediates storage

In order to finish segments, some intermediate values need to be stored for each segment. For each possible segment length, we use a sliding window of length 20 to store them. Hence, for each segment length, we store only the intermediate values needed for the last 20 segments produced by the main VDF loop. Since finishing segments is faster than producing them with the main VDF loop, we assume the segment threads won't be behind the main VDF loop by more than 20 segments for any segment length. Thanks to the sliding window technique, the memory used is always constant.

Generally, the main VDF loop performs all the storing after computing a form we're interested in. However, since storing is very frequent and expensive (GMP operations), this slows down the main VDF loop.

For machines with at least 16 concurrent threads, an optimisation is provided: the main VDF loop does only repeated squaring, without storing any form. After each 2^15 steps are performed, a new thread starts redoing the work for those 2^15 steps, this time storing the intermediate values as well. All the intermediate threads and the main VDF loop work in parallel. The only purpose of the main VDF loop now becomes producing the starting values for the intermediate threads, as fast as possible. The squarings used in the intermediate threads are 2 times slower than the ones used in the main VDF loop. The intermediates are expected to lag behind the main VDF loop by only 2^15 iterations at any point: by the time the main VDF loop has done 2^16 iterations, the first thread computing the first 2^15 intermediate values has already finished, and half of the work of the second thread computing the next 2^15 intermediate values should be done.

Project details


Download files

Download the file for your platform.

Source Distribution

  • chikvdf-1.1.0.tar.gz (745.0 kB), Source

Built Distributions

  • chikvdf-1.1.0-cp311-cp311-win_amd64.whl (1.9 MB), CPython 3.11, Windows x86-64
  • chikvdf-1.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (477.2 kB), CPython 3.11, manylinux: glibc 2.17+ x86-64
  • chikvdf-1.1.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (398.4 kB), CPython 3.11, manylinux: glibc 2.17+ ARM64
  • chikvdf-1.1.0-cp311-cp311-macosx_11_0_x86_64.whl (343.0 kB), CPython 3.11, macOS 11.0+ x86-64
  • chikvdf-1.1.0-cp310-cp310-win_amd64.whl (1.9 MB), CPython 3.10, Windows x86-64
  • chikvdf-1.1.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (396.6 kB), CPython 3.10, manylinux: glibc 2.17+ ARM64
  • chikvdf-1.1.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (462.6 kB), CPython 3.10, manylinux: glibc 2.12+ x86-64
  • chikvdf-1.1.0-cp310-cp310-macosx_11_0_x86_64.whl (341.6 kB), CPython 3.10, macOS 11.0+ x86-64
  • chikvdf-1.1.0-cp39-cp39-win_amd64.whl (1.9 MB), CPython 3.9, Windows x86-64
  • chikvdf-1.1.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (396.8 kB), CPython 3.9, manylinux: glibc 2.17+ ARM64
  • chikvdf-1.1.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (462.7 kB), CPython 3.9, manylinux: glibc 2.12+ x86-64
  • chikvdf-1.1.0-cp39-cp39-macosx_11_0_x86_64.whl (341.7 kB), CPython 3.9, macOS 11.0+ x86-64
  • chikvdf-1.1.0-cp38-cp38-win_amd64.whl (1.9 MB), CPython 3.8, Windows x86-64
  • chikvdf-1.1.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (396.5 kB), CPython 3.8, manylinux: glibc 2.17+ ARM64
  • chikvdf-1.1.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (462.5 kB), CPython 3.8, manylinux: glibc 2.12+ x86-64
  • chikvdf-1.1.0-cp38-cp38-macosx_11_0_x86_64.whl (341.6 kB), CPython 3.8, macOS 11.0+ x86-64

File details

Details for the file chikvdf-1.1.0.tar.gz.

File metadata

  • Download URL: chikvdf-1.1.0.tar.gz
  • Upload date:
  • Size: 745.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.6

File hashes

Hashes for chikvdf-1.1.0.tar.gz
Algorithm Hash digest
SHA256 ebec4f76bcd7b2a38d853bf1bf3b29c59b9c656eac7076a04128e10052beaec1
MD5 d62fc711be74c9da85df86ac95e03622
BLAKE2b-256 b6a95564f23b3115989bef3b40f7982d1dde361b9ff290c54a1888e029ffb28c

See more details on using hashes here.

File details

Details for the file chikvdf-1.1.0-cp311-cp311-win_amd64.whl.

File metadata

  • Download URL: chikvdf-1.1.0-cp311-cp311-win_amd64.whl
  • Upload date:
  • Size: 1.9 MB
  • Tags: CPython 3.11, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.6

File hashes

Hashes for chikvdf-1.1.0-cp311-cp311-win_amd64.whl
Algorithm Hash digest
SHA256 6ece043be03b477357be4ab020893928de4fef2a0d6544d792ce00a5d55cb7ef
MD5 a4d8313546731597f7c2e79a5f63f38c
BLAKE2b-256 4b72f10722b451710cd77527e1ac55b9273e7e1555c5058418619a14c96ae40d


File details

Details for the file chikvdf-1.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for chikvdf-1.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 c3748805dc858177a4b16a1b609f5ebbc4e2d991f54d474ce68d34e93bf0cf97
MD5 bb515a1579c436f9b9df8ea63ffddf6f
BLAKE2b-256 c4595b81a23749c0fc95df992413e68e4d2f02c35f1efefed88d2d60df446170


File details

Details for the file chikvdf-1.1.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.

File metadata

File hashes

Hashes for chikvdf-1.1.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
Algorithm Hash digest
SHA256 7fb286c1269169f2cffb1321cdae1d6324a63162c70e81e20c259097ffec29dd
MD5 86868c3433e0904339c726182fb8527e
BLAKE2b-256 6ce7a5c7f04e9c78fbe06b8405eb766b4a4b2136dfd17ba649758379502cbc0d


File details

Details for the file chikvdf-1.1.0-cp311-cp311-macosx_11_0_x86_64.whl.

File metadata

File hashes

Hashes for chikvdf-1.1.0-cp311-cp311-macosx_11_0_x86_64.whl
Algorithm Hash digest
SHA256 d63e46436b32dee29c4d607c6ff63da89e10a1412fa6f05bbdc36d03ad84990c
MD5 d00eb6c0cb7818e7ad6500cca9589267
BLAKE2b-256 1198d8f04a9e81a449f7903ef87a151a394feacb10bd5ce0392adab932c030bc


File details

Details for the file chikvdf-1.1.0-cp310-cp310-win_amd64.whl.

File metadata

  • Download URL: chikvdf-1.1.0-cp310-cp310-win_amd64.whl
  • Upload date:
  • Size: 1.9 MB
  • Tags: CPython 3.10, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.6

File hashes

Hashes for chikvdf-1.1.0-cp310-cp310-win_amd64.whl
Algorithm Hash digest
SHA256 0e718271aa06cf3e9eb0a44b1968dc96dce4d2bef7b093888d5d0e7b3d629406
MD5 58f4b7cba8a46a4469d1b6d2b7e9ba81
BLAKE2b-256 19815a4114452e373d45a069a0e0af308f03a680784454d2153039d720758d35


File details

Details for the file chikvdf-1.1.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.

File metadata

File hashes

Hashes for chikvdf-1.1.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
Algorithm Hash digest
SHA256 4d0774fa4bc6b418c4efea5d7a0d081b1174f77ccabe655d4d3601412fa967a1
MD5 6b2723d62339dc9f13465370d84f38a2
BLAKE2b-256 1a887bc8ec7cbdb1c01cb3b1d85352ad662ca7c2f4f25cdd1f696b406b6bd476


File details

Details for the file chikvdf-1.1.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl.

File metadata

File hashes

Hashes for chikvdf-1.1.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl
Algorithm Hash digest
SHA256 1ef1dd39e79beb47cae76893720085c8324ad11b0f42478de29125f73af5ff86
MD5 a6b54a55580a36edcaa252d753dedba2
BLAKE2b-256 382bc133e0d92eaf957b46a8a820a7640f2c9a730f31d0c1860eae6bfbf63bef


File details

Details for the file chikvdf-1.1.0-cp310-cp310-macosx_11_0_x86_64.whl.

File metadata

File hashes

Hashes for chikvdf-1.1.0-cp310-cp310-macosx_11_0_x86_64.whl
Algorithm Hash digest
SHA256 74ca35b957677bced74f3ff0d1112016c0b0fb4982ffb29da4876d15afceff8a
MD5 b8608fea16a748816fa757de7bc5a88e
BLAKE2b-256 4edc8168f3f71e49db27b3810166da04861ed1fed79aaddf16342dbc95696554


File details

Details for the file chikvdf-1.1.0-cp39-cp39-win_amd64.whl.

File metadata

  • Download URL: chikvdf-1.1.0-cp39-cp39-win_amd64.whl
  • Upload date:
  • Size: 1.9 MB
  • Tags: CPython 3.9, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.6

File hashes

Hashes for chikvdf-1.1.0-cp39-cp39-win_amd64.whl
Algorithm Hash digest
SHA256 e873960cf7cbc9579e7efc3bd4c4acb1d7d74229bd14bfac9f9a50d88239e740
MD5 442ed6e1f29ba6c40f5797a2d4634fbe
BLAKE2b-256 d1a09f6b89211f2300c6213abd53858fd36f10739092182828f9571f49de803d


File details

Details for the file chikvdf-1.1.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.

File metadata

File hashes

Hashes for chikvdf-1.1.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
Algorithm Hash digest
SHA256 63dc7c1f33535f93e25082362e754f1b9223c96590050ab727d55fedcaa81974
MD5 e24fded3e142b5115f9e97b9637d9d39
BLAKE2b-256 6bb61771dffc74775d4869175dd97b48fbb6f5634b02bf97998c09d4f5fd86ab


File details

Details for the file chikvdf-1.1.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl.

File metadata

File hashes

Hashes for chikvdf-1.1.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl
Algorithm Hash digest
SHA256 f80f2d4bb75c3296636adb43e68a1288fef7f27878ad9e6130262907a69681ed
MD5 c1166364d02de0fdc9f2b45b73ec872b
BLAKE2b-256 bde2e0418483b6492ab709d06283229926447ee76bda924c9e5cf42eb09cc5fb


File details

Details for the file chikvdf-1.1.0-cp39-cp39-macosx_11_0_x86_64.whl.

File metadata

File hashes

Hashes for chikvdf-1.1.0-cp39-cp39-macosx_11_0_x86_64.whl
Algorithm Hash digest
SHA256 abdfb93cdaf6921681bc0256909e016bcb402ecee49758634325eae8f924b728
MD5 6a589628f1f45b1922ba9d28a3c5fa49
BLAKE2b-256 74569a1cbcd8da38dbfc1e2dc46ddc96751bbbcf7ddc0facd15450ac1a6e3dab


File details

Details for the file chikvdf-1.1.0-cp38-cp38-win_amd64.whl.

File metadata

  • Download URL: chikvdf-1.1.0-cp38-cp38-win_amd64.whl
  • Upload date:
  • Size: 1.9 MB
  • Tags: CPython 3.8, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.6

File hashes

Hashes for chikvdf-1.1.0-cp38-cp38-win_amd64.whl
Algorithm Hash digest
SHA256 9ce07c394929a0fef15b6f316a4ef3bd7095161c5b6f31531f96b1bf8b5fe4f6
MD5 ab885f906595a76bf3a8a8487fcd64c2
BLAKE2b-256 b5cb67b03426a21b470ab9cbf193be330397f96fc5cc7dcb7f490077b57476c3


File details

Details for the file chikvdf-1.1.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.

File metadata

File hashes

Hashes for chikvdf-1.1.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
Algorithm Hash digest
SHA256 ecdc296bfabd6f913f3326fb03969ea105ee178c89bb8c781125e2da277dc33d
MD5 7a7ed24c4593889f1f8f22d2e9596bb8
BLAKE2b-256 b0902b28353fc05dcc4de5882ae7e7209f81b5a397c2cf9f2972945403145907


File details

Details for the file chikvdf-1.1.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl.

File metadata

File hashes

Hashes for chikvdf-1.1.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl
Algorithm Hash digest
SHA256 3c71cdbbf8bfac3a974ab8285289dc39b49a7c9eb55dbd5bcc004409df147156
MD5 688b33a608d3c07f450101c21fffc168
BLAKE2b-256 a3ff700641f26c9e884f087cb87634e1b4db34357bf6d902050bb4b06a3806c2


File details

Details for the file chikvdf-1.1.0-cp38-cp38-macosx_11_0_x86_64.whl.

File metadata

File hashes

Hashes for chikvdf-1.1.0-cp38-cp38-macosx_11_0_x86_64.whl
Algorithm Hash digest
SHA256 8597ada3c286e064e0bd658fcb25c74c4b50eb12cda02fc842363e78054b8181
MD5 a750cf3d1a8bed02fb30dc57aa69b8fa
BLAKE2b-256 2dc93d6fbc84c9e39df6ec794969278b97d7b4508b6780131c6005dcd22df20a

