
The RWTH extensible training framework for universal recurrent neural networks

Project description

GitHub repository. RETURNN paper 2016, RETURNN paper 2018.

RETURNN, the RWTH extensible training framework for universal recurrent neural networks, is a PyTorch/TensorFlow-based implementation of modern recurrent neural network architectures. It is optimized for fast and reliable training of recurrent neural networks in a multi-GPU environment.
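
To give a flavor of the config style, here is a minimal sketch of a training config. It is a sketch only: the dataset paths, layer names, and hyperparameters below are illustrative assumptions, not a tested setup.

    # Minimal RETURNN config sketch. A config is a Python file of global options.
    use_tensorflow = True
    task = "train"

    # Datasets are described by dicts; HDFDataset is one built-in dataset type.
    train = {"class": "HDFDataset", "files": ["train.hdf"]}
    dev = {"class": "HDFDataset", "files": ["dev.hdf"]}

    # The network is a dict of layers: a recurrent LSTM layer reading the input,
    # and a softmax output layer trained with cross-entropy.
    network = {
        "lstm": {"class": "rec", "unit": "lstm", "n_out": 512, "from": "data"},
        "output": {"class": "softmax", "loss": "ce", "from": "lstm"},
    }

    batch_size = 5000       # max number of frames per mini-batch
    learning_rate = 0.001
    num_epochs = 100
    model = "/tmp/returnn-models/net"  # checkpoint file prefix (illustrative)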

The high-level features and goals of RETURNN are:

  • Simplicity

    • Writing config / code is simple & straight-forward (setting up experiment, defining model)

    • Debugging in case of problems is simple

    • Reading config / code is simple (defined model, training, decoding all becomes clear)

  • Flexibility

    • Allow for many different kinds of experiments / models

  • Efficiency

    • Training speed

    • Decoding speed

All of these are important for research; decoding speed is especially important for production.

See our Interspeech 2020 tutorial “Efficient and Flexible Implementation of Machine Learning for ASR and MT” video (slides) for an introduction to the core concepts.

More specific features include:

  • Mini-batch training of feed-forward neural networks

  • Sequence-chunking based batch training for recurrent neural networks (see the chunking sketch after this list)

  • Long short-term memory recurrent neural networks including our own fast CUDA kernel

  • Multidimensional LSTM (GPU only, there is no CPU version)

  • Memory management for large data sets

  • Work distribution across multiple devices

  • Flexible and fast architecture which allows all kinds of encoder-attention-decoder models
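
As a concrete illustration of the sequence-chunking option referenced in the list above, a config would set the documented "size:step" shorthand; the numbers here are illustrative assumptions:

    # Cut every training sequence into overlapping chunks of 50 frames,
    # advancing 25 frames per chunk, so long sequences can be batched as
    # fixed-size pieces for recurrent training.
    chunking = "50:25"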

See documentation. See basic usage and technological overview.

Here is the video recording of a RETURNN overview talk (slides, exercise sheet; hosted by eBay).

There are many example demos which work on artificially generated data, so they should work as-is.

There are some real-world examples such as setups for speech recognition on the Switchboard or LibriSpeech corpus.

Some benchmark setups against other frameworks can be found here. The results are in the RETURNN paper 2016. Performance benchmarks of our LSTM kernel vs CuDNN and other TensorFlow kernels are in TensorFlow LSTM benchmark.

There is also a wiki. Questions can also be asked on StackOverflow using the RETURNN tag.


Dependencies

The pip dependencies are listed in requirements.txt and requirements-dev, although some parts of the code may require additional dependencies (e.g. librosa, resampy) on demand.

RETURNN supports Python >= 3.8. Bumps to the minimum Python version are listed in CHANGELOG.md.

TensorFlow-based setups require TensorFlow >= 2.2.

PyTorch-based setups require PyTorch >= 1.0.
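
A quick way to check these requirements in your own environment is a sketch like the following; it only reports versions and assumes at least one backend may be installed:

    # Check the documented minimum versions for RETURNN.
    import sys

    assert sys.version_info >= (3, 8), "RETURNN requires Python >= 3.8"
    try:
        import tensorflow as tf
        print("TensorFlow", tf.__version__)  # TF-based setups need >= 2.2
    except ImportError:
        pass
    try:
        import torch
        print("PyTorch", torch.__version__)  # PyTorch-based setups need >= 1.0
    except ImportError:
        pass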


Download files

Download the file for your platform.

Source Distribution

returnn-1.20260211.100735.tar.gz (2.4 MB)

Uploaded Source

Built Distribution


returnn-1.20260211.100735-py3-none-any.whl (1.5 MB)

Uploaded Python 3

File details

Details for the file returnn-1.20260211.100735.tar.gz.

File metadata

  • Download URL: returnn-1.20260211.100735.tar.gz
  • Upload date:
  • Size: 2.4 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.9.25

File hashes

Hashes for returnn-1.20260211.100735.tar.gz

  Algorithm    Hash digest
  SHA256       13728732940e8668b16be0141b89e03675a4b80d6714a57af3d8fc400270b8fd
  MD5          d3c5f339da72cee27ee756e14ed7e396
  BLAKE2b-256  6c3619b9c718be8797d594b8b084bf89185924dda4587fc167bd9afcf6dea1be

See more details on using hashes here.
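
As a minimal sketch (not the official tooling), the downloaded source distribution can be checked against the SHA256 digest listed above like this:

    # Verify the source distribution against the published SHA256 digest.
    import hashlib

    expected = "13728732940e8668b16be0141b89e03675a4b80d6714a57af3d8fc400270b8fd"
    with open("returnn-1.20260211.100735.tar.gz", "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    print("OK" if digest == expected else "hash mismatch!")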

File details

Details for the file returnn-1.20260211.100735-py3-none-any.whl.

File metadata

File hashes

Hashes for returnn-1.20260211.100735-py3-none-any.whl

  Algorithm    Hash digest
  SHA256       7f6d0541667d63fc5ada6db779b0adfae1b236b4efb99bd7a97846691d301a4f
  MD5          098f1cb4268db91e981169e051e92634
  BLAKE2b-256  9ca6f379d576357e9d8e5302c34f46e673366454c90d7f46b06ca03d0bc96a8f

See more details on using hashes here.
