
The RWTH extensible training framework for universal recurrent neural networks

Project description

GitHub repository. RETURNN paper 2016, RETURNN paper 2018.

RETURNN, the RWTH extensible training framework for universal recurrent neural networks, is a PyTorch/TensorFlow-based implementation of modern recurrent neural network architectures. It is optimized for fast and reliable training of recurrent neural networks in a multi-GPU environment.

The high-level features and goals of RETURNN are:

  • Simplicity

    • Writing config / code is simple & straightforward (setting up an experiment, defining a model; see the config sketch after this list)

    • Debugging in case of problems is simple

    • Reading config / code is simple (defined model, training, decoding all becomes clear)

  • Flexibility

    • Allow for many different kinds of experiments / models

  • Efficiency

    • Training speed

    • Decoding speed

All of these items are important for research; decoding speed in particular is important for production.
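To illustrate the config style, here is a minimal sketch of a RETURNN config (a RETURNN config is a plain Python file; the dimensions and layer options below are illustrative placeholders, not a complete working setup, and exact option names can vary between RETURNN versions — see the documentation):

    # Minimal RETURNN config sketch; all values below are toy placeholders.
    task = "train"

    num_inputs = 40    # input feature dimension
    num_outputs = 10   # number of output classes

    # The network is defined as a dict of layers.
    network = {
        "lstm": {"class": "rec", "unit": "lstm", "n_out": 512, "from": "data"},
        "output": {"class": "softmax", "loss": "ce", "from": "lstm"},
    }

    batch_size = 5000
    learning_rate = 0.001
    optimizer = {"class": "adam"}
    num_epochs = 20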

See our Interspeech 2020 tutorial “Efficient and Flexible Implementation of Machine Learning for ASR and MT” video (slides) for an introduction to the core concepts.

More specific features include:

  • Mini-batch training of feed-forward neural networks

  • Sequence-chunking based batch training for recurrent neural networks (see the chunking sketch after this list)

  • Long short-term memory recurrent neural networks, including our own fast CUDA kernel

  • Multidimensional LSTM (GPU only, there is no CPU version)

  • Memory management for large data sets

  • Work distribution across multiple devices

  • Flexible and fast architecture which allows all kinds of encoder-attention-decoder models
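For example, both sequence chunking and the choice of LSTM kernel are plain config options (a hedged sketch; option names as in recent RETURNN versions — check the documentation for the version you use):

    # Sequence-chunking based batch training: cut sequences into chunks
    # of 100 frames, advancing by 50 frames between chunks.
    chunking = "100:50"

    # The LSTM implementation is selected per recurrent layer via "unit";
    # "nativelstm2" selects RETURNN's own fast CUDA kernel.
    network = {
        "lstm": {"class": "rec", "unit": "nativelstm2", "n_out": 512, "from": "data"},
        "output": {"class": "softmax", "loss": "ce", "from": "lstm"},
    }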

See documentation. See basic usage and technological overview.

Here is the video recording of a RETURNN overview talk (slides, exercise sheet; hosted by eBay).

There are many example demos which work on artificially generated data, i.e. they should work as-is.

There are some real-world examples such as setups for speech recognition on the Switchboard or LibriSpeech corpus.

Some benchmark setups against other frameworks can be found here. The results are in the RETURNN paper 2016. Performance benchmarks of our LSTM kernel vs CuDNN and other TensorFlow kernels are in TensorFlow LSTM benchmark.

There is also a wiki. Questions can also be asked on StackOverflow using the RETURNN tag.

CI status badge: https://github.com/rwth-i6/returnn/workflows/CI/badge.svg

Dependencies

pip dependencies are listed in requirements.txt and requirements-dev, although some parts of the code may require additional dependencies (e.g. librosa, resampy), which are only needed on demand.

RETURNN supports Python >= 3.8. Bumps to the minimum Python version are listed in CHANGELOG.md.

TensorFlow-based setups require TensorFlow >= 2.2.

PyTorch-based setups require PyTorch >= 1.0.
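Which requirement applies depends on the backend a setup selects. In recent RETURNN versions this is itself a config option (a sketch; "backend" is the option name in current RETURNN, while older TensorFlow setups used use_tensorflow = True):

    # Select the computation backend in the RETURNN config.
    backend = "torch"         # PyTorch-based setup (PyTorch >= 1.0)
    # backend = "tensorflow"  # TensorFlow-based setup (TensorFlow >= 2.2)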

Project details


Release history

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

returnn-1.20260212.162227.tar.gz (2.4 MB)

Uploaded: Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

returnn-1.20260212.162227-py3-none-any.whl (1.6 MB)

Uploaded: Python 3

File details

Details for the file returnn-1.20260212.162227.tar.gz.

File metadata

  • Download URL: returnn-1.20260212.162227.tar.gz
  • Upload date:
  • Size: 2.4 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.9.25

File hashes

Hashes for returnn-1.20260212.162227.tar.gz
SHA256: ecc7e0e07b3ec89cd2ad2678087d51b3158d277fcbdb4b6f4af32c4341740d5c
MD5: a35ed7d9496c1ddff717c5de4080d37f
BLAKE2b-256: 4bb3e371b1d57453cb83d2fe1af3efe949de54865230c45329be3349d2f96947

See more details on using hashes here.
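For instance, the SHA256 digest above can be checked locally with Python's hashlib (a minimal sketch; it assumes the sdist has been downloaded into the current directory):

    import hashlib

    # Expected SHA256 for returnn-1.20260212.162227.tar.gz, as listed above.
    expected = "ecc7e0e07b3ec89cd2ad2678087d51b3158d277fcbdb4b6f4af32c4341740d5c"

    with open("returnn-1.20260212.162227.tar.gz", "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    assert digest == expected, f"SHA256 mismatch: {digest}"
    print("sha256 OK")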

File details

Details for the file returnn-1.20260212.162227-py3-none-any.whl.

File metadata

File hashes

Hashes for returnn-1.20260212.162227-py3-none-any.whl
SHA256: 2084e2858f28e501cbdbd95fbff378a7b6ff9078927e93f8ab30b3bb08cd5129
MD5: b63044f50782ddfe83c5f71672f26bfb
BLAKE2b-256: c69151053e08e7dd913f6d0d8aa4163618a99a57e463f453e046827dd8a4cda4

See more details on using hashes here.
