
Reinforcement Learning Framework

Project description



PARL is a flexible and highly efficient reinforcement learning framework.

About PARL

Features

Reproducible. We provide algorithms that reliably reproduce the results of many influential reinforcement learning papers.

Large Scale. Supports high-performance parallel training across thousands of CPUs and multiple GPUs.

Reusable. Algorithms in the repository can be adapted directly to a new task: define a forward network, and the training mechanism is built automatically.

Extensible. Build new algorithms quickly by inheriting the abstract class in the framework.

Abstractions

PARL aims to build an agent for training algorithms to perform complex tasks.
The agent is built from the following abstractions, each layered on the one before:

Model

Model constructs the forward network, such as a policy network or a critic network, that takes a state as input.

Algorithm

Algorithm describes how the parameters of a Model are updated and typically contains at least one Model.

Agent

Agent acts as a data bridge between the environment and the algorithm: it handles data I/O with the outside environment and preprocesses data before feeding it into the training process.

Note: For more information about base classes, please visit our tutorial and API documentation.
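The layering above can be sketched in plain Python. The class names below mirror PARL's three abstractions, but the bodies are simplified stand-ins for illustration only, not the real PARL API (the real base classes live in the parl package and wrap deep-learning networks).

```python
# Illustrative sketch of PARL's Model / Algorithm / Agent layering.
# All bodies are toy stand-ins, not PARL's actual implementations.

class Model:
    """Defines the forward network: maps a state to outputs (toy linear policy)."""
    def __init__(self, weight):
        self.weight = weight

    def forward(self, state):
        return [s * self.weight for s in state]

class Algorithm:
    """Owns at least one Model and defines how its parameters are updated."""
    def __init__(self, model, lr=0.1):
        self.model = model
        self.lr = lr

    def learn(self, gradient):
        # toy update rule standing in for, e.g., a policy-gradient step
        self.model.weight -= self.lr * gradient

class Agent:
    """Bridges environment and algorithm: data I/O plus preprocessing."""
    def __init__(self, algorithm):
        self.alg = algorithm

    def predict(self, raw_obs):
        state = [float(x) for x in raw_obs]  # preprocessing step
        return self.alg.model.forward(state)

agent = Agent(Algorithm(Model(weight=2.0)))
print(agent.predict([1, 2, 3]))  # [2.0, 4.0, 6.0]
```

Each layer only talks to the one directly below it, which is why swapping in a new forward network (Model) leaves the Algorithm and Agent code unchanged.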

Parallelization

PARL provides a compact API for distributed training, allowing users to transfer the code into a parallelized version by simply adding a decorator. For more information about our APIs for parallel training, please visit our documentation.
Here is a Hello World example to demonstrate how easy it is to leverage outer computation resources.

# Agent.py
import parl

@parl.remote_class
class Agent(object):

    def say_hello(self):
        print("Hello World!")

    def sum(self, a, b):
        return a + b

parl.connect('localhost:8037')
agent = Agent()
agent.say_hello()
ans = agent.sum(1, 5)  # runs remotely, without consuming any local computation resources

Two steps to use outer computation resources:

  1. Decorate a class with parl.remote_class. This turns it into a new class whose instances can run on other CPUs or machines.
  2. Call parl.connect to initialize parallel communication before creating an object. Calling any method of such objects does not consume local computation resources, since they execute elsewhere.
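The call-forwarding idea behind these two steps can be sketched locally. The toy decorator below (toy_remote_class is a made-up name for this sketch) only demonstrates the proxying pattern; real PARL additionally serializes calls and dispatches them to workers in a cluster.

```python
# Toy sketch of the proxying pattern behind a remote-class decorator.
# The "remote" object here is just a plain local object.

def toy_remote_class(cls):
    class Proxy:
        def __init__(self, *args, **kwargs):
            # stands in for instantiating the object on a remote worker
            self._remote_obj = cls(*args, **kwargs)

        def __getattr__(self, name):
            method = getattr(self._remote_obj, name)
            def call(*args, **kwargs):
                # in real PARL, arguments would be serialized and shipped
                # over the network, and the result shipped back
                return method(*args, **kwargs)
            return call
    return Proxy

@toy_remote_class
class Agent:
    def sum(self, a, b):
        return a + b

agent = Agent()
print(agent.sum(1, 5))  # 6
```

Because the proxy intercepts attribute access, user code is written exactly as if the object were local, which is what keeps the PARL API compact.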

As shown in the figure above, the real actors (orange circles) run on the CPU cluster, while the learner (blue circle) runs on the local GPU with several remote actors (yellow circles with dotted edges).

Users can write code in a simple way, just as they would write multi-threaded code, but with actors consuming remote resources. We also provide examples of parallelized algorithms such as IMPALA and A2C. For details on usage, please refer to these examples.

Installation

Dependencies

  • Python 3.6+ (Python 3.8+ is recommended for distributed training).
  • paddlepaddle>=2.3.1 (optional; not needed if you only use the parallelization APIs).

pip install parl

Getting Started

A few points to get you started:

For beginners who know little about reinforcement learning, we also provide an introductory course: ( Video | Code )

Examples



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

parl-2.1.tar.gz (527.5 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

parl-2.1-py3-none-any.whl (631.3 kB view details)

Uploaded Python 3

File details

Details for the file parl-2.1.tar.gz.

File metadata

  • Download URL: parl-2.1.tar.gz
  • Upload date:
  • Size: 527.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.8.13

File hashes

Hashes for parl-2.1.tar.gz
Algorithm Hash digest
SHA256 071babc0fd29cc347e47ca015b4194c71dedef9b80373fa9a8547b32c702f65d
MD5 47346c0eb315ea2c1d9fd27184e7f5a2
BLAKE2b-256 05b70092b90afb52dfce857e178bf7074f810b7f1e865591dad454586c22982e

See more details on using hashes here.
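One way to use the published digests is to hash the downloaded file yourself and compare. The helper below is a generic sketch using Python's standard hashlib; the file name and expected SHA256 value are the ones listed above for the source distribution.

```python
# Verify a downloaded file against a published SHA256 digest.
import hashlib

def sha256_of(path, chunk_size=8192):
    """Compute the SHA256 hex digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "071babc0fd29cc347e47ca015b4194c71dedef9b80373fa9a8547b32c702f65d"
# After downloading the archive:
# print(sha256_of("parl-2.1.tar.gz") == expected)
```

Reading in chunks keeps memory use constant regardless of file size, which matters for larger distributions.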

File details

Details for the file parl-2.1-py3-none-any.whl.

File metadata

  • Download URL: parl-2.1-py3-none-any.whl
  • Upload date:
  • Size: 631.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.8.13

File hashes

Hashes for parl-2.1-py3-none-any.whl
Algorithm Hash digest
SHA256 129e7ed0fea3fb3227616e0b3d47361b3f944e4369e010bd9758750807b82395
MD5 3f0204631c9b150ebf4d4b1758ca3431
BLAKE2b-256 24086cd16396b3b292b81a74dfc7feb2d15bb4016a24dbb81cc803c128a4e522

