
A subpackage of Ray which provides the Ray C++ API.

Project description


Ray provides a simple, universal API for building distributed applications.

Ray is packaged with the following libraries for accelerating machine learning workloads:

  • Tune: Scalable Hyperparameter Tuning

  • RLlib: Scalable Reinforcement Learning

  • Train: Distributed Deep Learning (beta)

  • Datasets: Distributed Data Loading and Compute

As well as libraries for taking ML and distributed apps to production:

  • Serve: Scalable and Programmable Serving

  • Workflows: Fast, Durable Application Flows (alpha)

There are also many community integrations with Ray, including Dask, MARS, Modin, Horovod, Hugging Face, Scikit-learn, and others. Check out the full list of Ray distributed libraries here.

Install Ray with: pip install ray. For nightly wheels, see the Installation page.

Quick Start

Execute Python functions in parallel.

import ray
ray.init()

@ray.remote
def f(x):
    return x * x

futures = [f.remote(i) for i in range(4)]
print(ray.get(futures))  # [0, 1, 4, 9]

To use Ray’s actor model:

import ray
ray.init()

@ray.remote
class Counter(object):
    def __init__(self):
        self.n = 0

    def increment(self):
        self.n += 1

    def read(self):
        return self.n

counters = [Counter.remote() for i in range(4)]
[c.increment.remote() for c in counters]
futures = [c.read.remote() for c in counters]
print(ray.get(futures))  # [1, 1, 1, 1]

Ray programs can run on a single machine, and can also seamlessly scale to large clusters. To execute the above Ray script in the cloud, just download this configuration file, and run:

ray submit [CLUSTER.YAML] example.py --start

Read more about launching clusters.

Tune Quick Start

https://github.com/ray-project/ray/raw/master/doc/source/images/tune-wide.png

Tune is a library for hyperparameter tuning at any scale.

To run this example, you will need to install the following:

$ pip install "ray[tune]"

This example runs a parallel grid search to optimize an example objective function.

from ray import tune


def objective(step, alpha, beta):
    return (0.1 + alpha * step / 100)**(-1) + beta * 0.1


def training_function(config):
    # Hyperparameters
    alpha, beta = config["alpha"], config["beta"]
    for step in range(10):
        # Iterative training function - can be any arbitrary training procedure.
        intermediate_score = objective(step, alpha, beta)
        # Feed the score back to Tune.
        tune.report(mean_loss=intermediate_score)


analysis = tune.run(
    training_function,
    config={
        "alpha": tune.grid_search([0.001, 0.01, 0.1]),
        "beta": tune.choice([1, 2, 3])
    })

print("Best config: ", analysis.get_best_config(metric="mean_loss", mode="min"))

# Get a dataframe for analyzing trial results.
df = analysis.results_df
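
To build intuition for what the search is optimizing, the objective can be evaluated directly in plain Python (no Tune needed). At the final step, a larger `alpha` shrinks the reciprocal term, so for a fixed `beta` the grid's largest `alpha` gives the lowest loss:

```python
def objective(step, alpha, beta):
    return (0.1 + alpha * step / 100) ** (-1) + beta * 0.1

# Loss at the last training step (step=9) for each alpha in the grid,
# holding beta fixed at its smallest choice.
losses = {alpha: objective(9, alpha, 1) for alpha in [0.001, 0.01, 0.1]}
best_alpha = min(losses, key=losses.get)
print(best_alpha)  # 0.1
```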

If TensorBoard is installed, automatically visualize all trial results:

tensorboard --logdir ~/ray_results

RLlib Quick Start

https://github.com/ray-project/ray/raw/master/doc/source/rllib/images/rllib-logo.png

RLlib is an industry-grade library for reinforcement learning (RL), built on top of Ray. It offers high scalability and unified APIs for a variety of industry and research applications.

$ pip install "ray[rllib]" tensorflow  # or torch
import gym
from ray.rllib.agents.ppo import PPOTrainer


# Define your problem using Python and OpenAI's gym API:
class SimpleCorridor(gym.Env):
    """Corridor in which an agent must learn to move right to reach the exit.

    ---------------------
    | S | 1 | 2 | 3 | G |   S=start; G=goal; corridor_length=5
    ---------------------

    Possible actions to choose from are: 0=left; 1=right
    Observations are floats indicating the current field index, e.g. 0.0 for
    the starting position, 1.0 for the field next to the starting position, etc.
    Rewards are -0.1 for all steps, except when reaching the goal (+1.0).
    """

    def __init__(self, config):
        self.end_pos = config["corridor_length"]
        self.cur_pos = 0
        self.action_space = gym.spaces.Discrete(2)  # left and right
        self.observation_space = gym.spaces.Box(0.0, self.end_pos, shape=(1,))

    def reset(self):
        """Resets the episode and returns the initial observation of the new one.
        """
        self.cur_pos = 0
        # Return initial observation.
        return [self.cur_pos]

    def step(self, action):
        """Takes a single step in the episode given `action`.

        Returns:
            New observation, reward, done-flag, info-dict (empty).
        """
        # Walk left.
        if action == 0 and self.cur_pos > 0:
            self.cur_pos -= 1
        # Walk right.
        elif action == 1:
            self.cur_pos += 1
        # Set `done` flag when end of corridor (goal) reached.
        done = self.cur_pos >= self.end_pos
        # +1.0 when the goal is reached, otherwise -0.1.
        reward = 1.0 if done else -0.1
        return [self.cur_pos], reward, done, {}


# Create an RLlib Trainer instance.
trainer = PPOTrainer(
    config={
        # Env class to use (here: our gym.Env sub-class from above).
        "env": SimpleCorridor,
        # Config dict to be passed to our custom env's constructor.
        "env_config": {
            # Use corridor with 20 fields (including S and G).
            "corridor_length": 20
        },
        # Parallelize environment rollouts.
        "num_workers": 3,
    })

# Train for n iterations and report results (mean episode rewards).
# Since we have to move right 20 times in the env to reach the goal
# (`done` requires cur_pos >= end_pos = 20) and each move gives us -0.1
# reward (except the final move: +1.0), we can expect an optimal episode
# reward of -0.1 * 19 + 1.0 = -0.9.
for i in range(5):
    results = trainer.train()
    print(f"Iter: {i}; avg. reward={results['episode_reward_mean']}")
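
As a sanity check on the reward arithmetic, the always-walk-right policy can be simulated without RLlib or gym. With `end_pos=20` the agent needs 20 right moves, so the optimal episode reward works out to -0.1 * 19 + 1.0 = -0.9. A standalone sketch of the same step logic (`rollout_right` is an illustrative helper, not part of RLlib):

```python
def rollout_right(corridor_length):
    """Total reward in SimpleCorridor for an agent that always walks right."""
    pos, total, done = 0, 0.0, False
    while not done:
        pos += 1  # action 1: walk right
        done = pos >= corridor_length
        total += 1.0 if done else -0.1
    return round(total, 10)

print(rollout_right(20))  # -0.9
```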

After training, you may want to perform action computations (inference) in your environment. Here is a minimal example of how to do this. Also check out our more detailed examples here (in particular for normal models, LSTMs, and attention nets).

# Perform inference (action computations) based on given env observations.
# Note that we are using a slightly different env here (len 10 instead of 20),
# however, this should still work as the agent has (hopefully) learned
# to "just always walk right!"
env = SimpleCorridor({"corridor_length": 10})
# Get the initial observation (should be: [0.0] for the starting position).
obs = env.reset()
done = False
total_reward = 0.0
# Play one episode.
while not done:
    # Compute a single action, given the current observation
    # from the environment.
    action = trainer.compute_single_action(obs)
    # Apply the computed action in the environment.
    obs, reward, done, info = env.step(action)
    # Sum up rewards for reporting purposes.
    total_reward += reward
# Report results.
print(f"Played 1 episode; total-reward={total_reward}")

Ray Serve Quick Start

https://raw.githubusercontent.com/ray-project/ray/master/doc/source/serve/logo.svg

Ray Serve is a scalable model-serving library built on Ray. It is:

  • Framework Agnostic: Use the same toolkit to serve everything from deep learning models built with frameworks like PyTorch, TensorFlow, and Keras to scikit-learn models or arbitrary business logic.

  • Python First: Configure your model serving declaratively in pure Python, without needing YAMLs or JSON configs.

  • Performance Oriented: Turn on batching, pipelining, and GPU acceleration to increase the throughput of your model.

  • Composition Native: Allows you to create “model pipelines” by composing multiple models together to drive a single prediction.

  • Horizontally Scalable: Serve can linearly scale as you add more machines. Enable your ML-powered service to handle growing traffic.

To run this example, you will need to install the following:

$ pip install scikit-learn
$ pip install "ray[serve]"

This example serves a scikit-learn gradient boosting classifier.

import pickle
import requests

from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier

from ray import serve

serve.start()

# Train model.
iris_dataset = load_iris()
model = GradientBoostingClassifier()
model.fit(iris_dataset["data"], iris_dataset["target"])

@serve.deployment(route_prefix="/iris")
class BoostingModel:
    def __init__(self, model):
        self.model = model
        self.label_list = iris_dataset["target_names"].tolist()

    async def __call__(self, request):
        payload = (await request.json())["vector"]
        print(f"Received request with data {payload}")

        prediction = self.model.predict([payload])[0]
        human_name = self.label_list[prediction]
        return {"result": human_name}


# Deploy model.
BoostingModel.deploy(model)

# Query it!
sample_request_input = {"vector": [1.2, 1.0, 1.1, 0.9]}
response = requests.get("http://localhost:8000/iris", json=sample_request_input)
print(response.text)
# Result:
# {
#  "result": "versicolor"
# }
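
The handler's post-processing can be understood without running Serve: the client sends a JSON body with a `vector` key, and the integer class predicted by the model indexes into the iris label names. A dependency-free sketch of that mapping (labels hard-coded in the dataset's order; `to_response` is an illustrative helper):

```python
# target_names from the iris dataset, in label order.
label_list = ["setosa", "versicolor", "virginica"]

def to_response(prediction: int) -> dict:
    # Mirrors the last two lines of BoostingModel.__call__.
    return {"result": label_list[prediction]}

print(to_response(1))  # {'result': 'versicolor'}
```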

Getting Involved

  • Forum: For discussions about development, questions about usage, and feature requests.

  • GitHub Issues: For reporting bugs.

  • Twitter: Follow updates on Twitter.

  • Slack: Join our Slack channel.

  • Meetup Group: Join our meetup group.

  • StackOverflow: For questions about how to use Ray.


Built Distributions


  • ray_cpp-1.12.0-cp39-cp39-win_amd64.whl (18.4 MB): CPython 3.9, Windows x86-64

  • ray_cpp-1.12.0-cp39-cp39-manylinux2014_x86_64.whl (30.9 MB): CPython 3.9, manylinux2014 x86-64

  • ray_cpp-1.12.0-cp39-cp39-macosx_11_0_arm64.whl (26.6 MB): CPython 3.9, macOS 11.0+ ARM64

  • ray_cpp-1.12.0-cp39-cp39-macosx_10_15_x86_64.whl (28.9 MB): CPython 3.9, macOS 10.15+ x86-64

  • ray_cpp-1.12.0-cp38-cp38-win_amd64.whl (18.4 MB): CPython 3.8, Windows x86-64

  • ray_cpp-1.12.0-cp38-cp38-manylinux2014_x86_64.whl (30.9 MB): CPython 3.8, manylinux2014 x86-64

  • ray_cpp-1.12.0-cp38-cp38-macosx_11_0_arm64.whl (26.6 MB): CPython 3.8, macOS 11.0+ ARM64

  • ray_cpp-1.12.0-cp38-cp38-macosx_10_15_x86_64.whl (28.9 MB): CPython 3.8, macOS 10.15+ x86-64

  • ray_cpp-1.12.0-cp37-cp37m-win_amd64.whl (18.4 MB): CPython 3.7m, Windows x86-64

  • ray_cpp-1.12.0-cp37-cp37m-manylinux2014_x86_64.whl (30.9 MB): CPython 3.7m, manylinux2014 x86-64

  • ray_cpp-1.12.0-cp37-cp37m-macosx_10_15_intel.whl (28.9 MB): CPython 3.7m, macOS 10.15+ Intel (x86-64, i386)

  • ray_cpp-1.12.0-cp36-cp36m-win_amd64.whl (18.4 MB): CPython 3.6m, Windows x86-64

  • ray_cpp-1.12.0-cp36-cp36m-manylinux2014_x86_64.whl (30.9 MB): CPython 3.6m, manylinux2014 x86-64

  • ray_cpp-1.12.0-cp36-cp36m-macosx_10_15_intel.whl (28.9 MB): CPython 3.6m, macOS 10.15+ Intel (x86-64, i386)

