Learn more
Here are some talks, papers, and press coverage involving Ray and its
libraries. Please raise an issue if any of the links below are broken,
or if you'd like to add your own talk!
Blog and Press
- Modern Parallel and Distributed Python: A Quick Tutorial on Ray
- Why Every Python Developer Will Love Ray
- Ray: A Distributed System for AI (BAIR)
- 10x Faster Parallel Python Without Python Multiprocessing
- Implementing A Parameter Server in 15 Lines of Python with Ray
- Ray Distributed AI Framework Curriculum
- RayOnSpark: Running Emerging AI Applications on Big Data Clusters with Ray and Analytics Zoo
- First user tips for Ray
- [Tune] Tune: a Python library for fast hyperparameter tuning at any scale
- [Tune] Cutting edge hyperparameter tuning with Ray Tune
- [RLlib] New Library Targets High Speed Reinforcement Learning
- [RLlib] Scaling Multi-Agent Reinforcement Learning
- [RLlib] Functional RL with Keras and TensorFlow Eager
- [Modin] How to Speed up Pandas by 4x with one line of code
- [Modin] Quick Tip -- Speed up Pandas using Modin
- Ray Blog
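Several of the posts above are hands-on tutorials for Ray's core task API. As a flavor of what they cover, here is a minimal, self-contained sketch (not taken from any of the linked posts; the toy `square` function is purely illustrative) that parallelizes a function with `@ray.remote`:

```python
import ray

ray.init()  # start a local, single-node Ray runtime

@ray.remote
def square(x):
    # Each call runs as an asynchronous Ray task, potentially in parallel.
    return x * x

# Calling .remote() launches a task and immediately returns a future (ObjectRef).
futures = [square.remote(i) for i in range(4)]

# ray.get blocks until the tasks finish, then fetches their results.
print(ray.get(futures))  # [0, 1, 4, 9]
```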
Talks (Videos)
- [Unifying Large Scale Data Preprocessing and Machine Learning Pipelines with Ray Datasets | PyData 2021] (slides)
- [Programming at any Scale with Ray | SF Python Meetup Sept 2019]
- [Ray for Reinforcement Learning | Data Council 2019]
- Scaling Interactive Pandas Workflows with Modin
- [Ray: A Distributed Execution Framework for AI | SciPy 2018]
- [Ray: A Cluster Computing Engine for Reinforcement Learning Applications | Spark Summit]
- [RLlib: Ray Reinforcement Learning Library | RISECamp 2018]
- [Enabling Composition in Distributed Reinforcement Learning | Spark Summit 2018]
- [Tune: Distributed Hyperparameter Search | RISECamp 2018]
Slides
Papers
- Ray 1.0 Architecture whitepaper (new)
- Ray Design Patterns (new)
- RLlib paper
- RLlib flow paper
- Tune paper
Older papers:
- Ray paper
- Ray HotOS paper