This is an implementation of concepts and algorithms described in "Reinforcement Learning: An Introduction" (Sutton and Barto, 2018, 2nd edition). It is a work in progress, implemented with the following objectives in mind.
- Complete conceptual and algorithmic coverage: Implement all concepts and algorithms described in the text, plus a few others.
- Minimal dependencies: All computation specific to the text is implemented here.
- Complete test coverage: All implementations are paired with unit tests.
- General-purpose design: The text provides concise pseudocode that is straightforward to implement for the examples covered; however, such implementations do not necessarily lead to reusable, extensible code that applies beyond those examples. The approach taken here is intended to be generally applicable well beyond the text.
Please see the project website for a nicer version of this page.
For single-click access to a graphical interface for RLAI, please click below:
Note that Binder notebooks are hosted for free by sponsors who donate computational infrastructure. Limitations are placed on each notebook, so don't expect the Binder interface to support heavy workloads. See the following section for alternatives.
RLAI requires swig and ffmpeg to be installed on the system. These can be installed using a package manager on your OS (e.g., Homebrew for macOS, apt for Ubuntu, etc.); example commands are sketched below. If installing with Homebrew on macOS, then you might need to add an environment variable pointing to ffmpeg as follows:

```shell
echo 'export IMAGEIO_FFMPEG_EXE="/opt/homebrew/bin/ffmpeg"' >> ~/.bash_profile
```
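For reference, the package-manager installs mentioned above might look like the following. This is a sketch, not a definitive command list; package names and sudo requirements can differ by system and OS version.

```shell
# macOS (Homebrew)
brew install swig ffmpeg

# Ubuntu (apt)
sudo apt update
sudo apt install -y swig ffmpeg
```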
The RLAI code is distributed via PyPI. There are several ways to use the package.
- JupyterLab notebook: Most of the RLAI functionality is exposed via the companion JupyterLab notebook. See the JupyterLab guide for more information.
- Package dependency: See the example repository for how a project can be structured to consume the RLAI package functionality within source code. A minimal installation sketch follows this list.
- Command-line interface: Using RLAI from the command-line interface (CLI) is demonstrated in the case studies below and is also explored in the CLI guide.
- Raspberry Pi: See here for how to use RLAI on a Raspberry Pi system.
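For the package-dependency route, a minimal sketch is shown below. The PyPI package name `rlai` is assumed to match the project name referenced above; in a real project you would typically pin a specific version in your requirements or pyproject file rather than installing ad hoc.

```shell
# Sketch: add RLAI to a project's environment from PyPI.
pip install rlai
```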
Looking for a place to dig in? Below are a few ideas organized by area of interest.
- Explore new Gym environments: Gym provides a wide range of interesting environments, and experimenting with them can be as simple as modifying an existing training command (e.g., the one for the inverted pendulum) and replacing the `--gym-id` with something else. Other changes might be needed depending on the environment, but Gym is particularly convenient.
- Incorporate new statistical learning methods: The RLAI SKLearnSGD module demonstrates how to use methods in scikit-learn (in this case, stochastic gradient descent regression) to approximate state-action value functions. This is just one approach, and it would be interesting to compare time, memory, and reward performance with a nonparametric approach like KNN regression (see the sketch following this list).
- Feel free to ask questions, submit issues, and submit pull requests.
- Diagnostic and interpretation tools: Diagnostic and interpretation tools become critical as the environment and agent increase in complexity (e.g., from tabular methods in small, discrete-space gridworlds to value function approximation methods in large, continuous-space control problems). Such tools can be found here.
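As a concrete illustration of the statistical-learning idea above, here is a minimal, self-contained sketch (not RLAI code) comparing a parametric SGD regressor with a nonparametric KNN regressor on a toy state-action value dataset. The feature layout and target function are made up for illustration; within RLAI itself, the SKLearnSGD module is the place to look.

```python
# Minimal sketch (not RLAI code): compare SGD regression with KNN regression for
# approximating a toy state-action value function. The synthetic data below stands in
# for (state, action) features and observed returns.
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(12345)

# Synthetic features: 4 state dimensions plus 1 action dimension.
X = rng.normal(size=(5000, 5))

# Toy "true" action value: nonlinear in the state, linear in the action, plus noise.
y = np.sin(X[:, 0]) + X[:, 1] ** 2 - 0.5 * X[:, 4] + rng.normal(scale=0.1, size=5000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=12345)

for name, model in [
    ("SGD (parametric)", SGDRegressor(max_iter=1000, tol=1e-3)),
    ("KNN (nonparametric)", KNeighborsRegressor(n_neighbors=10)),
]:
    model.fit(X_train, y_train)
    mse = mean_squared_error(y_test, model.predict(X_test))
    print(f"{name}: test MSE = {mse:.3f}")
```

In addition to accuracy, a fuller comparison along the lines suggested above would also track fit/predict time and memory, since the nonparametric model stores the training data while the SGD model compresses it into a fixed set of weights.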
The gridworld and other simple environments (e.g., gambler's problem) are used throughout the package to develop, implement, and test algorithmic concepts. Sutton and Barto do a nice job of explaining how reinforcement learning works for these environments. Below is a list of environments that are not covered in as much detail (e.g., the mountain car) or are not covered at all (e.g., Robocode). They are more difficult to train agents for and are instructive for understanding how agents are parameterized and rewarded.
Gymnasium is a collection of environments that range from traditional control to advanced robotics. Case studies have been developed for the following environments, which are ordered roughly by increasing complexity:
- Inverted Pendulum
- Acrobot
- Mountain Car
- Mountain Car with Continuous Control
- Lunar Lander with Continuous Control
- MuJoCo Swimming Worm with Continuous Control
  - A follow-up using process-level parallelization for faster, better results.
  - See the MuJoCo section below for tips on installing MuJoCo.
RLAI works with MuJoCo either via Gymnasium, as described above, or directly via the MuJoCo-provided Python bindings. On macOS, see here for how to fix OpenGL errors.
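As a rough sketch of the direct-bindings route (not specific to RLAI), the official `mujoco` Python package can be driven as follows. The MJCF model string here is a made-up minimal example used only to show the load/step/read cycle.

```python
# Minimal sketch of the MuJoCo Python bindings (not RLAI-specific): build a trivial
# model, step the physics, and read back state. The MJCF string is illustrative only.
import mujoco

MODEL_XML = """
<mujoco>
  <worldbody>
    <body name="ball" pos="0 0 1">
      <freejoint/>
      <geom type="sphere" size="0.1" mass="1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(MODEL_XML)
data = mujoco.MjData(model)

# Step the simulation for one simulated second and print the ball's height (z).
steps = int(1.0 / model.opt.timestep)
for _ in range(steps):
    mujoco.mj_step(model, data)

print(f"ball height after 1 s: {data.qpos[2]:.3f} m")
```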
Robocode is a simulation-based robotic combat programming game with a dynamically rich environment, multi-agent teaming, and a large user community. Read more here.
A list of figures can be found here. Most of these are reproductions of those shown in the Sutton and Barto text; however, even the reproductions typically provide detail not shown in the text.
See here.
- Begin the next prerelease number within the current prerelease phase (e.g., `0.1.0a0` → `0.1.0a1`):

  ```shell
  OLD_VERSION=$(poetry version --short)
  poetry version prerelease
  VERSION=$(poetry version --short)
  git commit -a -m "Next prerelease number: ${OLD_VERSION} → ${VERSION}"
  git push
  ```

- Begin the next prerelease phase (e.g., `0.1.0a1` → `0.1.0b0`):

  ```shell
  OLD_VERSION=$(poetry version --short)
  poetry version prerelease --next-phase
  VERSION=$(poetry version --short)
  git commit -a -m "Next prerelease phase: ${OLD_VERSION} → ${VERSION}"
  git push
  ```

  The phases progress as alpha (`a`), beta (`b`), and release candidate (`rc`), each time resetting to a prerelease number of 0. After `rc`, the prerelease suffix (e.g., `rc3`) is stripped, leaving the `major.minor.patch` version.

- Release the next minor version (e.g., `0.1.0b1` → `0.1.0`):

  ```shell
  OLD_VERSION=$(poetry version --short)
  poetry version minor
  VERSION=$(poetry version --short)
  git commit -a -m "New minor release: ${OLD_VERSION} → ${VERSION}"
  git push
  ```
- Release the next major version (e.g., `0.1.0a0` → `1.0.0`):

  ```shell
  OLD_VERSION=$(poetry version --short)
  poetry version major
  VERSION=$(poetry version --short)
  git commit -a -m "New major release: ${OLD_VERSION} → ${VERSION}"
  git push
  ```
- Tag the current version:

  ```shell
  VERSION=$(poetry version --short)
  git tag -a -m "rlai v${VERSION}" "v${VERSION}"
  git push --follow-tags
  ```
- Begin the next minor prerelease (e.g., `0.1.0` → `0.2.0a0`):

  ```shell
  OLD_VERSION=$(poetry version --short)
  poetry version preminor
  VERSION=$(poetry version --short)
  git commit -a -m "Next minor prerelease: ${OLD_VERSION} → ${VERSION}"
  git push
  ```