BricksRL allows the training of custom LEGO robots using deep reinforcement learning. By integrating Pybricks and TorchRL, it facilitates efficient real-world training via Bluetooth communication between LEGO hubs and a local computing device. Check out our paper!
For additional information and building instructions for the robots, view the project page BricksRL.
- Go to `chrome://flags/`
- Enable "Experimental Web Platform features"
- Restart Chrome
- Use beta.pybricks.com to edit and upload the client scripts for each environment
1. Create a Conda environment:

   ```bash
   conda create --name bricksrl python=3.8
   ```

2. Activate the environment:

   ```bash
   conda activate bricksrl
   ```

3. Install PyTorch:

   ```bash
   pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
   ```

4. Install bricksrl and additional packages. For regular users, install the package and all required dependencies by running:

   ```bash
   pip install -e .
   ```

   This installs the bricksrl package along with the dependencies listed in setup.py.

5. (Optional) Install development tools. If you are a developer and need development tools (e.g., pytest, ufmt, pre-commit), install them as extras:

   ```bash
   pip install -e .[dev]
   ```

   This installs the development dependencies defined in setup.py along with the package.
Update your client script on the Pybricks Hub whenever you want to run a new environment with your robot.
Before running experiments, please review and modify the configuration settings according to your needs. Each environment and agent setup has its own configuration file under the configs/ directory. For more information, check out the config README.
Robots utilized for our experiments. Building instructions can be found here.
| 2Wheeler | Walker | RoboArm |
| --- | --- | --- |
Run training:

```bash
python experiments/walker/train.py
```

Evaluate the trained policy:

```bash
python experiments/walker/eval.py
```
Using precollected datasets, we can pretrain agents with offline RL to perform a task without the need for real-world interaction. Such pretrained policies can be evaluated directly, or fine-tuned afterwards on the real robot.
The datasets can be downloaded from Hugging Face and contain expert and random transitions for the 2Wheeler (RunAway-v0 and Spinning-v0), Walker (Walker-v0), and RoboArm (RoboArm-v0) robots.
```bash
git lfs install
git clone git@hf.co:datasets/compsciencelab/BricksRL-Datasets
```
The datasets consist of TensorDicts containing expert and random transitions, which can be directly loaded into the replay buffer. When initiating (pre-)training, simply provide the path to the desired TensorDict when prompted to load the replay buffer.
Running an offline-training experiment is similar to online training, except that you run the pretrain.py script:

```bash
python experiments/walker/pretrain.py
```

Trained policies can then be evaluated as before with:

```bash
python experiments/walker/eval.py
```

Or run training to fine-tune the policy on the real robot:

```bash
python experiments/walker/train.py
```
Examples of using BricksRL environments with typical training scripts from TorchRL's sota-implementations can be found here.
We also provide a template to create your own custom BricksRL environment, which can subsequently be used directly in the TorchRL examples.
For more information, see the examples README.
In the example notebook we provide high-level training examples that train a SAC agent in the RoboArmSim-v0 environment and a TD3 agent in the WalkerSim-v0 environment. The examples are based on the experiments in our paper. Standalone examples similar to the TorchRL sota-implementations can be found here.
If you use BricksRL in your work, please refer to this BibTeX entry to cite it:
```bibtex
@article{dittert2024bricksrl,
  title={BricksRL: A Platform for Democratizing Robotics and Reinforcement Learning Research and Education with LEGO},
  author={Sebastian Dittert and Vincent Moens and Gianni De Fabritiis},
  journal={arXiv preprint arXiv:2406.17490},
  year={2024}
}
```