BricksRL


BricksRL enables training custom LEGO robots with deep reinforcement learning. By integrating Pybricks and TorchRL, it supports efficient real-world training via Bluetooth communication between LEGO hubs and a local computing device. Check out our paper!

For additional information and building instructions for the robots, see the BricksRL project page.

Prerequisites


Enable Web Bluetooth in Chrome

  1. Go to chrome://flags/
  2. Enable "Experimental Web Platform features"
  3. Restart Chrome
  4. Use beta.pybricks.com to edit and upload the client scripts for each environment (a rough sketch of such a script follows)
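
As an illustration of what such a client script looks like, here is a hedged sketch of a minimal Pybricks program that exchanges actions and observations with the host. It assumes an InventorHub with a single motor on Port A; the actual per-environment clients live in this repository and differ in detail.

    # Illustrative Pybricks client sketch (not one of the repo's actual
    # client scripts). Assumes an InventorHub with a motor on Port A.
    from pybricks.hubs import InventorHub
    from pybricks.pupdevices import Motor
    from pybricks.parameters import Port
    from pybricks.tools import wait
    from usys import stdin, stdout
    from ustruct import pack, unpack

    hub = InventorHub()
    motor = Motor(Port.A)

    while True:
        # Block until the host sends one big-endian float32 action.
        (action,) = unpack("!f", stdin.buffer.read(4))
        motor.dc(int(action * 100))  # apply the action as a duty cycle
        wait(50)                     # give the action time to take effect
        # Return the current motor angle (in turns) as the observation.
        stdout.buffer.write(pack("!f", motor.angle() / 360.0))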

Environment Setup

  1. Create a Conda environment:

    conda create --name bricksrl python=3.8
  2. Activate the environment:

    conda activate bricksrl
  3. Install PyTorch:

    pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
    
  4. Install bricksrl and additional packages. For regular users, install the package and all required dependencies by running:

    pip install -e .

    This will install the bricksrl package along with the dependencies listed in setup.py.

  5. (Optional) Install development tools:

    If you are a developer and need to install development tools (e.g., pytest, ufmt, pre-commit), use the following command to install them as extras:

    pip install -e .[dev]

    This will install the development dependencies defined in the setup.py file along with the package.
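
After installation, a quick sanity check run inside the activated environment confirms that everything imports correctly; the bricksrl import name below is assumed to match the package installed above:

    # Post-install sanity check (assumes the package imports as `bricksrl`).
    import torch
    import bricksrl  # installed via `pip install -e .`

    print("torch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())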

Usage

Client

Update your client script on the Pybricks Hub whenever you want to run a new environment with your robot.

Config

Before running experiments, please review and adjust the configuration settings to your needs. Each environment and agent setup has its own configuration file under the configs/ directory. For more information, check out the config README.
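
As a quick illustration, a config can be loaded and inspected in Python before launching an experiment. The filename below is a guess, not the repo's actual layout; check the configs/ directory and its README for the real structure:

    # Peek at an experiment config before training (hypothetical path).
    import yaml

    with open("configs/walker/train.yaml") as f:
        cfg = yaml.safe_load(f)

    print(cfg)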

Robots

Robots utilized for our experiments. Building instructions can be found here.

Robot images: 2Wheeler · Walker · RoboArm

Run Experiments

Train an Agent

python experiments/walker/train.py

Evaluate an Agent

python experiments/walker/eval.py

Results


Evaluation videos of the trained agents can be found here.

Result plots: 2Wheeler, Walker, and RoboArm (including RoboArm mixed-task results).

Offline RL


Using precollected datasets, we can pretrain agents with offline RL to perform a task without any real-world interaction. Such pretrained policies can be evaluated directly or fine-tuned on the real robot in a subsequent training run.

Datasets

The datasets can be downloaded from Hugging Face and contain expert and random transitions for the 2Wheeler (RunAway-v0 and Spinning-v0), Walker (Walker-v0), and RoboArm (RoboArm-v0) robots.

   git lfs install
   git clone git@hf.co:datasets/compsciencelab/BricksRL-Datasets

The datasets consist of TensorDicts containing expert and random transitions, which can be directly loaded into the replay buffer. When initiating (pre-)training, simply provide the path to the desired TensorDict when prompted to load the replay buffer.
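
As a sketch of what that loading step amounts to, the snippet below pulls a saved TensorDict into a TorchRL replay buffer. The file path is illustrative, not an actual dataset filename:

    # Load a precollected TensorDict into a TorchRL replay buffer.
    import torch
    from torchrl.data import LazyTensorStorage, ReplayBuffer

    # Hypothetical path; use any TensorDict file from BricksRL-Datasets.
    transitions = torch.load("BricksRL-Datasets/walker/expert.td")

    buffer = ReplayBuffer(storage=LazyTensorStorage(max_size=transitions.shape[0]))
    buffer.extend(transitions)  # TensorDict transitions load directly
    batch = buffer.sample(32)   # sample a training batch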

Pretrain an Agent

Running an offline-training experiment is similar to online training, except that you run the pretrain.py script:

python experiments/walker/pretrain.py

Trained policies can then be evaluated as before with:

python experiments/walker/eval.py

Or run training for fine-tuning the policy on the real robot:

python experiments/walker/train.py

Examples

TorchRL and Custom Environment Examples

Examples of using BricksRL environments with typical training scripts from TorchRL's sota-implementations can be found here.

We also provide a template to create your own custom BricksRL environment, which can then be used directly in the TorchRL examples.

For more information, see the examples README.
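
To give a sense of what such a template involves, here is a minimal sketch of a TorchRL EnvBase subclass with placeholder dynamics. This is not the BricksRL template itself, and spec class names vary across TorchRL versions:

    # Minimal custom-environment sketch (illustrative, not the repo template).
    import torch
    from tensordict import TensorDict
    from torchrl.data import (
        BoundedTensorSpec,
        CompositeSpec,
        UnboundedContinuousTensorSpec,
    )
    from torchrl.envs import EnvBase

    class ToyBricksEnv(EnvBase):
        """Toy environment: 4-dim observation, 2-dim continuous action."""

        def __init__(self, device="cpu"):
            super().__init__(device=device, batch_size=torch.Size([]))
            self.observation_spec = CompositeSpec(
                observation=UnboundedContinuousTensorSpec(shape=(4,))
            )
            self.action_spec = BoundedTensorSpec(low=-1.0, high=1.0, shape=(2,))
            self.reward_spec = UnboundedContinuousTensorSpec(shape=(1,))

        def _reset(self, tensordict=None):
            done = torch.zeros(1, dtype=torch.bool)
            return TensorDict(
                {"observation": torch.zeros(4), "done": done, "terminated": done.clone()},
                batch_size=[],
            )

        def _step(self, tensordict):
            action = tensordict["action"]
            done = torch.zeros(1, dtype=torch.bool)
            return TensorDict(
                {
                    "observation": torch.randn(4),           # placeholder dynamics
                    "reward": -action.pow(2).sum().view(1),  # placeholder reward
                    "done": done,
                    "terminated": done.clone(),
                },
                batch_size=[],
            )

        def _set_seed(self, seed):
            torch.manual_seed(seed)

TorchRL's check_env_specs utility (torchrl.envs.utils) can be used to validate such an environment before training.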

High-Level Examples

In the example notebook we provide high-level training examples that train a SAC agent in the RoboArmSim-v0 environment and a TD3 agent in the WalkerSim-v0 environment. The examples are based on the experiments for our paper. Standalone examples similar to the TorchRL sota-implementations can be found here.
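
As a rough illustration of what such a setup involves, the sketch below wires a TorchRL SAC loss together for made-up observation/action sizes; it follows TorchRL's documented API rather than the notebook's exact code:

    # Sketch: wiring up a SAC loss in TorchRL (sizes are illustrative).
    import torch
    from tensordict.nn import NormalParamExtractor, TensorDictModule
    from torchrl.data import BoundedTensorSpec
    from torchrl.modules import MLP, ProbabilisticActor, TanhNormal, ValueOperator
    from torchrl.objectives import SACLoss

    obs_dim, act_dim = 7, 3  # placeholder dimensions

    # Actor: maps observations to the parameters of a TanhNormal policy.
    actor_net = torch.nn.Sequential(
        MLP(in_features=obs_dim, out_features=2 * act_dim, num_cells=[64, 64]),
        NormalParamExtractor(),
    )
    actor = ProbabilisticActor(
        TensorDictModule(actor_net, in_keys=["observation"], out_keys=["loc", "scale"]),
        in_keys=["loc", "scale"],
        spec=BoundedTensorSpec(low=-1.0, high=1.0, shape=(act_dim,)),
        distribution_class=TanhNormal,
    )

    # Critic: Q-network over the concatenated observation and action.
    qvalue = ValueOperator(
        MLP(in_features=obs_dim + act_dim, out_features=1, num_cells=[64, 64]),
        in_keys=["observation", "action"],
    )

    loss_module = SACLoss(actor_network=actor, qvalue_network=qvalue)

Batches sampled from a replay buffer can then be passed to loss_module to obtain the actor and critic losses.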

Citation

If you use BricksRL in your work, please refer to this BibTeX entry to cite it:

@article{dittert2024bricksrl,
  title={BricksRL: A Platform for Democratizing Robotics and Reinforcement Learning Research and Education with LEGO},
  author={Sebastian Dittert and Vincent Moens and Gianni De Fabritiis},
  journal={arXiv preprint arXiv:2406.17490},
  year={2024}
}
