
ModulaRL

🚧 This library is still under construction. 🚧


ModulaRL is a highly modular and extensible reinforcement learning library built on PyTorch. It aims to provide researchers and developers with a flexible framework for implementing, experimenting with, and extending various RL algorithms.

Features

  • Modular architecture allowing easy component swapping and extension
  • Efficient implementations leveraging PyTorch's capabilities
  • Integration with TorchRL for optimized replay buffers (see the sketch after this list)
  • Clear documentation and examples for a quick start
  • Designed for both research and practical applications in reinforcement learning
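
Because the library builds on TorchRL's replay buffers, the snippet below is a minimal sketch of what that storage layer looks like on its own. It uses only public TorchRL/TensorDict APIs (`ReplayBuffer`, `LazyTensorStorage`, `TensorDict`); it is not ModulaRL's own wrapper code, and the field names in the transition dict are illustrative placeholders.

```python
import torch
from tensordict import TensorDict
from torchrl.data import LazyTensorStorage, ReplayBuffer

# A TorchRL replay buffer backed by lazily allocated tensor storage.
buffer = ReplayBuffer(storage=LazyTensorStorage(max_size=100_000), batch_size=256)

# Store a batch of 32 transitions as a TensorDict (field names are placeholders).
transitions = TensorDict(
    {
        "observation": torch.randn(32, 8),
        "action": torch.randn(32, 2),
        "reward": torch.randn(32, 1),
        "next_observation": torch.randn(32, 8),
        "done": torch.zeros(32, 1, dtype=torch.bool),
    },
    batch_size=[32],
)
buffer.extend(transitions)

# Sample a training batch using the default batch_size set at construction.
batch = buffer.sample()
```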

TODO

  • Add new algorithms
  • Add exploration modules
  • Add experiment wrapper modules

Installation

pip install modularl
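
To confirm the install, importing the package from a Python shell should succeed:

```python
# Sanity check: the package should import cleanly after `pip install modularl`.
import modularl

print(modularl)  # prints the module object and the path it was loaded from
```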

Algorithms Implemented

| Algorithm | Type | Paper | Continuous Action | Discrete Action |
|---|---|---|---|---|
| SAC (Soft Actor-Critic) | Off-policy | Haarnoja et al. 2018 | ✓ | Not implemented yet |
| TD3 (Twin Delayed DDPG) | Off-policy | Fujimoto et al. 2018 | ✓ | Not implemented yet |
| DDPG (Deep Deterministic Policy Gradient) | Off-policy | Lillicrap et al. 2015 | ✓ | Not implemented yet |
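
As background for the table above, the sketch below shows the soft Bellman target that SAC-style critics regress toward (Haarnoja et al. 2018). It is a generic PyTorch illustration of the algorithm, not code taken from ModulaRL, and the argument names are hypothetical.

```python
import torch

def soft_q_target(reward, done, next_q1, next_q2, next_log_prob, alpha=0.2, gamma=0.99):
    """Soft Bellman target used by SAC (Haarnoja et al. 2018).

    Takes the minimum of two target critics (clipped double-Q) and adds the
    entropy bonus -alpha * log pi(a'|s') before discounting.
    """
    next_q = torch.min(next_q1, next_q2) - alpha * next_log_prob
    return reward + gamma * (1.0 - done.float()) * next_q
```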

Citation

@software{modularl2024,
  author = {zakaria narjis},
  title = {ModulaRL: A Modular Reinforcement Learning Library},
  year = {2024},
  url = {https://github.com/zakaria-narjis/modularl}
}