RL Reach Documentation
RL Reach is a toolbox for running reproducible reinforcement learning experiments applied to the reaching task with a robot manipulator. The training Gym environments are adapted from the Replab project. The training scripts and RL algorithms are based on the Stable Baselines 3 implementation and its Zoo of trained agents.
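The training environments mentioned above follow the standard Gym interface (`reset`/`step`). As an illustrative sketch only, not rl_reach code, a minimal Gym-style 1-D reaching environment with a dense negative-distance reward might look like:

```python
import random


class ToyReachEnv:
    """Minimal Gym-style 1-D reaching task (hypothetical example, not rl_reach code).

    The agent moves an end-effector along a line toward a fixed goal;
    the reward is the negative distance to the goal, a common choice
    for reaching benchmarks.
    """

    def __init__(self, goal=0.5, max_steps=50):
        self.goal = goal
        self.max_steps = max_steps

    def reset(self):
        # Start the end-effector at a random position on the line.
        self.pos = random.uniform(-1.0, 1.0)
        self.steps = 0
        return self._obs()

    def step(self, action):
        # Clip the action to a small displacement, then move.
        action = max(-0.1, min(0.1, action))
        self.pos += action
        self.steps += 1
        reward = -abs(self.goal - self.pos)  # dense reward: negative distance to goal
        done = self.steps >= self.max_steps or abs(self.goal - self.pos) < 0.01
        return self._obs(), reward, done, {}

    def _obs(self):
        # Observation: current position and goal position.
        return (self.pos, self.goal)


# Roll out one episode with a trivial "move toward the goal" policy.
env = ToyReachEnv()
obs = env.reset()
done = False
while not done:
    pos, goal = obs
    obs, reward, done, info = env.step(0.1 if goal > pos else -0.1)
```

The real rl_reach environments expose richer observations and robot dynamics, but the same `reset`/`step` loop is what the Stable Baselines 3 training scripts drive.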
Useful links
Github repository: https://github.com/PierreExeter/rl_reach
CodeOcean capsule: https://codeocean.com/capsule/4112840/tree/
Software Impacts publication: https://www.sciencedirect.com/science/article/pii/S2665963821000099
ArXiv ePrint: https://arxiv.org/abs/2102.04916
Travis builds: https://travis-ci.com/github/PierreExeter/rl_reach
DockerHub CPU image: https://hub.docker.com/r/rlreach/rlreach-cpu
DockerHub GPU image: https://hub.docker.com/r/rlreach/rlreach-gpu
Citation
Please cite this work as follows:
@article{aumjaud2021a,
  author        = {Aumjaud, Pierre and McAuliffe, David and Rodriguez-Lera, Francisco J and Cardiff, Philip},
  journal       = {Software Impacts},
  pages         = {100061},
  volume        = {8},
  title         = {{rl{\_}reach: Reproducible reinforcement learning experiments for robotic reaching tasks}},
  archivePrefix = {arXiv},
  arxivId       = {2102.04916},
  doi           = {10.1016/j.simpa.2021.100061},
  year          = {2021}
}