MotrixLab is a reinforcement learning framework built on the MotrixSim simulation engine and designed for robot simulation and training. It provides a complete reinforcement learning development platform that integrates multiple simulation environments and training frameworks.
The project is divided into two core components:
- motrix_envs: RL simulation environments built on MotrixSim, defining observations, actions, and rewards (a generic example of this loop follows the list). The environments are framework-agnostic and currently support MotrixSim's CPU backend.
- motrix_rl: Integrates RL frameworks and trains on the environments defined in motrix_envs. It currently supports the PPO algorithm from the SKRL framework.
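To make the environment side concrete, the snippet below shows the generic observation/action/reward loop that RL environments expose. It uses Gymnasium's CartPole purely as an illustration; it is not the MotrixLab API, whose actual interface is covered in the documentation linked below.

```python
# Illustrative only: the generic observation/action/reward loop, shown here with
# Gymnasium's CartPole. MotrixLab environments define the same three ingredients
# (observations, actions, rewards) through their own API.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
total_reward = 0.0
for _ in range(500):
    action = env.action_space.sample()  # a trained policy would choose this
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:
        break
print(f"Return of one episode under a random policy: {total_reward:.1f}")
env.close()
```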
Documentation: https://motrixlab.readthedocs.io
- Unified Interface: Provides a concise and unified reinforcement learning training and evaluation interface
- Multi-backend Support: Supports JAX and PyTorch training backends, with flexible selection based on hardware environment
- Rich Environments: Includes various robot simulation environments such as basic control, locomotion, and manipulation tasks
- High-performance Simulation: Built on MotrixSim's high-performance physics simulation engine
- Visual Training: Supports real-time rendering and training process visualization
The following examples use the Python project management tool uv. Please install it before starting.
```bash
git clone https://github.com/Motphys/MotrixLab
cd MotrixLab
git lfs pull
```

Install all dependencies:

```bash
uv sync --all-packages --all-extras
```

The SKRL framework supports JAX (Flax) or PyTorch as its training backend. You can also install only one backend, depending on your hardware environment (the snippet after the install commands shows a quick way to check which backends are available):
Install JAX as the training backend (Linux only):

```bash
uv sync --all-packages --extra skrl-jax
```

Install PyTorch as the training backend:

```bash
uv sync --all-packages --extra skrl-torch
```
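If you are unsure which backend your machine can use, a quick check like the one below can help. This is a small sketch, not part of MotrixLab itself; it only probes whether PyTorch and/or JAX are importable and what devices they see.

```python
# Quick, optional check of which training backend is available on this machine.
# Both imports are guarded because only one backend may be installed
# (the skrl-jax or skrl-torch extras above).
try:
    import torch
    print("PyTorch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch backend not installed")

try:
    import jax
    print("JAX", jax.__version__, "| devices:", jax.devices())
except ImportError:
    print("JAX backend not installed")
```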
View environments without executing training:

```bash
uv run scripts/view.py --env cartpole
```

Run training:

```bash
uv run scripts/train.py --env cartpole
```

Training results are saved in the runs/{env-name}/ directory.
View training data through TensorBoard:
```bash
uv run tensorboard --logdir runs/{env-name}
```

Play a trained policy:

```bash
uv run scripts/play.py --env cartpole
```

For more usage methods, please refer to the User Documentation.
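As an alternative to the TensorBoard UI, the event files written under runs/ can also be read programmatically with TensorBoard's Python API. The sketch below assumes standard event files sit directly under runs/cartpole/; the exact directory layout and scalar tag names depend on what the training run actually logs.

```python
# Minimal sketch: read logged scalars straight from TensorBoard event files.
# Assumption: event files live under runs/cartpole/ (adjust the path if your
# run writes into a timestamped subdirectory).
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

ea = EventAccumulator("runs/cartpole")
ea.Reload()

scalar_tags = ea.Tags()["scalars"]
print("Logged scalar tags:", scalar_tags)

if scalar_tags:
    tag = scalar_tags[0]  # pick the first logged scalar as an example
    for event in ea.Scalars(tag)[:5]:
        print(f"step={event.step:>8}  {tag}={event.value:.3f}")
```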
Have questions or suggestions? Feel free to contact us through:
- GitHub Issues: Submit Issues
- Discussions: Join Discussion