
Language: English | 简体中文

MotrixLab


MotrixLab is a reinforcement learning framework built on the MotrixSim simulation engine, designed for robot simulation and training. It provides a complete RL development platform that integrates multiple simulation environments and training frameworks.

Project Overview

The project is divided into two core components:

  • motrix_envs: Various RL simulation environments built on MotrixSim, each defining observations, actions, and rewards. Framework-agnostic; currently supports MotrixSim's CPU backend
  • motrix_rl: Integrates RL frameworks and trains on the environments defined in motrix_envs. Currently supports the PPO algorithm from the SKRL framework (see the sketch below)
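
As a rough illustration of how the two packages divide responsibilities, the sketch below pairs a motrix_envs environment with a motrix_rl trainer. The factory and trainer names are hypothetical placeholders, not the actual MotrixLab API; see the documentation below for the real entry points.

```python
# Hypothetical sketch of the motrix_envs / motrix_rl split.
# `make_env` and `PPOTrainer` are illustrative names, NOT the real MotrixLab API.
from motrix_envs import make_env            # env side: observations, actions, rewards
from motrix_rl import PPOTrainer            # RL side: SKRL-based PPO training

env = make_env("cartpole")                  # framework-agnostic env on the CPU backend
trainer = PPOTrainer(env, backend="torch")  # or backend="jax", depending on install
trainer.train(timesteps=100_000)
```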

Documentation: https://motrixlab.readthedocs.io

Key Features

  • Unified Interface: Provides a concise and unified reinforcement learning training and evaluation interface
  • Multi-backend Support: Supports JAX and PyTorch training backends, with flexible selection based on hardware environment
  • Rich Environments: Includes various robot simulation environments such as basic control, locomotion, and manipulation tasks
  • High-performance Simulation: Built on MotrixSim's high-performance physics simulation engine
  • Visual Training: Supports real-time rendering and training process visualization

🚀 Quick Start

The following examples use the Python project management tool uv. Please install it before starting.
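
If uv is not yet installed, one option is to install it via pip (the uv documentation also provides standalone installers):

pip install uv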

Clone Repository

git clone https://github.com/Motphys/MotrixLab

cd MotrixLab

git lfs pull
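
Note: the last step requires Git LFS to be installed (see https://git-lfs.com).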

Install Dependencies

Install all dependencies:

uv sync --all-packages --all-extras

The SKRL framework supports JAX (Flax) or PyTorch as its training backend. You can also install only one backend, depending on your hardware environment:

Install JAX as the training backend (Linux only):

uv sync --all-packages --extra skrl-jax

Install PyTorch as the training backend:

uv sync --all-packages --extra skrl-torch
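
To verify which backend ended up in the environment, the short script below (plain Python, no MotrixLab-specific APIs; the file name is just an example) reports what is importable:

```python
# backend_check.py - report which training backends are importable.
# Run with: uv run python backend_check.py
try:
    import torch
    print(f"PyTorch {torch.__version__} (CUDA available: {torch.cuda.is_available()})")
except ImportError:
    print("PyTorch backend not installed")

try:
    import jax
    print(f"JAX {jax.__version__} (devices: {jax.devices()})")
except ImportError:
    print("JAX backend not installed")
```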

🎯 Usage Guide

Environment Visualization

Preview an environment without running any training:

uv run scripts/view.py --env cartpole

Model Training

uv run scripts/train.py --env cartpole

Training results are saved in the runs/{env-name}/ directory.

View training data through TensorBoard:

uv run tensorboard --logdir runs/{env-name}
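
By default, TensorBoard serves its dashboard at http://localhost:6006.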

Model Inference

uv run scripts/play.py --env cartpole

For more usage examples, please refer to the User Documentation.

📬 Contact

Have questions or suggestions? Feel free to contact us.
