Linux
Spin up your node and participate in the swarm in a Linux (Ubuntu 22.04+) environment.
Overview
This guide walks you through setting up RL Swarm on a Linux machine.
Linux provides the most stable and performant environment for RL Swarm, especially for users running NVIDIA GPUs. You can run RL Swarm via Docker for simplicity or directly through Python for more advanced experimentation.
Prerequisites
Make sure your system meets the minimum requirements below and that any additional dependencies are installed; a quick command-line check follows the list.
- Ubuntu 22.04+ 
- A 64-bit arm64 or x86 CPU with at least 32 GB RAM, or an officially supported NVIDIA GPU (3090, 4090, 5090, A100, H100) 
- Python 3.10+ 
- Docker installed and configured 
- Stable internet connection 
- Git installed 
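Before continuing, you can quickly sanity-check these prerequisites from a terminal (the GPU check only applies if you have an NVIDIA card installed):
# Confirm tool versions
python3 --version
git --version
docker --version
# Confirm available RAM (look for at least 32 GB total)
free -h
# If you plan to run in GPU mode, confirm the NVIDIA driver sees your card
nvidia-smi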
Installing Dependencies
First, update your package lists and install all required dependencies:
sudo apt update
sudo apt install -y python3 python3-venv python3-pip curl wget git docker.io build-essential
Next, start and enable the Docker service so it launches automatically on boot using the following commands:
sudo systemctl enable docker
sudo systemctl start docker
You can verify Docker is running with sudo docker info.
If this command returns information about the Docker daemon, it's running successfully.
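Optionally, if you prefer to run Docker commands without sudo, you can add your user to the docker group. This is a standard Docker setup step rather than anything specific to RL Swarm, and it requires logging out and back in (or starting a new login shell) before it takes effect:
sudo usermod -aG docker $USER
# Apply the new group in the current shell, or log out and back in
newgrp docker
docker info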
Configuring Docker
If you are using Docker Desktop, ensure that enough memory is allocated to containers. You can do this by going to Settings > Resources > Advanced > Memory Limit and setting the memory limit to the highest available value.
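If you want to confirm how much memory the Docker daemon reports as available (on Docker Engine without Docker Desktop this is simply your host RAM), you can check the Total Memory line of docker info:
sudo docker info | grep -i 'total memory'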
To check if you can run containers, run the following command to print a 'hello' message. If you see a success message, your installation is good to go.
sudo docker run hello-world
Clone the RL Swarm Repository
- Navigate to your home directory and clone the RL Swarm GitHub repository using this command: 
git clone https://github.com/gensyn-ai/rl-swarm.git
- Then move into the project folder: 
cd rl-swarm
Run RL Swarm
Depending on your hardware, you can run RL Swarm in either CPU or GPU mode.
For CPU-only setup:
docker compose run --rm --build -Pit swarm-cpu
For GPU-enabled setup (officially supported on NVIDIA devices):
docker compose run --rm --build -Pit swarm-gpu
GPU mode requires NVIDIA drivers and the CUDA toolkit to be properly installed.

If you encounter an error saying “docker-compose: command not found”, use “docker compose” (without the hyphen) instead.
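Before launching in GPU mode, it can help to confirm that containers can actually see your GPU. The host-side check below uses nvidia-smi; the container-side check assumes the NVIDIA Container Toolkit is installed, and the CUDA base image tag is only an example:
# On the host: the driver should report your GPU and CUDA version
nvidia-smi
# Inside a container: the same GPU should be listed
sudo docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi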
Log into RL Swarm
When you start RL Swarm, it will open a browser window automatically pointing to http://localhost:3000.
You will see the RL Swarm login screen powered by Alchemy. From here, you can log in using your preferred method such as Google or email.
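If you are running RL Swarm on a remote or headless Linux server, no browser will open locally. One common workaround, assuming you connect over SSH, is to forward port 3000 to your own machine and then open http://localhost:3000 in your local browser (replace the user and host with your own):
ssh -L 3000:localhost:3000 user@your-server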

After login, a swarm.pem file will be created in your repository folder. This identifies your peer on the Gensyn Testnet.
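Because swarm.pem identifies your peer, consider keeping a copy outside the repository so you can restore the same identity later. A minimal sketch, with the backup path as a placeholder of your choosing:
cp swarm.pem ~/swarm.pem.backup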
Hugging Face
If you would like to upload your model to Hugging Face, enter your Hugging Face access token when prompted. You can generate one from your Hugging Face account, under Access Tokens.
Verify your Node
Once logged in, your node will begin training automatically.
You can verify that your peer has successfully connected by visiting the Gensyn Testnet Dashboard. Your peer should appear in the active swarm list, and you can monitor training progress in real time.
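You can also confirm locally that the container is still running. The exact container name is generated by Docker Compose, so the filter below is only an example that matches the swarm-cpu and swarm-gpu service names used above:
sudo docker ps --filter 'name=swarm'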

Optional: Experimental Mode (No Docker)
If you want to experiment with the GenRL library or the configurable parameters, we recommend you run RL Swarm via shell script:
python3 -m venv .venv
source .venv/bin/activate
./run_rl_swarm.sh
To learn more about experimental mode, check out our getting started guide on GitHub.
Troubleshooting
Refer to the multi-platform RL Swarm Troubleshooting guide for fixes and workarounds for common setup issues.
If you need additional support, you can open a ticket or visit our Discord.