# Supported Models

## Verified-Compatible Models

The following models have been verified to work with REE. Any Hugging Face model compatible with the system *may* work, but this list represents models that have been explicitly tested.

If there's a specific model you need that doesn't work with REE, open an issue in the [GitHub repository](https://github.com/gensyn-ai/ree/issues) and we'll do our best to support it.

{% hint style="info" %}
Models above \~32B parameters typically require pipeline parallelism to run.

Use the `--n-partitions` flag to split the model across multiple GPUs. See [Pipeline Parallelism](https://docs.gensyn.ai/tech/ree/advanced-usage#pipeline-parallelism) for details.
{% endhint %}
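
For example, splitting a 72B model across four partitions might look like the sketch below. The `ree` entry point shown here is an assumption for illustration only; see the Pipeline Parallelism page for the exact launcher and its other required arguments.

```
# Illustrative only — the actual command name and required arguments are
# documented under Pipeline Parallelism; only --n-partitions is from this page.
ree --model Qwen/Qwen2.5-72B-Instruct --n-partitions 4
```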

### Qwen

| Model                              | Parameters |
| ---------------------------------- | ---------- |
| `Qwen/Qwen2.5-72B-Instruct`        | 72B        |
| `Qwen/Qwen3-32B`                   | 32B        |
| `Qwen/Qwen3-8B`                    | 8B         |
| `Qwen/Qwen3-4B`                    | 4B         |
| `Qwen/Qwen3-1.7B`                  | 1.7B       |
| `Qwen/Qwen3-0.6B`                  | 0.6B       |
| `Qwen/Qwen2.5-32B-Instruct`        | 32B        |
| `Qwen/Qwen2.5-14B-Instruct`        | 14B        |
| `Qwen/Qwen2.5-7B-Instruct`         | 7B         |
| `Qwen/Qwen2.5-7B`                  | 7B         |
| `Qwen/Qwen2.5-3B-Instruct`         | 3B         |
| `Qwen/Qwen2.5-0.5B-Instruct`       | 0.5B       |
| `Qwen/Qwen2.5-0.5B`                | 0.5B       |
| `Qwen/Qwen2.5-Coder-7B-Instruct`   | 7B         |
| `Qwen/Qwen2.5-Coder-0.5B-Instruct` | 0.5B       |
| `Qwen/Qwen2-1.5B-Instruct`         | 1.5B       |

### Meta Llama

| Model                                 | Parameters |
| ------------------------------------- | ---------- |
| `meta-llama/Llama-3.1-8B-Instruct`    | 8B         |
| `meta-llama/Llama-3.1-8B`             | 8B         |
| `meta-llama/Meta-Llama-3-8B`          | 8B         |
| `meta-llama/Meta-Llama-3-8B-Instruct` | 8B         |
| `meta-llama/Llama-3.2-3B-Instruct`    | 3B         |
| `meta-llama/Llama-3.2-1B-Instruct`    | 1B         |
| `meta-llama/Llama-3.2-1B`             | 1B         |
| `meta-llama/Llama-3.1-70B-Instruct`   | 70B        |
| `meta-llama/Llama-3.3-70B-Instruct`   | 70B        |

### DeepSeek

| Model                                      | Parameters |
| ------------------------------------------ | ---------- |
| `deepseek-ai/DeepSeek-R1-Distill-Qwen-32B` | 32B        |

### Mistral

| Model                                | Parameters |
| ------------------------------------ | ---------- |
| `mistralai/Mistral-7B-Instruct-v0.2` | 7B         |

### Code Models

| Model                              | Parameters |
| ---------------------------------- | ---------- |
| `codellama/CodeLlama-7b-hf`        | 7B         |
| `bigcode/starcoder2-3b`            | 3B         |
| `Qwen/Qwen2.5-Coder-7B-Instruct`   | 7B         |
| `Qwen/Qwen2.5-Coder-0.5B-Instruct` | 0.5B       |

### Other Models

| Model                                | Provider     | Parameters |
| ------------------------------------ | ------------ | ---------- |
| `01-ai/Yi-1.5-6B-Chat`               | 01.AI        | 6B         |
| `llm-jp/llm-jp-3-3.7b-instruct`      | LLM-JP       | 3.7B       |
| `TinyLlama/TinyLlama-1.1B-Chat-v1.0` | TinyLlama    | 1.1B       |
| `HuggingFaceTB/SmolLM-1.7B-Instruct` | Hugging Face | 1.7B       |
| `allenai/OLMo-1B-hf`                 | Allen AI     | 1B         |
| `facebook/opt-125m`                  | Meta         | 125M       |
| `stabilityai/stablelm-2-1_6b`        | Stability AI | 1.6B       |

### Using an Unlisted Model

REE is not limited to the models above. Any Hugging Face model that is compatible with the ONNX export pipeline may work. To try an unlisted model, simply enter its Hugging Face model ID in the [**Model Name** field in the TUI](https://docs.gensyn.ai/tech/ree/using-the-tui) (e.g., `organization/model-name`) and run it.
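
If you want a rough sanity check before trying an unlisted model, one option (not part of REE itself, and REE's own export pipeline may impose additional constraints) is to attempt a local ONNX export with Hugging Face Optimum. Models that fail to export this way are unlikely to work:

```
# Assumes the `optimum` package is installed: pip install optimum[exporters]
# This only checks general ONNX exportability, not REE-specific requirements.
optimum-cli export onnx --model organization/model-name ./onnx-export-check
```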


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.gensyn.ai/tech/ree/supported-models.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
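
For example, a question about this page can be sent with `curl` as a URL-encoded query string (the wording of the question is up to you):

```
curl "https://docs.gensyn.ai/tech/ree/supported-models.md?ask=Which%20models%20require%20pipeline%20parallelism%3F"
```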
