Supported Models

A list of models verified to work with REE.

Verified-Compatible Models

The following models have been verified to work with REE. Any Hugging Face model compatible with the system may work, but this list represents models that have been explicitly tested.

If you have a specific model in mind that doesn't work with REE, you can reach out to the Gensyn team by creating an issue in the GitHub repository and we'll do our best to support it.

Qwen

| Model | Parameters |
| --- | --- |
| Qwen/Qwen3-32B | 32B |
| Qwen/Qwen3-8B | 8B |
| Qwen/Qwen3-4B | 4B |
| Qwen/Qwen3-1.7B | 1.7B |
| Qwen/Qwen3-0.6B | 0.6B |
| Qwen/Qwen2.5-32B-Instruct | 32B |
| Qwen/Qwen2.5-14B-Instruct | 14B |
| Qwen/Qwen2.5-7B-Instruct | 7B |
| Qwen/Qwen2.5-7B | 7B |
| Qwen/Qwen2.5-3B-Instruct | 3B |
| Qwen/Qwen2.5-0.5B-Instruct | 0.5B |
| Qwen/Qwen2.5-0.5B | 0.5B |
| Qwen/Qwen2.5-Coder-7B-Instruct | 7B |
| Qwen/Qwen2.5-Coder-0.5B-Instruct | 0.5B |
| Qwen/Qwen2-1.5B-Instruct | 1.5B |

Meta Llama

| Model | Parameters |
| --- | --- |
| meta-llama/Llama-3.1-8B-Instruct | 8B |
| meta-llama/Llama-3.1-8B | 8B |
| meta-llama/Meta-Llama-3-8B | 8B |
| meta-llama/Meta-Llama-3-8B-Instruct | 8B |
| meta-llama/Llama-3.2-3B-Instruct | 3B |
| meta-llama/Llama-3.2-1B-Instruct | 1B |
| meta-llama/Llama-3.2-1B | 1B |

DeepSeek

| Model | Parameters |
| --- | --- |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | 32B |

Mistral

| Model | Parameters |
| --- | --- |
| mistralai/Mistral-7B-Instruct-v0.2 | 7B |

Code Models

| Model | Parameters |
| --- | --- |
| codellama/CodeLlama-7b-hf | 7B |
| bigcode/starcoder2-3b | 3B |
| Qwen/Qwen2.5-Coder-7B-Instruct | 7B |
| Qwen/Qwen2.5-Coder-0.5B-Instruct | 0.5B |

Other Models

| Model | Provider | Parameters |
| --- | --- | --- |
| 01-ai/Yi-1.5-6B-Chat | 01.AI | 6B |
| llm-jp/llm-jp-3-3.7b-instruct | LLM-JP | 3.7B |
| TinyLlama/TinyLlama-1.1B-Chat-v1.0 | TinyLlama | 1.1B |
| HuggingFaceTB/SmolLM-1.7B-Instruct | Hugging Face | 1.7B |
| allenai/OLMo-1B-hf | Allen AI | 1B |
| facebook/opt-125m | Meta | 125M |
| stabilityai/stablelm-2-1_6b | Stability AI | 1.6B |

Using an Unlisted Model

REE is not limited to the models above. Any Hugging Face model that is compatible with the ONNX export pipeline may work. To try an unlisted model, simply enter its Hugging Face model ID in the Model Name field in the TUI (e.g., organization/model-name) and run it.
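The TUI expects the ID in the same `organization/model-name` form used throughout the tables above. As a minimal sketch, a quick local pre-check of an ID's shape before entering it might look like the following (the `looks_like_hf_model_id` helper is hypothetical and not part of REE or the Hugging Face libraries; it only validates the format, not whether the model exists or exports to ONNX):

```python
import re

# Hugging Face model IDs take the form "organization/model-name",
# e.g. "Qwen/Qwen3-4B". This pattern allows word characters, dots,
# and hyphens on each side of a single slash.
HF_MODEL_ID = re.compile(r"[\w.-]+/[\w.-]+")

def looks_like_hf_model_id(model_id: str) -> bool:
    """Return True if the string has the organization/model-name shape."""
    return HF_MODEL_ID.fullmatch(model_id) is not None

# Examples:
#   looks_like_hf_model_id("Qwen/Qwen3-4B")  -> True
#   looks_like_hf_model_id("Qwen3-4B")       -> False (missing organization)
```

Passing this check does not guarantee the model will run; it only catches malformed IDs before you start a run that would fail at download time.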